CN104185981A - Method and terminal for selecting an image from continuously captured images

Method and terminal for selecting an image from continuously captured images

Info

Publication number: CN104185981A
Application number: CN201380003176.6A
Authority: CN (China)
Prior art keywords: image, target area, area, parameters, features
Other languages: Chinese (zh)
Inventors: 魏代玉, 郑士胜, 齐家, 李俊霖
Current assignee: Huawei Device Co Ltd
Original assignee: Huawei Device Co Ltd
Application filed by: Huawei Device Co Ltd
Legal status: Pending

Classifications

    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635 Region indicators; Field of view indicators

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a method and a terminal for selecting an image from continuously captured images. The method comprises the following steps: continuously capturing N images; determining an image feature parameter of a target area of each of the N images, wherein the target area is the image region of each image that corresponds to a touch focusing area used when the N images were captured, and/or the target area comprises a human face region in each image; determining the image quality of each image according to the image feature parameter of the target area; and selecting M images from the N images according to the image quality of the N images. Because the human face region or the touch focusing area is determined as the target area of the continuously captured images, and the quality of the continuously captured images is determined according to the image feature parameters of the target area, images with better image quality can be selected from the continuously captured images according to their image quality.

Description

Method and terminal for selecting an image from continuously captured images
Technical field
Embodiments of the present invention relate to the field of image processing, and more specifically, to a method and a terminal for selecting an image from continuously captured images.
Background
When a terminal with a continuous shooting (burst) function captures images (for example, a mobile phone snapping a moving scene), multiple images (photos) are usually produced in one burst.
In the prior art, the terminal stores all the images produced by continuous shooting. However, the quality of the continuously captured images is uneven, and the user has to pick out the poor-quality images one by one and keep the images that he or she considers acceptable. Because this selection is based only on the user's preference, the quality of some of the retained images may still be relatively poor.
Summary of the invention
Embodiments of the present invention provide a method and a terminal for selecting an image from continuously captured images, so that the selected images have relatively good image quality.
According to a first aspect, an embodiment of the present invention provides a method for selecting an image from continuously captured images, including: continuously capturing N images; determining an image feature parameter of a target area of each of the N images, where the target area is an image region of each image that corresponds to a touch focusing area used when the N images are continuously captured, and/or the target area includes a human face region in each image, and the image feature parameter is used to indicate the image quality of the target area; determining the image quality of each image according to the image feature parameter of the target area; and selecting M images from the N images according to the image quality of the N images.
With reference to the first aspect, in an implementation of the first aspect, the method is performed by a terminal having a touch focusing function, the target area is the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured, and the continuously capturing N images includes: obtaining the touch focusing area selected by a user of the terminal; and continuously capturing the N images according to the touch focusing area.
With reference to the first aspect or any of the foregoing implementations of the first aspect, in another implementation of the first aspect, the target area includes the human face region in each image, the image feature parameter of the target area includes a facial parameter of a human face in the target area, where the facial parameter is used to indicate a position and/or an expression of the human face, and the determining an image feature parameter of a target area of each of the N images includes: extracting the facial parameter from the target area.
With reference to the first aspect or any of the foregoing implementations of the first aspect, in another implementation of the first aspect, the image feature parameter of the target area further includes at least one of the following parameters of the target area: sharpness, brightness, contrast, noise, and saturation, and the determining an image feature parameter of a target area of each of the N images includes: obtaining the at least one parameter from the target area.
With reference to the first aspect or any of the foregoing implementations of the first aspect, in another implementation of the first aspect, before the determining the image quality of each image according to the image feature parameter of the target area, the method further includes: determining an image feature parameter of a background area of each image, where the background area is the region of each image other than the target area, and the image feature parameter of the background area is used to indicate the image quality of the background area; and the determining the image quality of each image according to the image feature parameter of the target area includes: determining the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area.
With reference to the first aspect or any of the foregoing implementations of the first aspect, in another implementation of the first aspect, the image feature parameter of the background area includes the sharpness of the background area and/or an intruding-object indication parameter, where the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
With reference to the first aspect or any of the foregoing implementations of the first aspect, in another implementation of the first aspect, the facial parameter includes an eye-blink degree index and a smile degree index of the human face.
With reference to the first aspect or any of the foregoing implementations of the first aspect, in another implementation of the first aspect, the determining the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area includes: determining a score of each parameter according to the value of each parameter in the image feature parameters and a preset scoring rule; and determining a score of each image according to the score of each parameter and a preset weight of each parameter; and the selecting M images from the N images according to the image quality of the N images includes: selecting, according to the scores of the N images, the M images with the highest scores from the N images.
According to a second aspect, a terminal is provided, including: a continuous shooting unit, configured to continuously capture N images; a first determining unit, configured to determine an image feature parameter of a target area of each of the N images captured by the continuous shooting unit, where the target area is an image region of each image that corresponds to a touch focusing area used when the N images are continuously captured, and/or the target area includes a human face region in each image, and the image feature parameter is used to indicate the image quality of the target area; a second determining unit, configured to determine the image quality of each image according to the image feature parameter of the target area determined by the first determining unit; and a selecting unit, configured to select M images from the N images according to the image quality of the N images determined by the second determining unit.
With reference to the second aspect, in an implementation of the second aspect, the terminal has a touch focusing function, the target area is the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured, and the continuous shooting unit is specifically configured to: obtain the touch focusing area selected by a user of the terminal; and continuously capture the N images according to the touch focusing area.
With reference to the second aspect or any of the foregoing implementations of the second aspect, in another implementation of the second aspect, the target area includes the human face region in each image, the image feature parameter of the target area includes a facial parameter of a human face in the target area, where the facial parameter is used to indicate a position and/or an expression of the human face, and the first determining unit is specifically configured to extract the facial parameter from the target area.
With reference to the second aspect or any of the foregoing implementations of the second aspect, in another implementation of the second aspect, the image feature parameter of the target area further includes at least one of the following parameters of the target area: sharpness, brightness, contrast, noise, and saturation, and the first determining unit is specifically configured to obtain the at least one parameter from the target area.
With reference to the second aspect or any of the foregoing implementations of the second aspect, in another implementation of the second aspect, the terminal further includes: a third determining unit, configured to determine an image feature parameter of a background area of each image, where the background area is the region of each image other than the target area, and the image feature parameter of the background area is used to indicate the image quality of the background area; and the second determining unit is specifically configured to determine the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area.
With reference to the second aspect or any of the foregoing implementations of the second aspect, in another implementation of the second aspect, the image feature parameter of the background area includes the sharpness of the background area and/or an intruding-object indication parameter, where the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
With reference to the second aspect or any of the foregoing implementations of the second aspect, in another implementation of the second aspect, the facial parameter includes an eye-blink degree index and a smile degree index of the human face.
With reference to the second aspect or any of the foregoing implementations of the second aspect, in another implementation of the second aspect, the second determining unit is specifically configured to: determine a score of each parameter according to the value of each parameter in the image feature parameters and a preset scoring rule; and determine a score of each image according to the score of each parameter and a preset weight of each parameter; and the selecting unit is specifically configured to select, according to the scores of the N images determined by the second determining unit, the M images with the highest scores from the N images.
According to a third aspect, a terminal is provided, including: a camera, configured to continuously capture N images; and a processor, configured to: determine an image feature parameter of a target area of each of the N images captured by the camera, where the target area is an image region of each image that corresponds to a touch focusing area used when the N images are continuously captured, and/or the target area includes a human face region in each image, and the image feature parameter is used to indicate the image quality of the target area; determine the image quality of each image according to the image feature parameter of the target area; and select M images from the N images according to the image quality of the N images.
With reference to the third aspect, in an implementation of the third aspect, the terminal has a touch focusing function, the target area is the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured, and the camera is specifically configured to: obtain the touch focusing area selected by a user of the terminal; and continuously capture the N images according to the touch focusing area.
With reference to the third aspect or any of the foregoing implementations of the third aspect, in another implementation of the third aspect, the target area includes the human face region in each image, the image feature parameter of the target area includes a facial parameter of a human face in the target area, where the facial parameter is used to indicate a position and/or an expression of the human face, and the processor is specifically configured to extract the facial parameter from the target area.
With reference to the third aspect or any of the foregoing implementations of the third aspect, in another implementation of the third aspect, the facial parameter includes an eye-blink degree index and a smile degree index of the human face.
With reference to the third aspect or any of the foregoing implementations of the third aspect, in another implementation of the third aspect, the image feature parameter of the target area further includes at least one of the following parameters of the target area: sharpness, brightness, contrast, noise, and saturation, and the processor is specifically configured to obtain the at least one parameter from the target area.
With reference to the third aspect or any of the foregoing implementations of the third aspect, in another implementation of the third aspect, the processor is further configured to determine an image feature parameter of a background area of each image, where the background area is the region of each image other than the target area, and the image feature parameter of the background area is used to indicate the image quality of the background area; and the processor is specifically configured to determine the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area.
With reference to the third aspect or any of the foregoing implementations of the third aspect, in another implementation of the third aspect, the image feature parameter of the background area includes the sharpness of the background area and/or an intruding-object indication parameter, where the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
With reference to the third aspect or any of the foregoing implementations of the third aspect, in another implementation of the third aspect, the processor is specifically configured to: determine a score of each parameter according to the value of each parameter in the image feature parameters and a preset scoring rule; determine a score of each image according to the score of each parameter and a preset weight of each parameter; and select, according to the scores of the N images, the M images with the highest scores from the N images.
In the embodiments of the present invention, the human face region or the touch focusing area is determined as the target area of the continuously captured images, the quality of the continuously captured images is determined according to the image feature parameters of the target area, and images are then selected from the multiple continuously captured images according to their quality, so that the selected images have relatively good image quality.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly describes the accompanying drawings required for the embodiments of the present invention. Apparently, the accompanying drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic flowchart of a method for selecting an image from continuously captured images according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a human face detection method according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of an eye-blink/smile degree detection method according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of an intruding-object judgment method according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method for selecting an image from continuously captured images according to an embodiment of the present invention.
Fig. 6 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Fig. 7 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Description of embodiments
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a method for selecting an image from continuously captured images according to an embodiment of the present invention. The method of Fig. 1 may be performed by a terminal, for example, a smartphone or a camera.
110. Continuously capture N images.
120. Determine an image feature parameter of a target area of each of the N images, where the target area is an image region of each image that corresponds to a touch focusing area used when the N images are continuously captured, and/or the target area includes a human face region in each image, and the image feature parameter is used to indicate the image quality of the target area.
It should be noted that, when the target area is the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured, the foregoing method may be performed by a terminal having a touch focusing function, and the terminal uses the touch focusing function when continuously capturing the N images. The touch focusing function may specifically mean that, during shooting, the user of the terminal touches a position or region on the touchscreen with a finger or another object (for example, a stylus) to select the focus of the shot.
It should be noted that the target area may be the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured. For example, during continuous shooting the user may select the focus of the images by touch focusing on the touchscreen, and the image region of each picture whose position is the same as, or corresponds to, the touch focusing area on the touchscreen may be the target area. Certainly, when shooting with the touch focusing function, the user may shoot scenery or a person. When scenery is shot, the target area may include the scenery region of the image that the user cares about; when a person is shot, the target area may include the human face region of the image, or a combination of a background region and the human face region.
Optionally, when the terminal does not use or does not have the touch focusing function, the target area may still include the human face region of each image, and the target area may be determined by means of human face detection.
The human face region may refer to the region of the image where a human face is located. The specific number of human face regions is related to the scene being shot, and may be one region or multiple regions. The human face regions may be detected by the terminal in each of the N images after the N images are continuously captured.
The touch focusing area may refer to the region of the image that corresponds to the position tapped by the user's finger on the touchscreen. The corresponding region of the image may be a scenery region or a human face region. A small rectangular region of the image centered on the touch point may be set as the touch focusing area.
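As an illustration of how the touch focusing area can be mapped to such a rectangle, the following sketch clips a fixed-size rectangle centered on the touch point to the image bounds; the rectangle size and the coordinate convention are assumptions chosen for illustration, not values specified by this embodiment.

```python
def target_rect_from_touch(touch_x, touch_y, img_w, img_h, half_size=50):
    """Return (x0, y0, x1, y1) of a small rectangle centered on the touch
    point, clipped to the image bounds. half_size is an assumed default."""
    x0 = max(0, touch_x - half_size)
    y0 = max(0, touch_y - half_size)
    x1 = min(img_w, touch_x + half_size)
    y1 = min(img_h, touch_y + half_size)
    return x0, y0, x1, y1
```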
It should be noted that, when an image of a scene that includes a person is shot, the target area may be determined as follows: when the N images are continuously captured, a finger or a stylus taps the position on the touchscreen corresponding to the scene, the N images are then captured, and human face detection is performed afterwards to detect the region of the human face in each image. In this case, the target area includes both the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured and the human face region in each image.
130. Determine the image quality of each image according to the image feature parameter of the target area.
It should be noted that the embodiment of the present invention does not specifically limit the type of the image feature parameter; the image feature parameter is related to the type of the target area. For example, when the target area is a touch focusing area, the image feature parameter may include one or more of the sharpness, brightness, contrast, noise, and saturation of the target area. For another example, when the target area is a human face region, the image feature parameter may include not only one or more of sharpness, brightness, contrast, noise, and saturation, but also a facial parameter of the human face, for example an eye-blink degree index and a smile degree index, and may further include a parameter indicating whether the face is frontal or in profile.
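For concreteness, the image feature parameters described above can be gathered in a simple record; one possible layout is sketched below, with field names chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetAreaFeatures:
    # Generic quality parameters of the target area
    sharpness: float = 0.0
    brightness: float = 0.0
    contrast: float = 0.0
    noise: float = 0.0          # e.g. derived from the PSNR
    saturation: float = 0.0
    # Facial parameters, present only when the target area is a face region
    blink_degree: Optional[float] = None   # eye-blink degree index in [0, 1]
    smile_degree: Optional[float] = None   # smile degree index in [0, 1]
    is_frontal: Optional[bool] = None      # frontal face vs. profile
```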
140. Select M images from the N images according to the image quality of the N images.
Specifically, the N images may be sorted by quality, and the top M images are then selected. The "selecting" in step 140 may mean, for example, that the thumbnails of all N images are displayed on the screen of the terminal as a recommendation to the user, and the M images are marked for the user to choose from; alternatively, it may mean that the terminal itself selects the M images from the N images according to the image quality of the N images. After the M images are selected from the N images, the remaining images may be deleted by the user or by the terminal.
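A minimal sketch of this top-M selection, assuming each image already carries a numeric quality score:

```python
def select_top_m(images_with_scores, m):
    """images_with_scores: list of (image, score) pairs. Returns the m images
    with the highest scores, best first."""
    ranked = sorted(images_with_scores, key=lambda pair: pair[1], reverse=True)
    return [img for img, _ in ranked[:m]]
```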
In the embodiment of the present invention, the human face region or the touch focusing area is determined as the target area of the continuously captured images, the quality of the continuously captured images is determined according to the image feature parameters of the target area, and images are then selected from the multiple continuously captured images according to their quality, so that the selected images have relatively good image quality. In addition, selecting images from the multiple continuously captured images based on their quality can improve the user experience.
It should be understood that the target area may also be the central region of an image. For example, when the user shoots a scene that does not contain a person, after the N images are continuously captured, the target area whose image feature parameter is determined in each of the N images may be the central region of the image, and the image feature parameter may indicate the image quality of the central region. When the user shoots a scene that contains a person, or uses touch focusing to shoot a moving scene, the human face region or the touch focusing region reflects the region the user intends to capture, so the quality of that part of the image is what the user cares about most. Images selected according to the image quality of the human face region or the touch focusing area therefore better match the user's shooting intention, and the selection is more effective.
Alternatively, as an embodiment, the method for Fig. 1 can be carried out by terminal, and terminal can be for having the terminal of shoot function, also can be for having shoot function and having the terminal of focus function.
Be understandable that, target area is every image, image-region corresponding to touch focusing area while opening image with continuous shooting N selected the method for image from continuous shooting image, comprising:
110 can comprise: the touch focusing area that obtains user's selection of above-mentioned terminal; According to touching focusing area continuous shooting N, open image.
120, determine that N opens the image features of the target area of every image in image, image features is used to indicate the picture quality of target area;
130,, according to the image features of target area, determine the picture quality of every image;
140, according to N, open the picture quality of image, from N, open and image, select M and open image.
The in the situation that of above-mentioned target area behaviour face image-region, the embodiment of the present invention is not done concrete restriction to definite mode in facial image region in step 120, alternatively, adopts the mode of Fig. 2 to carry out the detection of people's face.
210. Train a classifier offline.
There are many methods for training a classifier offline, such as the AdaBoost method based on Haar features and the SVM (Support Vector Machine) method. Specifically, first, a training data set and a test data set are built, both of which include positive samples (for example, face samples) and negative samples (for example, non-face samples); then, the training set is traversed and optimized to find the classifier, or combination of classifiers, with the best classification effect; finally, the accuracy of the classifier found is verified with the test data set, and the classifier can be used after it meets the requirement. If the requirement is not met, the samples and parameters of the first two steps are adjusted, and the whole process is repeated until the requirement is finally met.
The function of the classifier is to compute the feature value of a subwindow of the image; when the feature value of the subwindow meets a threshold condition (the threshold condition can be obtained through offline training), it can be judged that the subwindow contains a human face, and otherwise that it does not. A subwindow of the image refers to a part of the image region, and the size of the subwindow may be preset, for example, to 20 x 20 (pixels).
220. During detection, shrink the image by a certain ratio to obtain scaled images at different scales.
Because a human face in the image may be of any size, scaled images at different scales can be used to detect faces of different sizes.
230. At each scale of the image, use the classifier to detect whether a human face is contained.
For example, starting from the upper-left corner of the image, the classifier is used to detect whether the 20 x 20 (pixel) subwindow in the upper-left corner contains a human face; then the remaining subwindows of the image are detected one by one in row (column) priority order. It should be noted that step 230 may need to be performed on the image at every scale, so as to avoid missing a human face in the image.
The specific detection manner is to use the classifier to compute the feature value of each subwindow; when the feature value meets the threshold condition, it is detected that the image in the subwindow contains a human face.
240. Collect statistics on the detection results of step 230.
Because each image may contain multiple human faces, and the faces may differ in size, step 240 may collect statistics on all the detected human face regions.
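For illustration only, the multi-scale subwindow scan of Fig. 2 is what an off-the-shelf cascade detector performs internally; the following OpenCV sketch (not part of the embodiment itself) shows one way to obtain the face regions of an image, assuming the pretrained frontal-face Haar cascade shipped with OpenCV is used in place of the offline-trained classifier described above.

```python
import cv2

def detect_face_regions(image_bgr):
    """Return a list of (x, y, w, h) face rectangles detected in the image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # scaleFactor builds the image pyramid (step 220); minNeighbors merges
    # overlapping detections when the results are collected (step 240).
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(20, 20))
    return [tuple(face) for face in faces]
```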
Optionally, as an embodiment, the target area includes a human face region, and the image feature parameter of the target area includes a facial parameter of the human face in the target area, where the facial parameter is used to indicate a position and/or an expression of the human face; step 120 may include: extracting the facial parameter from the target area.
Specifically, the facial parameter may include information indicating the facial expression, such as the eye-blink degree index and the smile degree index of the human face, and may also include the position of the human face in the image and information indicating whether the face is frontal or in profile.
It should be noted that the embodiment of the present invention does not limit the specific manner of extracting the facial parameter from the target area; different facial parameters may be extracted in different ways.
For example, the eye-blink degree index and the smile degree index of the human face may be determined in the manner shown in Fig. 3.
310. Pre-process the target area.
For example, perform illumination correction, scaling, and so on.
320. Locate the feature points of the human face in the target area.
The feature points of the human face may include the eye positions, such as the positions of the eye corners and the eye centers; the eyebrow positions, such as the positions of the eyebrow centers and the two ends of the eyebrows; the nose position; and the mouth positions, such as the positions of the mouth corners and the mouth center.
330. Obtain the image regions around the feature points, extract features from these regions, and compose a feature vector.
340. Classify and regress the obtained feature vector to calculate a numeric eye-blink or smile degree.
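The embodiment leaves the classification/regression model open; as a minimal geometric stand-in (an assumption, not the method of the embodiment), an eye-blink degree can be computed directly from the eye feature points located in step 320, for example from the ratio of eye height to eye width:

```python
def blink_degree(eye_top, eye_bottom, left_corner, right_corner):
    """Rough eye-blink degree in [0, 1]: 0 means wide open, 1 means closed.
    All arguments are (x, y) landmark coordinates from step 320; the 0.3
    normalization constant is an assumed typical open-eye ratio."""
    eye_height = abs(eye_bottom[1] - eye_top[1])
    eye_width = abs(right_corner[0] - left_corner[0]) or 1
    openness = eye_height / eye_width          # larger when the eye is open
    return max(0.0, min(1.0, 1.0 - openness / 0.3))
```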
Whether a human face is frontal or in profile may be determined as follows: first, two detectors are trained offline, a frontal-face detector and a profile detector; in the face detection process, each subwindow is detected with both the frontal-face detector and the profile detector; when a face is detected by the frontal-face detector, the face is frontal, and otherwise it is in profile.
Optionally, as another embodiment, the image feature parameter of the target area may further include at least one of the following parameters of the target area: sharpness, brightness, contrast, noise, and saturation; step 120 may include: obtaining the at least one parameter from the target area.
Specifically, in this embodiment, the quality of each image may be determined not only according to the facial parameter of the human face in the target area, but also according to parameters such as the sharpness, brightness, contrast, noise, and saturation of the image of the target area. Further, when the quality of each image is determined, one or more of the foregoing parameters may be considered, which is not specifically limited in the embodiment of the present invention.
It should be noted that the foregoing parameters may be obtained in many specific ways. For example, the sharpness of the image may be calculated with an edge detection operator (such as the Sobel operator): the edge responses over the target area are summed, and their mean is output as the sharpness value. The edge detection operator is generally a 3 x 3 matrix, one horizontal and one vertical; each matrix is convolved with the image of the target area, and the mean of the summed convolution results is the sharpness value of the image. The Sobel operator matrices may be as follows:
Vertical matrix:
-1  0  +1
-2  0  +2
-1  0  +1
Horizontal matrix:
+1  +2  +1
 0   0   0
-1  -2  -1
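A possible implementation of this Sobel-based sharpness measure is sketched below with OpenCV/NumPy (the library choice is an assumption):

```python
import cv2
import numpy as np

def sharpness(region_bgr):
    """Mean Sobel edge response of the region, used as its sharpness value.
    Convolves with the horizontal and vertical Sobel matrices, sums the
    absolute responses, and averages them over the region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    return float(np.mean(np.abs(gx) + np.abs(gy)))
```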
The brightness of the image can be obtained by computing the mean of the gray values of the image pixels, or the mean over image blocks.
The contrast of the image can be obtained from statistics of the image histogram.
The noise of the image can be obtained by computing the PSNR (Peak Signal to Noise Ratio) of the image, where the MSE is the mean squared error of the gray values of the image pixels, that is, the mean of the squared differences between each pixel gray value and the mean gray value.
The saturation of the image can be calculated by the following method:
Taking an RGB image as an example, the maximum value max and the minimum value min of R, G, and B are computed, and the saturation of the image is S = (max - min) / max.
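A compact sketch of these four measures using NumPy follows; the exact statistics (the gray-value standard deviation as the histogram-based contrast measure, and the mean gray value as the PSNR reference) are illustrative assumptions rather than choices fixed by the embodiment.

```python
import numpy as np

def brightness(gray):
    return float(np.mean(gray))                      # mean gray value

def contrast(gray):
    return float(np.std(gray))                       # spread of the gray values

def noise_psnr(gray):
    # MSE of each gray value against the mean gray value, then PSNR in dB.
    mse = float(np.mean((gray.astype(np.float64) - np.mean(gray)) ** 2))
    return 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

def saturation(rgb):
    # S = (max - min) / max per pixel, averaged over the region.
    mx = rgb.max(axis=2).astype(np.float64)
    mn = rgb.min(axis=2).astype(np.float64)
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return float(np.mean(s))
```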
Optionally, as another embodiment, before step 130, the method of Fig. 1 may further include: determining an image feature parameter of the background area of each image, where the background area is the region of each image other than the target area, and the image feature parameter of the background area is used to indicate the image quality of the background area; step 130 may include: determining the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area.
The image feature parameter of the background area may include the sharpness of the background area and/or an intruding-object indication parameter, where the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area. Further, the image feature parameter of the background area may also include parameters such as the sharpness, brightness, contrast, noise, and saturation of the background area image.
Specifically, the sharpness of the background image may be calculated with the foregoing sharpness algorithm. Whether an intruding object exists in the background area may be judged by the process shown in Fig. 4.
410. Input the image data.
420. Calculate the global motion of each image relative to the previous image, and register the two images according to the motion vector.
Usually, during continuous shooting on a mobile phone, the phone may move between the capture of one image and the next because the user holds it unsteadily or shakes it, so the two images need to be registered according to the global motion. When a tripod is used for continuous shooting, this step may be skipped.
It should be noted that no intruding-object judgment is made for the first image; the calculation starts from the second image.
430. Divide the image region into blocks, for example dividing the peripheral region of the image, and calculate the residual and the similarity between corresponding blocks of the two images.
440. Judge whether an intruding object exists in the image.
If the residual between the image blocks is sufficiently small and their similarity is large (thresholds for the residual and the similarity can be preset), the matched blocks of the two images are similar and it is judged that no intruding object exists; otherwise, an intruding object exists.
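A sketch of this block-wise residual/similarity test follows, assuming the two images are already registered (step 420); the block size, the residual measure (mean absolute difference), and the similarity measure (normalized cross-correlation) are assumptions chosen for illustration.

```python
import numpy as np

def has_intruding_object(prev_gray, curr_gray, block=32,
                         max_residual=12.0, min_similarity=0.9):
    """Return True if any block of the registered image pair differs enough
    to suggest that an object has intruded into the background."""
    h, w = curr_gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = prev_gray[y:y + block, x:x + block].astype(np.float64)
            b = curr_gray[y:y + block, x:x + block].astype(np.float64)
            residual = np.mean(np.abs(a - b))
            a0, b0 = a - a.mean(), b - b.mean()
            denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())
            similarity = (a0 * b0).sum() / denom if denom > 0 else 1.0
            if residual > max_residual or similarity < min_similarity:
                return True   # this block changed: likely an intruding object
    return False
```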
Optionally, as another embodiment, step 130 may include: determining a score of each parameter according to the value of each parameter in the image feature parameters and a preset scoring rule; and determining a score of each image according to the score of each parameter and a preset weight of each parameter; step 140 may include: selecting, according to the scores of the N images, the M images with the highest scores from the N images.
The image feature parameters in this embodiment may include the image feature parameters of the target area and may also include the image feature parameters of the background area. The choice of feature parameters may be defined by the system, or the system may provide options from which the user selects the feature parameters that he or she cares about according to the actual situation; the embodiment of the present invention does not specifically limit this.
The scoring rule may be predefined. For example, the full score for the eye-blink degree of a human face is 10 points and the eye-blink degree index takes values in [0, 1]; when the calculated eye-blink degree index is 0.5, the eye-blink degree score is 5 points. The scores of the remaining parameters can be determined in the same way.
The weight of each parameter may be predefined, or may be selected by the user according to his or her own preference. For example, if the user mainly cares about whether the photographed face is smiling, the weight of the smile parameter may be set larger than the weights of the other parameters.
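Putting the scoring rule and the weights together, a minimal sketch of the weighted scoring might look as follows; the parameter names, rules, and weights are illustrative assumptions. The select_top_m helper sketched earlier could then rank the N images by this score.

```python
# Assumed scoring rules: each maps a parameter value to a score out of 10
# (for example, an eye-blink degree index of 0.5 scores 5 points, as above).
SCORING_RULES = {
    "blink_degree": lambda v: 10.0 * v,
    "smile_degree": lambda v: 10.0 * v,
    "sharpness":    lambda v: min(10.0, v / 10.0),   # assumed normalization
}

# Assumed per-parameter weights (system defaults or user preference).
WEIGHTS = {"blink_degree": 1.0, "smile_degree": 2.0, "sharpness": 1.0}

def image_score(params):
    """params: dict mapping parameter name -> measured value for one image."""
    return sum(WEIGHTS[name] * SCORING_RULES[name](value)
               for name, value in params.items() if name in SCORING_RULES)
```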
It should be noted that, when the target area is a touch focusing area, the image feature parameter of the target area may also include parameters such as the sharpness, brightness, contrast, noise, and saturation of the target area.
The embodiment of the present invention is described in further detail below with reference to a specific example. It should be noted that the example of Fig. 5 is only intended to help a person skilled in the art understand the embodiment of the present invention, and is not intended to limit the embodiment of the present invention to the specific values or the specific scenario illustrated. A person skilled in the art can obviously make various equivalent modifications or variations based on the example of Fig. 5, and such modifications or variations also fall within the scope of the embodiments of the present invention.
Fig. 5 is a flowchart of a method for selecting an image from continuously captured images according to an embodiment of the present invention. The method of Fig. 5 may be performed by a smartphone with a continuous shooting function, and the phone has a touch focusing function. The phone continuously captures N images; the following steps describe the processing flow for one image, and each of the N images may be processed according to this flow. The embodiment of the present invention does not specifically limit the order in which the N images go through the flow; they may be processed in parallel or sequentially.
510. Judge whether the phone user has used the touch focusing function.
520. Human face detection.
When the phone user has not used the touch focusing function in step 510, this step is performed to detect whether the current image contains a human face region.
530. Judge whether a human face is detected.
540. Human face scoring.
When a human face is detected in step 530, step 540 is performed: the human face region is used as the target area and is scored, for example, according to the smile degree of the face, the eye-blink degree, whether the face is frontal or in profile, and so on.
550. Set the target area.
When the phone user has used the touch focusing function in step 510, setting the target area in step 550 may specifically mean setting the touch focusing area as the target area.
When no human face is detected in step 530, setting the target area in step 550 may mean setting the central region of the current image as the target area, and the size of the central region may be preset.
560. Score the target area.
The target area may be scored according to its sharpness, brightness, contrast, noise, saturation, and so on. It should be noted that the scoring of these parameters may be performed in parallel or sequentially.
570. Score the background area.
The sharpness and other parameters of the background area are scored; it may also be judged whether an intruding object exists in the background area, and the result of that judgment is also part of the background area score. If the target area scores differ little between images, the image of better quality can be selected according to the background area score. It should be noted that the scoring of the background area parameters may be performed in parallel or sequentially. Further, the scoring of the target area and of the background area may be performed in parallel or sequentially.
580. Other scoring.
Other scores may also be given to the current image, for example for the aesthetics of the image as a whole; if the other scores differ little between images, the image of better quality can be selected according to the aesthetics of the image.
590. Weighted summation of the scores.
Each score is weighted and summed according to weights set in advance, to obtain the total score of the image.
595. Output the result.
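As a sketch only, the branching of Fig. 5 (touch focusing area, detected face region, or central region as the target area) might be organized as follows, reusing the hypothetical helpers sketched earlier (target_rect_from_touch, detect_face_regions, sharpness, image_score); the central-region fraction is an assumed default, and only a subset of the scores is shown.

```python
def score_burst_image(image, touch_point=None):
    """Return the total score of one image of the burst, following Fig. 5."""
    h, w = image.shape[:2]
    if touch_point is not None:                       # step 510: touch focus used
        target = target_rect_from_touch(touch_point[0], touch_point[1], w, h)
    else:
        faces = detect_face_regions(image)            # steps 520/530
        if faces:
            x, y, fw, fh = faces[0]                   # step 540: face as target
            target = (x, y, x + fw, y + fh)
        else:                                         # step 550: central region
            target = (w // 4, h // 4, 3 * w // 4, 3 * h // 4)
    x0, y0, x1, y1 = target
    region = image[y0:y1, x0:x1]
    params = {"sharpness": sharpness(region)}         # step 560 (subset only)
    return image_score(params)                        # steps 590/595
```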
In the embodiment of the present invention, the human face region or the touch focusing area is determined as the target area of the continuously captured images, the quality of the continuously captured images is determined according to the image feature parameters of the target area, and images are then selected from the multiple continuously captured images according to their quality, so that the selected images have relatively good image quality; in addition, selecting images from the multiple continuously captured images based on their quality can improve the user experience. Further, according to the user's shooting intention, the embodiment of the present invention sets the touch focusing area or the human face region as the target area, which improves the effectiveness of the image selection and therefore of the selected images.
The method for selecting an image from continuously captured images according to the embodiments of the present invention has been described in detail above with reference to Fig. 1 to Fig. 5. A terminal according to the embodiments of the present invention is described in detail below with reference to Fig. 6 and Fig. 7.
Fig. 6 is a schematic block diagram of a terminal according to an embodiment of the present invention. The terminal 600 of Fig. 6 includes: a continuous shooting unit 610, a first determining unit 620, a second determining unit 630, and a selecting unit 640.
It should be understood that the terminal 600 can perform the steps performed by the terminal in Fig. 1 to Fig. 5; to avoid repetition, details are not described again.
The continuous shooting unit 610 is configured to continuously capture N images.
The first determining unit 620 is configured to determine an image feature parameter of a target area of each of the N images captured by the continuous shooting unit 610, where the target area is an image region of each image that corresponds to a touch focusing area used when the N images are continuously captured, and/or the target area includes a human face region in each image, and the image feature parameter is used to indicate the image quality of the target area.
The second determining unit 630 is configured to determine the image quality of each image according to the image feature parameter of the target area determined by the first determining unit 620.
The selecting unit 640 is configured to select M images from the N images according to the image quality of the N images determined by the second determining unit 630.
In the embodiment of the present invention, the human face region or the touch focusing area is determined as the target area of the continuously captured images, the quality of the continuously captured images is determined according to the image feature parameters of the target area, and images are then selected from the multiple continuously captured images according to their quality, so that the selected images have relatively good image quality; in addition, selecting images from the multiple continuously captured images based on their quality can improve the user experience. Further, according to the user's shooting intention, the embodiment of the present invention sets the touch focusing area or the human face region as the target area, which improves the effectiveness of the image selection and therefore of the selected images.
Optionally, as an embodiment, the terminal 600 may be a terminal having a shooting function, or a terminal having both a shooting function and a touch focusing function. When the target area is the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured, the continuous shooting unit 610 is specifically configured to obtain the touch focusing area selected by the user of the terminal 600, and continuously capture the N images according to the touch focusing area.
Optionally, as another embodiment, the target area includes the human face region in each image, the image feature parameter of the target area includes a facial parameter of the human face in the target area, where the facial parameter is used to indicate a position and/or an expression of the human face, and the first determining unit 620 is specifically configured to extract the facial parameter from the target area.
Optionally, as another embodiment, the facial parameter includes an eye-blink degree index and a smile degree index of the human face.
Optionally, as another embodiment, the image feature parameter of the target area further includes at least one of the following parameters of the target area: sharpness, brightness, contrast, noise, and saturation, and the first determining unit 620 is specifically configured to obtain the at least one parameter from the target area.
Optionally, as another embodiment, the terminal 600 further includes: a third determining unit, configured to determine an image feature parameter of the background area of each image, where the background area is the region of each image other than the target area, and the image feature parameter of the background area is used to indicate the image quality of the background area; and the second determining unit 630 is specifically configured to determine the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area.
Optionally, as another embodiment, the image feature parameter of the background area includes the sharpness of the background area and/or an intruding-object indication parameter, where the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
Optionally, as another embodiment, the second determining unit 630 is specifically configured to determine a score of each parameter according to the value of each parameter in the image feature parameters and a preset scoring rule, and determine a score of each image according to the score of each parameter and a preset weight of each parameter; and the selecting unit 640 is specifically configured to select, according to the scores of the N images determined by the second determining unit 630, the M images with the highest scores from the N images.
Fig. 7 is a schematic block diagram of a terminal according to an embodiment of the present invention. The terminal 700 of Fig. 7 may include: a camera 750 and a processor 760.
It should be understood that the terminal 700 can perform the steps performed by the terminal in Fig. 1 to Fig. 5; to avoid repetition, details are not described again.
The camera 750 is configured to continuously capture N images.
The processor 760 is configured to: determine an image feature parameter of a target area of each of the N images captured by the camera 750, where the target area is an image region of each image that corresponds to a touch focusing area used when the N images are continuously captured, and/or the target area includes a human face region in each image, and the image feature parameter is used to indicate the image quality of the target area; determine the image quality of each image according to the image feature parameter of the target area; and select M images from the N images according to the image quality of the N images.
The camera 750 may continuously capture the N images under the control of a processor; the processor may be the processor 760 or another processor.
In the embodiment of the present invention, the human face region or the touch focusing area is determined as the target area of the continuously captured images, the quality of the continuously captured images is determined according to the image feature parameters of the target area, and images are then selected from the multiple continuously captured images according to their quality, so that the selected images have relatively good image quality; in addition, selecting images from the multiple continuously captured images based on their quality can improve the user experience. Further, according to the user's shooting intention, the embodiment of the present invention sets the touch focusing area or the human face region as the target area, which improves the effectiveness of the image selection and therefore of the selected images.
It should be noted that the terminal may further include an RF (Radio Frequency) circuit 710, a memory 720, an input unit 730, a display unit 740, an audio circuit 770, a WiFi module 780, and a power supply 790. Further, the input unit 730 may include a touch panel 731 (which may also be called a touchscreen) and other input devices 732, and the display unit 740 may include a display panel 741. The touch panel 731 may cover the display panel 741 to form a touch display screen. After the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 760 to determine the type of the touch event, and the processor 760 then provides a corresponding visual output on the touch display screen according to the type of the touch event. The user can perform touch operations on the touch display screen. Specifically, when the N images are continuously captured, the user can touch a position or region on the touchscreen with a finger or another object (for example, a stylus) to select the touch focusing area.
The terminal 700 may be a terminal having a shooting function, a terminal having both a shooting function and a focusing function, or a terminal having a touch-focusing shooting function.
Optionally, as an embodiment, the terminal 700 has a touch-focusing shooting function, the target area is the image region of each image that corresponds to the touch focusing area used when the N images are continuously captured, and the camera 750 may be configured to obtain the touch focusing area selected by the user of the terminal 700 and continuously capture the N images according to the touch focusing area.
Optionally, as another embodiment, the target area includes the human face region in each image, the image feature parameter of the target area includes a facial parameter of the human face in the target area, where the facial parameter is used to indicate a position and/or an expression of the human face, and the processor 760 is specifically configured to extract the facial parameter from the target area.
Optionally, as another embodiment, the facial parameter includes an eye-blink degree index and a smile degree index of the human face.
Optionally, as another embodiment, the image feature parameter of the target area further includes at least one of the following parameters of the target area: sharpness, brightness, contrast, noise, and saturation, and the processor 760 is specifically configured to obtain the at least one parameter from the target area.
Optionally, as another embodiment, the processor 760 is further configured to determine an image feature parameter of the background area of each image, where the background area is the region of each image other than the target area, and the image feature parameter of the background area is used to indicate the image quality of the background area; and the processor 760 is specifically configured to determine the image quality of each image according to the image feature parameter of the target area and the image feature parameter of the background area.
Optionally, as another embodiment, the image feature parameter of the background area includes the sharpness of the background area and/or an intruding-object indication parameter, where the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
Optionally, as another embodiment, the processor 760 is specifically configured to: determine a score of each parameter according to the value of each parameter in the image feature parameters and a preset scoring rule; determine a score of each image according to the score of each parameter and a preset weight of each parameter; and select, according to the scores of the N images, the M images with the highest scores from the N images.
It should be noted that, in the foregoing embodiments, N, the number of continuously captured images, may be set by the user as needed, for example to 2, 5, or 10. M, the number of images selected from the N images, may also be set as needed by the user but cannot exceed the number N of continuously captured images; for example, when N is 2, M may be set to 1; when N is 3, M may be set to 3; when N is 5, M may be set to 3; or when N is 10, M may be set to 5.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein may be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation shall not be considered to go beyond the scope of the present invention.
A person skilled in the art may clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the systems, apparatuses, and units described above; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. The division of the units is merely a division of logical functions, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed on multiple network elements. Some or all of the units may be selected as required to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solutions of the present invention that essentially contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (24)

1. A method for selecting images from continuously shot images, comprising:
continuously shooting N images;
determining image feature parameters of a target area of each of the N images, wherein the target area is an image area, in each image, that corresponds to a touch focus area used when the N images are shot continuously, and/or the target area comprises a facial image area in each image, and the image feature parameters are used to indicate image quality of the target area;
determining the image quality of each image according to the image feature parameters of the target area; and
selecting M images from the N images according to the image quality of the N images.
2. The method according to claim 1, wherein the method is performed by a terminal having a touch focus function, and the target area is the image area, in each image, that corresponds to the touch focus area used when the N images are shot continuously; and
the continuously shooting N images comprises:
obtaining the touch focus area selected by a user of the terminal; and
continuously shooting the N images according to the touch focus area.
3. The method according to claim 1 or 2, wherein the target area comprises the facial image area in each image, the image feature parameters of the target area comprise facial parameters of a human face in the target area, and the facial parameters are used to indicate a position and/or an expression of the human face; and
the determining image feature parameters of a target area of each of the N images comprises:
extracting the facial parameters from the target area.
4. The method according to claim 3, wherein the facial parameters comprise a blink degree index and a smile degree index of the human face.
5. The method according to claim 3 or 4, wherein the image feature parameters of the target area further comprise at least one of the following parameters of the target area: definition, brightness, contrast, noise, and saturation; and
the determining image feature parameters of a target area of each of the N images comprises:
obtaining the at least one parameter from the target area.
6. The method according to any one of claims 1 to 5, wherein
before the determining the image quality of each image according to the image feature parameters of the target area, the method further comprises:
determining image feature parameters of a background area of each image, wherein the background area is the area of each image other than the target area, and the image feature parameters of the background area are used to indicate image quality of the background area; and
the determining the image quality of each image according to the image feature parameters of the target area comprises:
determining the image quality of each image according to the image feature parameters of the target area and the image feature parameters of the background area.
7. The method according to claim 6, wherein the image feature parameters of the background area comprise a definition of the background area and/or an intruding-object indication parameter, and the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
8. The method according to claim 6 or 7, wherein
the determining the image quality of each image according to the image feature parameters of the target area and the image feature parameters of the background area comprises:
determining a score of each parameter according to a value of each parameter in the image feature parameters and a preset scoring rule; and
determining a score of each image according to the score of each parameter and a preset weight of each parameter; and
the selecting M images from the N images according to the image quality of the N images comprises:
selecting, from the N images according to the scores of the N images, the images whose scores rank in the top M.
9. A terminal, comprising:
a continuous shooting unit, configured to continuously shoot N images;
a first determining unit, configured to determine image feature parameters of a target area of each of the N images shot by the continuous shooting unit, wherein the target area is an image area, in each image, that corresponds to a touch focus area used when the N images are shot continuously, and/or the target area comprises a facial image area in each image, and the image feature parameters are used to indicate image quality of the target area;
a second determining unit, configured to determine the image quality of each image according to the image feature parameters of the target area determined by the first determining unit; and
a selecting unit, configured to select M images from the N images according to the image quality of the N images determined by the second determining unit.
10. The terminal according to claim 9, wherein the terminal has a touch focus function, and the target area is the image area, in each image, that corresponds to the touch focus area used when the N images are shot continuously; and
the continuous shooting unit is specifically configured to obtain the touch focus area selected by a user of the terminal, and to continuously shoot the N images according to the touch focus area.
11. The terminal according to claim 9 or 10, wherein the target area comprises the facial image area in each image, the image feature parameters of the target area comprise facial parameters of a human face in the target area, and the facial parameters are used to indicate a position and/or an expression of the human face; and
the first determining unit is specifically configured to extract the facial parameters from the target area.
12. The terminal according to claim 11, wherein the facial parameters comprise a blink degree index and a smile degree index of the human face.
13. The terminal according to claim 11 or 12, wherein the image feature parameters of the target area further comprise at least one of the following parameters of the target area: definition, brightness, contrast, noise, and saturation; and the first determining unit is specifically configured to obtain the at least one parameter from the target area.
14. The terminal according to any one of claims 9 to 13, wherein the terminal further comprises:
a third determining unit, configured to determine image feature parameters of a background area of each image, wherein the background area is the area of each image other than the target area, and the image feature parameters of the background area are used to indicate image quality of the background area; and
the second determining unit is specifically configured to determine the image quality of each image according to the image feature parameters of the target area and the image feature parameters of the background area.
15. The terminal according to claim 14, wherein the image feature parameters of the background area comprise a definition of the background area and/or an intruding-object indication parameter, and the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
16. The terminal according to claim 14 or 15, wherein
the second determining unit is specifically configured to determine a score of each parameter according to a value of each parameter in the image feature parameters and a preset scoring rule, and to determine a score of each image according to the score of each parameter and a preset weight of each parameter; and
the selecting unit is specifically configured to select, from the N images according to the scores of the N images determined by the second determining unit, the images whose scores rank in the top M.
17. A terminal, comprising:
a camera, configured to continuously shoot N images; and
a processor, configured to: determine image feature parameters of a target area of each of the N images shot by the camera, wherein the target area is an image area, in each image, that corresponds to a touch focus area used when the N images are shot continuously, and/or the target area comprises a facial image area in each image, and the image feature parameters are used to indicate image quality of the target area; determine the image quality of each image according to the image feature parameters of the target area; and select M images from the N images according to the image quality of the N images.
18. The terminal according to claim 17, wherein the terminal has a touch focus function, and the target area is the image area, in each image, that corresponds to the touch focus area used when the N images are shot continuously; and
the camera is specifically configured to obtain the touch focus area selected by a user of the terminal, and to continuously shoot the N images according to the touch focus area.
19. The terminal according to claim 17 or 18, wherein the target area comprises the facial image area in each image, the image feature parameters of the target area comprise facial parameters of a human face in the target area, and the facial parameters are used to indicate a position and/or an expression of the human face; and
the processor is specifically configured to extract the facial parameters from the target area.
20. The terminal according to claim 19, wherein the facial parameters comprise a blink degree index and a smile degree index of the human face.
21. The terminal according to claim 19 or 20, wherein the image feature parameters of the target area further comprise at least one of the following parameters of the target area: definition, brightness, contrast, noise, and saturation; and the processor is specifically configured to obtain the at least one parameter from the target area.
22. The terminal according to any one of claims 17 to 21, wherein the processor is further configured to determine image feature parameters of a background area of each image, wherein the background area is the area of each image other than the target area, and the image feature parameters of the background area are used to indicate image quality of the background area; and the processor is specifically configured to determine the image quality of each image according to the image feature parameters of the target area and the image feature parameters of the background area.
23. The terminal according to claim 22, wherein the image feature parameters of the background area comprise a definition of the background area and/or an intruding-object indication parameter, and the intruding-object indication parameter is used to indicate whether an intruding object exists in the background area.
24. The terminal according to claim 22 or 23, wherein
the processor is specifically configured to: determine a score of each parameter according to a value of each parameter in the image feature parameters and a preset scoring rule; determine a score of each image according to the score of each parameter and a preset weight of each parameter; and select, from the N images according to the scores of the N images, the images whose scores rank in the top M.
CN201380003176.6A 2013-10-23 2013-10-23 Method and terminal selecting image from continuous captured image Pending CN104185981A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/085824 WO2015058381A1 (en) 2013-10-23 2013-10-23 Method and terminal for selecting image from continuous images

Publications (1)

Publication Number Publication Date
CN104185981A true CN104185981A (en) 2014-12-03

Family

ID=51966048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380003176.6A Pending CN104185981A (en) 2013-10-23 2013-10-23 Method and terminal selecting image from continuous captured image

Country Status (2)

Country Link
CN (1) CN104185981A (en)
WO (1) WO2015058381A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107734253B (en) * 2017-10-13 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer-readable storage medium
CN109151338B (en) * 2018-07-10 2021-06-25 Oppo广东移动通信有限公司 Image processing method and related product
CN109448069B (en) * 2018-10-30 2023-07-18 维沃移动通信有限公司 Template generation method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622740B (en) * 2011-01-28 2016-07-20 鸿富锦精密工业(深圳)有限公司 Anti-eye closing portrait system and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004135029A (en) * 2002-10-10 2004-04-30 Fuji Photo Film Co Ltd Digital camera
CN102263896A (en) * 2010-05-31 2011-11-30 索尼公司 image processing unit, image processing method and program
CN102377905A (en) * 2010-08-18 2012-03-14 佳能株式会社 Image pickup apparatus and control method therefor
CN102663745A (en) * 2012-03-23 2012-09-12 北京理工大学 Color fusion image quality evaluation method based on vision task.
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images

Cited By (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463895A (en) * 2014-12-26 2015-03-25 韩哲 Earth surface monitoring image processing method based on SAR
CN104463895B (en) * 2014-12-26 2017-10-24 青岛博恒康信息技术有限公司 A kind of earth's surface monitoring image processing method based on SAR
CN105827928A (en) * 2015-01-05 2016-08-03 中兴通讯股份有限公司 Focusing area selection method and focusing area selection device
CN107483834A (en) * 2015-02-04 2017-12-15 广东欧珀移动通信有限公司 A kind of image processing method, continuous shooting method and device and related media production
CN107483834B (en) * 2015-02-04 2020-01-14 Oppo广东移动通信有限公司 Image processing method, continuous shooting method and device and related medium product
CN104680478A (en) * 2015-02-15 2015-06-03 青岛海信移动通信技术股份有限公司 Selection method and device for target image data
CN104680478B (en) * 2015-02-15 2018-08-21 青岛海信移动通信技术股份有限公司 A kind of choosing method and device of destination image data
CN104967786B (en) * 2015-07-10 2019-03-12 广州三星通信技术研究有限公司 Image-selecting method and device
CN104967786A (en) * 2015-07-10 2015-10-07 广州三星通信技术研究有限公司 Image selection method and device
CN106548113A (en) * 2015-09-16 2017-03-29 上海市公安局刑事侦查总队 Image-recognizing method and system
CN106570028A (en) * 2015-10-10 2017-04-19 比亚迪股份有限公司 Mobile terminal, fuzzy image deletion method and fuzzy picture deletion device
CN106570028B (en) * 2015-10-10 2020-12-25 比亚迪股份有限公司 Mobile terminal and method and device for deleting blurred image
WO2017076040A1 (en) * 2015-11-06 2017-05-11 乐视控股(北京)有限公司 Image processing method and device for use during continuous shooting operation
CN105654463A (en) * 2015-11-06 2016-06-08 乐视移动智能信息技术(北京)有限公司 Image processing method applied to continuous shooting process and apparatus thereof
CN105487774B (en) * 2015-11-27 2019-04-19 小米科技有限责任公司 Image group technology and device
CN105487774A (en) * 2015-11-27 2016-04-13 小米科技有限责任公司 Image grouping method and device
CN105467741A (en) * 2015-12-16 2016-04-06 魅族科技(中国)有限公司 Panoramic shooting method and terminal
CN105635567A (en) * 2015-12-24 2016-06-01 小米科技有限责任公司 Shooting method and device
CN105893578B (en) * 2016-03-31 2019-06-18 青岛海信移动通信技术股份有限公司 A kind of method and device of photo selection
CN105894031A (en) * 2016-03-31 2016-08-24 青岛海信移动通信技术股份有限公司 Photo selection method and photo selection device
CN105893578A (en) * 2016-03-31 2016-08-24 青岛海信移动通信技术股份有限公司 Method and device for selecting photos
CN107454305A (en) * 2016-05-31 2017-12-08 宇龙计算机通信科技(深圳)有限公司 A kind of automatic photographing method and electronic equipment
CN105913052A (en) * 2016-06-08 2016-08-31 Tcl集团股份有限公司 Photograph classification management method and system thereof
CN112770011A (en) * 2016-06-17 2021-05-07 微软技术许可有限责任公司 Suggesting image files for deletion based on image file parameters
CN112770011B (en) * 2016-06-17 2023-06-20 微软技术许可有限责任公司 System, method, and computer-readable medium for suggesting image files for deletion
CN106250916B (en) * 2016-07-22 2020-02-21 西安酷派软件科技有限公司 Method and device for screening pictures and terminal equipment
CN106250916A (en) * 2016-07-22 2016-12-21 西安酷派软件科技有限公司 A kind of screen the method for picture, device and terminal unit
CN106303235A (en) * 2016-08-11 2017-01-04 广东小天才科技有限公司 Take pictures processing method and processing device
CN106572303B (en) * 2016-10-17 2020-02-18 努比亚技术有限公司 Picture processing method and terminal
CN106572303A (en) * 2016-10-17 2017-04-19 努比亚技术有限公司 Picture processing method and terminal
CN106570110A (en) * 2016-10-25 2017-04-19 北京小米移动软件有限公司 De-overlapping processing method and apparatus of image
CN108229240A (en) * 2016-12-09 2018-06-29 杭州海康威视数字技术股份有限公司 A kind of method and device of determining picture quality
CN107241504A (en) * 2017-06-08 2017-10-10 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer-readable recording medium
CN107241504B (en) * 2017-06-08 2020-03-27 努比亚技术有限公司 Image processing method, mobile terminal and computer readable storage medium
CN109389019A (en) * 2017-08-14 2019-02-26 杭州海康威视数字技术股份有限公司 Facial image selection method, device and computer equipment
CN109389019B (en) * 2017-08-14 2021-11-05 杭州海康威视数字技术股份有限公司 Face image selection method and device and computer equipment
CN107454267A (en) * 2017-08-31 2017-12-08 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image
CN107659722A (en) * 2017-09-25 2018-02-02 维沃移动通信有限公司 A kind of image-selecting method and mobile terminal
CN107743200A (en) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment taken pictures
CN107832452A (en) * 2017-11-23 2018-03-23 苏州亿科赛卓电子科技有限公司 A kind of photo management method and device
CN108009277A (en) * 2017-12-20 2018-05-08 珠海格力电器股份有限公司 A kind of image-erasing method and terminal device
WO2019119826A1 (en) * 2017-12-21 2019-06-27 格力电器(武汉)有限公司 Image processing method and apparatus
CN110139021A (en) * 2018-02-09 2019-08-16 北京三星通信技术研究有限公司 Auxiliary shooting method and terminal device
CN110139021B (en) * 2018-02-09 2023-01-13 北京三星通信技术研究有限公司 Auxiliary shooting method and terminal equipment
CN108513068A (en) * 2018-03-30 2018-09-07 广东欧珀移动通信有限公司 Choosing method, device, storage medium and the electronic equipment of image
CN108513068B (en) * 2018-03-30 2021-03-02 Oppo广东移动通信有限公司 Image selection method and device, storage medium and electronic equipment
CN108665510B (en) * 2018-05-14 2022-02-08 Oppo广东移动通信有限公司 Rendering method and device of continuous shooting image, storage medium and terminal
CN108665510A (en) * 2018-05-14 2018-10-16 Oppo广东移动通信有限公司 Rendering intent, device, storage medium and the terminal of continuous shooting image
CN108881714A (en) * 2018-05-24 2018-11-23 太仓鸿策创达广告策划有限公司 A kind of image processing system
CN108960097A (en) * 2018-06-22 2018-12-07 维沃移动通信有限公司 A kind of method and device obtaining face depth information
CN108920591A (en) * 2018-06-27 2018-11-30 Oppo广东移动通信有限公司 Recall video creation method and relevant apparatus
CN108876782A (en) * 2018-06-27 2018-11-23 Oppo广东移动通信有限公司 Recall video creation method and relevant apparatus
WO2020038254A1 (en) * 2018-08-23 2020-02-27 杭州海康威视数字技术股份有限公司 Image processing method and apparatus for target recognition
CN110858286B (en) * 2018-08-23 2023-08-11 杭州海康威视数字技术股份有限公司 Image processing method and device for target recognition
CN110895802B (en) * 2018-08-23 2023-09-01 杭州海康威视数字技术股份有限公司 Image processing method and device
CN110895802A (en) * 2018-08-23 2020-03-20 杭州海康威视数字技术股份有限公司 Image processing method and device
US11487966B2 (en) 2018-08-23 2022-11-01 Hangzhou Hikvision Digital Technology Co., Ltd. Image processing method and apparatus for target recognition
CN110858286A (en) * 2018-08-23 2020-03-03 杭州海康威视数字技术股份有限公司 Image processing method and device for target recognition
US11758285B2 (en) 2018-11-30 2023-09-12 Huawei Technologies Co., Ltd. Picture selection method and related device
CN109902189A (en) * 2018-11-30 2019-06-18 华为技术有限公司 A kind of picture selection method and relevant device
WO2020155052A1 (en) * 2019-01-31 2020-08-06 华为技术有限公司 Method for selecting images based on continuous shooting and electronic device
US12003850B2 (en) 2019-01-31 2024-06-04 Huawei Technologies Co., Ltd. Method for selecting image based on burst shooting and electronic device
CN112425156B (en) * 2019-01-31 2022-03-11 华为技术有限公司 Method for selecting images based on continuous shooting and electronic equipment
CN112425156A (en) * 2019-01-31 2021-02-26 华为技术有限公司 Method for selecting images based on continuous shooting and electronic equipment
CN111767757B (en) * 2019-03-29 2023-11-17 杭州海康威视数字技术股份有限公司 Identity information determining method and device
CN111767757A (en) * 2019-03-29 2020-10-13 杭州海康威视数字技术股份有限公司 Identity information determination method and device
CN112036209A (en) * 2019-06-03 2020-12-04 Tcl集团股份有限公司 Portrait photo processing method and terminal
CN112188075A (en) * 2019-07-05 2021-01-05 杭州海康威视数字技术股份有限公司 Snapshot, image processing device and image processing method
CN112188075B (en) * 2019-07-05 2023-04-18 杭州海康威视数字技术股份有限公司 Snapshot, image processing device and image processing method
CN110379118A (en) * 2019-07-26 2019-10-25 中车青岛四方车辆研究所有限公司 Fire prevention intelligent monitor system and method under train vehicle
CN112580400B (en) * 2019-09-29 2022-08-05 荣耀终端有限公司 Image optimization method and electronic equipment
CN112580400A (en) * 2019-09-29 2021-03-30 华为技术有限公司 Image optimization method and electronic equipment
WO2021057752A1 (en) * 2019-09-29 2021-04-01 华为技术有限公司 Image preferential selection method and electronic device
CN110990607A (en) * 2019-11-25 2020-04-10 成都市喜爱科技有限公司 Game photo screening method, device, server and computer-readable storage medium
US11490006B2 (en) 2020-03-30 2022-11-01 Beijing Xiaomi Mobile Software Co., Ltd. Photographing method and device, mobile terminal and storage medium
CN113472994A (en) * 2020-03-30 2021-10-01 北京小米移动软件有限公司 Photographing method and device, mobile terminal and storage medium
CN115209052B (en) * 2022-07-08 2023-04-18 维沃移动通信(深圳)有限公司 Image screening method and device, electronic equipment and storage medium
CN115209052A (en) * 2022-07-08 2022-10-18 维沃移动通信(深圳)有限公司 Image screening method and device, electronic equipment and storage medium
CN115328357A (en) * 2022-08-15 2022-11-11 北京达佳互联信息技术有限公司 Captured image processing method and device, electronic device and storage medium

Also Published As

Publication number Publication date
WO2015058381A1 (en) 2015-04-30

Similar Documents

Publication Publication Date Title
CN104185981A (en) Method and terminal selecting image from continuous captured image
AU2017261537B2 (en) Automated selection of keeper images from a burst photo captured set
US8432357B2 (en) Tracking object selection apparatus, method, program and circuit
US10235587B2 (en) Method and system for optimizing an image capturing boundary in a proposed image
US20230206685A1 (en) Decreasing lighting-induced false facial recognition
JP4840426B2 (en) Electronic device, blurred image selection method and program
KR102580474B1 (en) Systems and methods for continuous auto focus (caf)
CN106165391B (en) Enhanced image capture
EP2768214A2 (en) Method of tracking object using camera and camera system for object tracking
CN107771391B (en) Method and apparatus for determining exposure time of image frame
US10015374B2 (en) Image capturing apparatus and photo composition method thereof
CN106295638A (en) Certificate image sloped correcting method and device
CN109691080B (en) Image shooting method and device and terminal
EP2915333A1 (en) Depth map generation from a monoscopic image based on combined depth cues
CN111182212B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN108924427A (en) A kind of video camera focus method, device and video camera
CN111935479B (en) Target image determination method and device, computer equipment and storage medium
US9838594B2 (en) Irregular-region based automatic image correction
CN106056117A (en) Image processing method and device for rectangular object
CN113688820A (en) Stroboscopic stripe information identification method and device and electronic equipment
US20180205877A1 (en) Information processing apparatus, information processing method, system, and non-transitory computer-readable storage medium
EP3793186A1 (en) Method and electronic device for capturing regions of interest (roi)
CN105450921A (en) Image-acquiring device and automatic focusing compensation method thereof
WO2022001733A1 (en) Method and device for displaying photographing object, storage medium, and terminal
KR20130123316A (en) Apparatus and method for controlling mobile terminal based on face recognization result

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20171027

Address after: 523808, Guangdong province Shenzhen Songshan high tech Industrial Development Zone New Town Avenue No. 2 South factory building (phase I) project B2 district production workshop -5

Applicant after: HUAWEI terminal (Dongguan) Co., Ltd.

Address before: 518129 Longgang District, Guangdong, Bantian HUAWEI base B District, building 2, building No.

Applicant before: Huawei Device Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20141203