CN108513068A - Image selection method, device, storage medium, and electronic device - Google Patents

Image selection method, device, storage medium, and electronic device

Info

Publication number
CN108513068A
CN108513068A (application CN201810276595.1A; granted publication CN108513068B)
Authority
CN
China
Prior art keywords
image
pending
value
facial image
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810276595.1A
Other languages
Chinese (zh)
Other versions
CN108513068B (en)
Inventor
何新兰 (He Xinlan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810276595.1A
Publication of CN108513068A
Application granted
Publication of CN108513068B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/19: Sensors therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The present application discloses an image selection method, device, storage medium, and electronic device. The method includes: obtaining pending images containing facial images; obtaining the clarity of each facial image in each pending image; obtaining the predetermined part value of each facial image in each pending image, the predetermined part value being a numerical value indicating the size of a predetermined part of the facial image; and selecting, from the pending images, the image for processing according to the clarity and the predetermined part value of each facial image in each pending image. The present embodiment can improve the flexibility with which a terminal selects, from the pending images, an image for processing.

Description

Image selection method, device, storage medium, and electronic device
Technical field
The present application belongs to the field of image technologies, and in particular relates to an image selection method, device, storage medium, and electronic device.
Background technology
Taking photos is a basic function of a terminal. With the continuous progress of hardware such as camera modules and of image processing algorithms, the shooting capability of terminals keeps improving. Users also take photos with their terminals more and more frequently, for example often shooting portrait photos. In the related art, a terminal may capture multiple frames of images and then select, from these frames, an image for processing. However, when selecting the image for processing from the multiple frames, the terminal's flexibility in selecting the image is poor.
Invention content
The embodiments of the present application provide an image selection method, device, storage medium, and electronic device, which can improve the flexibility with which a terminal selects, from pending images, an image for processing.
An embodiment of the present application provides an image selection method, including:
obtaining pending images containing facial images;
obtaining the clarity of each facial image in each pending image;
obtaining the predetermined part value of each facial image in each pending image, the predetermined part value being a numerical value indicating the size of a predetermined part of the facial image;
selecting, from the pending images, the image for processing according to the clarity and the predetermined part value of each facial image in each pending image.
An embodiment of the present application provides an image selection device, including:
a first acquisition module, configured to obtain pending images containing facial images;
a second acquisition module, configured to obtain the clarity of each facial image in each pending image;
a third acquisition module, configured to obtain the predetermined part value of each facial image in each pending image, the predetermined part value being a numerical value indicating the size of a predetermined part of the facial image;
a selection module, configured to select, from the pending images, the image for processing according to the clarity and the predetermined part value of each facial image in each pending image.
An embodiment of the present application provides a storage medium storing a computer program which, when executed on a computer, causes the computer to perform the method provided by the embodiments of the present application.
An embodiment of the present application further provides an electronic device including a memory and a processor, the processor being configured to perform the method provided by the embodiments of the present application by invoking the computer program stored in the memory.
In the present embodiment, the terminal can select the image for processing from the multiple frames of pending images according to both the clarity of each facial image in each frame of pending image and the predetermined part value of each facial image in each frame of pending image. Therefore, the present embodiment can improve the flexibility with which the terminal selects, from the pending images, an image for processing.
Description of the drawings
The technical solutions and advantages of the present invention will become apparent from the following detailed description of the specific embodiments of the present invention, taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic flowchart of the image selection method provided by the embodiments of the present application.
Fig. 2 is another schematic flowchart of the image selection method provided by the embodiments of the present application.
Fig. 3 is a schematic diagram of a scenario of the image selection method provided by the embodiments of the present application.
Fig. 4 is a schematic structural diagram of the image selection device provided by the embodiments of the present application.
Fig. 5 is another schematic structural diagram of the image selection device provided by the embodiments of the present application.
Fig. 6 is a schematic structural diagram of a mobile terminal provided by the embodiments of the present application.
Fig. 7 is a schematic structural diagram of an electronic device provided by the embodiments of the present application.
Specific implementation mode
Referring to the drawings, in which like reference numerals denote like components, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present invention and should not be regarded as limiting other specific embodiments not detailed herein.
It can be understood that the execution subject of the embodiments of the present application may be a terminal device such as a smartphone or a tablet computer.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the image selection method provided by the embodiments of the present application. The flow may include:
In step S101, pending images containing facial images are obtained.
In step S101 of the embodiment of the present application, the terminal may first obtain pending images containing facial images. In one embodiment, these pending images may be multiple frames that the terminal captures continuously and quickly in the same scene.
In step S102, the clarity of each facial image in each pending image is obtained.
For example, after obtaining the multiple frames of pending images containing facial images, the terminal may obtain the clarity of each facial image in each frame of pending image.
In step S103, the predetermined part value of each facial image in each pending image is obtained, the predetermined part value being a numerical value indicating the size of a predetermined part of the facial image.
For example, the terminal may obtain the predetermined part value of each facial image in each frame of pending image, where the predetermined part value is a numerical value indicating the size of a predetermined part of the facial image.
In some embodiments, the predetermined part value may be a numerical value indicating the overall size of the predetermined part. Alternatively, the predetermined part value may be a numerical value indicating the height of the predetermined part in the vertical direction, and so on. It can be understood that the examples given here do not limit the present embodiment.
In one embodiment, the predetermined part may be a facial part such as an eye or a mouth.
In step S104, the image for processing is selected from the pending images according to the clarity and the predetermined part value of each facial image in each pending image.
For example, after obtaining the clarity and the predetermined part value of each facial image in each frame of pending image, the terminal may select, according to these parameter values, the image for processing from the multiple frames of pending images.
It can be understood that, in the present embodiment, the terminal can select the image for processing from the multiple frames of pending images according to both the clarity and the predetermined part value of each facial image in each frame. Therefore, the present embodiment can improve the flexibility with which the terminal selects, from the pending images, an image for processing.
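The flow of steps S101 through S104 can be sketched as follows. This is a minimal illustration that assumes the clarity and predetermined part values are already available per face, and that a frame's overall score is simply the sum over its faces; the refined flow of Fig. 2 uses weighted sums and per-user maxima instead.

```python
def select_image(frames):
    """Pick the frame whose faces score best on clarity plus part value.

    `frames` is a list of frames; each frame is a list of
    (clarity, part_value) tuples, one tuple per detected face.
    Illustrative sketch only; the Fig. 2 embodiment refines this.
    """
    def frame_score(faces):
        # Sum clarity and predetermined-part value over all faces.
        return sum(clarity + part for clarity, part in faces)

    return max(range(len(frames)), key=lambda i: frame_score(frames[i]))

# Example: three frames, two faces each.
frames = [
    [(90, 79), (91, 81)],
    [(96, 83), (97, 88)],
    [(94, 85), (95, 87)],
]
print(select_image(frames))  # 1: the second frame scores highest
```

The index returned identifies the frame to keep for processing.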
Referring to Fig. 2, Fig. 2 is another schematic flowchart of the image selection method provided by the embodiments of the present application. The flow may include:
In step S201, the terminal obtains pending images containing facial images.
For example, the terminal may first obtain multiple frames containing faces that are captured continuously and quickly in the same scene; these frames are the pending images.
In step S202, the terminal obtains the image area of a pending image and the area of a face region, and obtains the ratio of the face region area to the image area.
For example, after obtaining the multiple frames of pending images, the terminal may obtain the image area of one of the frames and the area of the face region of a user in that frame. The terminal may then obtain the ratio of the face region area to the image area.
For example, suppose pending images A, B, C, D, E, and F are group photos of three users, Jia, Yi, and Bing. The terminal may obtain the image area of one of the frames, such as pending image A, and the area of Jia's face region in pending image A; the terminal may then obtain the ratio of the area of Jia's face region to the image area of pending image A.
After obtaining the ratio of the area of Jia's face region to the image area of pending image A, the terminal may detect whether the ratio reaches a preset ratio threshold.
If the ratio does not reach the preset ratio threshold, that is, the area occupied by the face in the pending image is small, the terminal may perform other operations, for example selecting the image for processing from the multiple frames of pending images according to the eye size of the user in the pending images.
If the ratio reaches the preset ratio threshold, that is, the area occupied by the face in the pending image is large, the flow proceeds to step S203.
In step S203, if the area ratio reaches the preset ratio threshold, the terminal obtains the clarity of each facial image in each pending image.
For example, suppose the image area of pending image A is 100 and the area of Jia's face region is 10. The ratio of the face region area of 10 to the image area of 100 is then 10%, and with a preset ratio threshold of 8%, the ratio of the area of Jia's face region to the image area of pending image A exceeds the threshold. The terminal may therefore obtain the clarity of each facial image in each frame of pending image.
For example, since pending images A, B, C, D, E, and F are group photos of Jia, Yi, and Bing, the terminal may obtain the clarity of the facial images of Jia, Yi, and Bing in pending image A, and then obtain their clarity values in pending images B, C, D, E, and F in turn. Refer to Table 1, which lists the clarity values of the facial images of Jia, Yi, and Bing in pending images A through F.
Table 1

        Jia    Yi   Bing
   A     90    91     89
   B     91    92     92
   C     96    97     94
   D     94    95     96
   E     95    93     93
   F     92    93     93
As shown in Table 1, in pending image A the clarity of Jia's facial image is 90, the clarity of Yi's facial image is 91, and the clarity of Bing's facial image is 89.
In pending image B, the clarity of Jia's facial image is 91, that of Yi's is 92, and that of Bing's is 92.
In pending image C, the clarity of Jia's facial image is 96, that of Yi's is 97, and that of Bing's is 94.
In pending image D, the clarity of Jia's facial image is 94, that of Yi's is 95, and that of Bing's is 96.
In pending image E, the clarity of Jia's facial image is 95, that of Yi's is 93, and that of Bing's is 93.
In pending image F, the clarity of Jia's facial image is 92, that of Yi's is 93, and that of Bing's is 93.
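The patent does not fix how clarity is computed. As an illustration only, a simple gradient-energy proxy over a grayscale patch can serve as a stand-in sharpness score: sharper faces have stronger local intensity changes.

```python
def clarity(gray):
    """Rough sharpness proxy: mean squared horizontal/vertical gradient.

    `gray` is a 2-D list of grayscale values. The patent does not
    specify a clarity metric; gradient energy is one common stand-in.
    """
    h, w = len(gray), len(gray[0])
    total, count = 0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal neighbor difference
                total += (gray[y][x + 1] - gray[y][x]) ** 2
                count += 1
            if y + 1 < h:  # vertical neighbor difference
                total += (gray[y + 1][x] - gray[y][x]) ** 2
                count += 1
    return total / count

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]          # strong edges
blurry = [[120, 128, 120], [128, 132, 128], [120, 128, 120]]  # soft patch
print(clarity(sharp) > clarity(blurry))  # True: more edges, higher score
```

Any monotone sharpness measure would serve the same role in the flow above.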
In step S204, the terminal obtains the predetermined part value of each facial image in each pending image, the predetermined part value being a numerical value indicating the size of a predetermined part of the facial image.
For example, after obtaining the clarity of each facial image in each pending image, the terminal may obtain the predetermined part value of each facial image in each pending image, the predetermined part value being a numerical value indicating the size of a predetermined part of the facial image.
For example, if the predetermined part is the eye, the terminal may obtain the eye value of each facial image in each pending image, the eye value being a numerical value indicating the eye size of the facial image.
In one embodiment, the eye value may be a numerical value indicating the overall size of the eye, or a numerical value indicating the height of the eye in the vertical direction, and so on. It can be understood that the examples given here do not limit the present embodiment.
For example, refer to Table 2, which lists the eye values of Jia, Yi, and Bing in pending images A through F.
Table 2

        Jia    Yi   Bing
   A     79    81     83
   B     82    82     84
   C     83    88     84
   D     85    87     85
   E     85    87     85
   F     84    86     86
As shown in Table 2, in pending image A the eye value of Jia's facial image is 79, the eye value of Yi's facial image is 81, and the eye value of Bing's facial image is 83.
In pending image B, the eye value of Jia's facial image is 82, that of Yi's is 82, and that of Bing's is 84.
In pending image C, the eye value of Jia's facial image is 83, that of Yi's is 88, and that of Bing's is 84.
In pending image D, the eye value of Jia's facial image is 85, that of Yi's is 87, and that of Bing's is 85.
In pending image E, the eye value of Jia's facial image is 85, that of Yi's is 87, and that of Bing's is 85.
In pending image F, the eye value of Jia's facial image is 84, that of Yi's is 86, and that of Bing's is 86.
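One of the eye-value variants mentioned above, the height of the eye in the vertical direction, can be sketched from eye landmark points. The landmark layout and the normalization factor below are assumptions for illustration, not fixed by the patent.

```python
def eye_value(eye_landmarks, scale=100.0):
    """Eye value as vertical eye opening (one variant the text allows).

    `eye_landmarks` is a list of (x, y) points around one eye; the
    landmark layout and the `scale` normalization are hypothetical.
    """
    ys = [y for _, y in eye_landmarks]
    opening = max(ys) - min(ys)      # vertical extent of the eye
    return opening * scale / 10.0    # assumed normalization

open_eye = [(10, 40), (14, 36), (18, 40), (14, 44)]    # wide-open eye
narrow_eye = [(10, 40), (14, 39), (18, 40), (14, 41)]  # nearly closed eye
print(eye_value(open_eye) > eye_value(narrow_eye))  # True
```

A wider opening yields a larger eye value, which the later weighting rewards.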
In step S205, the terminal obtains a first weight and a second weight.
In step S206, for each pending image, the terminal weights the clarity of each facial image by the first weight to obtain the weighted face clarity of each facial image, and weights the predetermined part value of each facial image by the second weight to obtain the weighted predetermined part value of each facial image.
For example, steps S205 and S206 may include the following.
After obtaining the clarity and the eye value of each facial image in each pending image, the terminal may obtain the first weight and the second weight.
Then, for each pending image, the terminal may weight the clarity of each face by the first weight to obtain the weighted face clarity of each facial image.
For example, the first weight is 40% and the second weight is 60%. It can be understood that, in practical applications, the values of the first weight and the second weight can be adjusted as needed; the examples given here do not limit the present embodiment.
For example, in pending image A, Jia's weighted clarity is 36, Yi's weighted clarity is 36.4, and Bing's weighted clarity is 35.6.
In pending image B, Jia's weighted clarity is 36.4, Yi's is 36.8, and Bing's is 36.8.
In pending image C, Jia's weighted clarity is 38.4, Yi's is 38.8, and Bing's is 37.6.
In pending image D, Jia's weighted clarity is 37.6, Yi's is 38, and Bing's is 38.4.
In pending image E, Jia's weighted clarity is 38, Yi's is 37.2, and Bing's is 37.2.
In pending image F, Jia's weighted clarity is 36.8, Yi's is 37.2, and Bing's is 37.2.
Then, the terminal may weight the eye value of each facial image by the second weight to obtain the weighted eye value of each facial image.
In pending image A, Jia's weighted eye value is 47.4, Yi's is 48.6, and Bing's is 49.8.
In pending image B, Jia's weighted eye value is 49.2, Yi's is 49.2, and Bing's is 50.4.
In pending image C, Jia's weighted eye value is 49.8, Yi's is 52.8, and Bing's is 50.4.
In pending image D, Jia's weighted eye value is 51, Yi's is 52.2, and Bing's is 51.
In pending image E, Jia's weighted eye value is 51, Yi's is 52.2, and Bing's is 51.
In pending image F, Jia's weighted eye value is 50.4, Yi's is 51.6, and Bing's is 51.6.
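The weighting of steps S205 and S206 applied to the values of Tables 1 and 2, with the example's first weight of 40% and second weight of 60%, can be reproduced directly:

```python
clarity = {  # Table 1: clarity per user in pending images A..F
    "A": {"Jia": 90, "Yi": 91, "Bing": 89},
    "B": {"Jia": 91, "Yi": 92, "Bing": 92},
    "C": {"Jia": 96, "Yi": 97, "Bing": 94},
    "D": {"Jia": 94, "Yi": 95, "Bing": 96},
    "E": {"Jia": 95, "Yi": 93, "Bing": 93},
    "F": {"Jia": 92, "Yi": 93, "Bing": 93},
}
eye = {  # Table 2: eye value per user in pending images A..F
    "A": {"Jia": 79, "Yi": 81, "Bing": 83},
    "B": {"Jia": 82, "Yi": 82, "Bing": 84},
    "C": {"Jia": 83, "Yi": 88, "Bing": 84},
    "D": {"Jia": 85, "Yi": 87, "Bing": 85},
    "E": {"Jia": 85, "Yi": 87, "Bing": 85},
    "F": {"Jia": 84, "Yi": 86, "Bing": 86},
}
W1, W2 = 0.4, 0.6  # first and second weights from the example

weighted_clarity = {img: {u: v * W1 for u, v in faces.items()}
                    for img, faces in clarity.items()}
weighted_eye = {img: {u: v * W2 for u, v in faces.items()}
                for img, faces in eye.items()}

# Jia in pending image A: 90 * 0.4 = 36.0 and 79 * 0.6 = 47.4
print(round(weighted_clarity["A"]["Jia"], 1),
      round(weighted_eye["A"]["Jia"], 1))  # 36.0 47.4
```

The printed pair matches the worked numbers in the paragraphs above.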
In step S207, for each pending image, the terminal obtains the target value of each facial image, the target value being the sum of the weighted face clarity and the weighted predetermined part value.
In step S208, the terminal determines the facial image corresponding to the maximum among a user's target values as that user's target facial image.
For example, steps S207 and S208 may include the following.
After obtaining the weighted face clarity and the weighted eye value of each facial image in each pending image, the terminal may obtain the target value of each facial image in each pending image, the target value being the sum of the weighted face clarity and the weighted eye value.
For example, in pending image A, Jia's target value is 36 + 47.4 = 83.4, Yi's target value is 36.4 + 48.6 = 85, and Bing's target value is 35.6 + 49.8 = 85.4.
In pending image B, Jia's target value is 36.4 + 49.2 = 85.6, Yi's is 36.8 + 49.2 = 86, and Bing's is 36.8 + 50.4 = 87.2.
In pending image C, Jia's target value is 38.4 + 49.8 = 88.2, Yi's is 38.8 + 52.8 = 91.6, and Bing's is 37.6 + 50.4 = 88.
In pending image D, Jia's target value is 37.6 + 51 = 88.6, Yi's is 38 + 52.2 = 90.2, and Bing's is 38.4 + 51 = 89.4.
In pending image E, Jia's target value is 38 + 51 = 89, Yi's is 37.2 + 52.2 = 89.4, and Bing's is 37.2 + 51 = 88.2.
In pending image F, Jia's target value is 36.8 + 50.4 = 87.2, Yi's is 37.2 + 51.6 = 88.8, and Bing's is 37.2 + 51.6 = 88.8.
After obtaining the target value of each facial image in each pending image, the terminal may determine the facial image corresponding to the maximum among each user's target values as that user's target facial image.
For example, referring to Table 3, for user Jia the maximum among the target values of his facial images is 89, which is the target value of Jia's facial image in pending image E; the terminal may therefore determine Jia's facial image in pending image E as Jia's target facial image.
Similarly, for user Yi the maximum among the target values of his facial images is 91.6, which is the target value of Yi's facial image in pending image C; the terminal may therefore determine Yi's facial image in pending image C as Yi's target facial image.
For user Bing, the maximum among the target values of his facial images is 89.4, which is the target value of Bing's facial image in pending image D; the terminal may therefore determine Bing's facial image in pending image D as Bing's target facial image.
Table 3

        Jia     Yi   Bing
   A   83.4     85   85.4
   B   85.6     86   87.2
   C   88.2   91.6     88
   D   88.6   90.2   89.4
   E     89   89.4   88.2
   F   87.2   88.8   88.8
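Steps S207 and S208 on the numbers of Tables 1 and 2 can be reproduced as follows: sum the weighted values into a target value per face (the entries of Table 3), then take the per-user maximum.

```python
# Tables 1 (clarity) and 2 (eye value) for users Jia, Yi, Bing in
# pending images A..F, the 40%/60% weighting of steps S205-S206,
# and the per-user maximum of steps S207-S208.
users = ["Jia", "Yi", "Bing"]
images = ["A", "B", "C", "D", "E", "F"]
clarity = [(90, 91, 89), (91, 92, 92), (96, 97, 94),
           (94, 95, 96), (95, 93, 93), (92, 93, 93)]
eye = [(79, 81, 83), (82, 82, 84), (83, 88, 84),
       (85, 87, 85), (85, 87, 85), (84, 86, 86)]
W1, W2 = 0.4, 0.6

target = {u: {} for u in users}
for img, c_row, e_row in zip(images, clarity, eye):
    for u, c, e in zip(users, c_row, e_row):
        target[u][img] = c * W1 + e * W2  # one Table 3 entry

# Image holding each user's maximal target value = target facial image.
best = {u: max(target[u], key=target[u].get) for u in users}
print(best)  # {'Jia': 'E', 'Yi': 'C', 'Bing': 'D'}
```

The result matches the text: Jia's target face is in E, Yi's in C, Bing's in D.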
In step S209, the terminal selects the pending images containing users' target facial images as alternative images.
In step S210, the terminal selects the pending image containing the largest number of target facial images as the base image.
For example, steps S209 and S210 may include the following.
After determining each user's target facial image, the terminal may select the pending images containing the users' target facial images as alternative images.
For example, since pending image C contains Yi's target facial image, pending image D contains Bing's target facial image, and pending image E contains Jia's target facial image, the terminal may select pending images C, D, and E as alternative images.
After selecting the alternative images from the pending images, the terminal may select the pending image containing the largest number of target facial images as the base image.
For example, since each of the alternative images C, D, and E contains exactly one target facial image, the terminal may select any one of C, D, and E as the base image.
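Steps S209 and S210 can be sketched as: keep the frames that contain at least one target facial image, then pick as base the one holding the most. In the example all three tie, and like the text the sketch then accepts any of them (here the first in sorted order).

```python
from collections import Counter

def choose_base(target_faces):
    """`target_faces` maps each user to the image holding their target
    facial image (the result of step S208).
    Returns (alternative_images, base_image)."""
    counts = Counter(target_faces.values())  # target faces per image
    alternatives = sorted(counts)            # images with >= 1 target face
    # Most target faces wins; ties fall to the first image in order.
    base = max(alternatives, key=lambda img: counts[img])
    return alternatives, base

alts, base = choose_base({"Jia": "E", "Yi": "C", "Bing": "D"})
print(alts)  # ['C', 'D', 'E']
print(base)  # 'C': a tie, so the first alternative is taken
```

With unequal counts, the image containing the most target faces would be returned instead.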
In one embodiment, the present embodiment may further include the following steps:
determining facial images to be replaced from the base image, a facial image to be replaced being a non-target facial image;
obtaining, from the alternative images, the target facial image used to replace each facial image to be replaced, each facial image to be replaced and its corresponding target facial image being facial images of the same user;
in the base image, replacing each facial image to be replaced with the corresponding target facial image, to obtain a base image that has undergone image replacement processing.
For example, suppose the terminal selects alternative image C as the base image. The terminal may then determine the facial images to be replaced from base image C, where a facial image to be replaced is a facial image that is not the user's target facial image.
For example, in base image C, Jia's facial image is not Jia's target facial image, and Bing's facial image is not Bing's target facial image either; the terminal may therefore determine Jia's and Bing's facial images in base image C as facial images to be replaced.
Then, the terminal may obtain, from the alternative images other than the base image, the target facial image used to replace each facial image to be replaced, where each facial image to be replaced and its corresponding target facial image are facial images of the same user.
For example, the terminal may obtain Bing's target facial image from alternative image D, and Jia's target facial image from alternative image E.
Then, the terminal may replace Bing's facial image to be replaced in base image C with Bing's target facial image from alternative image D, and replace Jia's facial image to be replaced in base image C with Jia's target facial image from alternative image E, thereby obtaining a base image C that has undergone image replacement processing.
It can be understood that, in the base image C that has undergone image replacement processing, the facial images of all three users Jia, Yi, and Bing are their respective target facial images, that is, for each user the facial image with relatively large eyes and relatively high clarity.
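The replacement step can be illustrated as copying a donor face region into the base frame. This is a toy sketch on 2-D pixel arrays that assumes the face regions are axis-aligned boxes already aligned across frames; a real implementation would also handle alignment and blending.

```python
def replace_region(base, donor, box):
    """Copy the pixels of `donor` inside `box` = (top, left, bottom,
    right) into `base`. Both images are 2-D lists of equal size; this
    stands in for replacing a non-target face with the target face.
    """
    top, left, bottom, right = box
    for y in range(top, bottom):
        for x in range(left, right):
            base[y][x] = donor[y][x]
    return base

base_c = [[0] * 4 for _ in range(4)]  # base image C (all zeros)
alt_d = [[7] * 4 for _ in range(4)]   # alternative image D (all sevens)
replace_region(base_c, alt_d, (1, 1, 3, 3))
print(base_c)  # [[0, 0, 0, 0], [0, 7, 7, 0], [0, 7, 7, 0], [0, 0, 0, 0]]
```

Only the boxed region changes; the rest of the base frame is untouched.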
In one embodiment, after the step of obtaining the base image that has undergone image replacement processing, the present embodiment may further include the following step:
performing image noise reduction processing on the base image that has undergone image replacement processing according to the pending images.
For example, after obtaining the base image C that has undergone image replacement processing, the terminal may perform image noise reduction processing on it according to the other pending images.
For example, the terminal may obtain the 4 consecutively captured frames including image C and perform multi-frame noise reduction on the base image C that has undergone image replacement processing. For instance, the terminal may obtain images D, E, and F, and perform multi-frame noise reduction on the processed base image C according to images D, E, and F.
In one embodiment, when performing multi-frame noise reduction, the terminal may first align images C, D, E, and F and obtain the pixel values of each group of aligned pixels. If the pixel values within a group of aligned pixels do not differ much, the terminal may compute the mean of the group's pixel values and replace the pixel value of the corresponding pixel in image C with that mean. If the pixel values within a group differ greatly, the pixel value in image C may be left unadjusted.
For example, suppose pixel P1 in image C, pixel P2 in image D, pixel P3 in image E, and pixel P4 in image F form a group of mutually aligned pixels, with pixel values 101, 102, 103, and 104 respectively. The mean pixel value of this group is then 102.5, so the terminal may adjust the pixel value of P1 in image C from 101 to 102.5, thereby performing noise reduction on pixel P1 in image C. If instead the pixel value of P1 is 103, that of P2 is 83, that of P3 is 90, and that of P4 is 80, then, because the pixel values differ greatly, the terminal may leave the pixel value of P1 unadjusted, that is, P1 keeps its value of 103.
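The alignment-and-average rule above can be sketched per group of aligned pixels. The agreement threshold is an assumed value, since the text only says the values should not differ much.

```python
def denoise_pixel(aligned_values, max_spread=10):
    """`aligned_values[0]` is the base image's pixel value; the rest
    come from the aligned frames. If all values lie within
    `max_spread` of each other, return their mean; otherwise keep the
    base pixel. `max_spread` is an assumed threshold.
    """
    if max(aligned_values) - min(aligned_values) <= max_spread:
        return sum(aligned_values) / len(aligned_values)
    return aligned_values[0]

print(denoise_pixel([101, 102, 103, 104]))  # 102.5: values agree, average
print(denoise_pixel([103, 83, 90, 80]))     # 103: values diverge, keep base
```

Applying this to every aligned pixel group yields the denoised base image.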
In another embodiment, when performing in step S202 the steps of obtaining the image area of the pending image and the area of the face region, and obtaining the ratio of the face region area to the image area, the terminal may also proceed as follows. For example, if pending image A is a group photo of the three users Jia, Yi, and Bing, the terminal may obtain the area M1 of Jia's face region, the area M2 of Yi's face region, and the area M3 of Bing's face region in pending image A, and then compute the average face region area of the three users, that is, (M1 + M2 + M3) / 3. The terminal may then compute the ratio of this average face region area to the image area of pending image A, and detect whether the ratio reaches the preset ratio threshold. If it does not, the terminal may perform other operations. If it does, the terminal may trigger obtaining the clarity of each facial image in each pending image.
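The averaged-area variant of step S202 described above can be sketched as a small threshold test, using the 8% threshold of the running example:

```python
def passes_area_check(face_areas, image_area, threshold=0.08):
    """Average the face-region areas, e.g. (M1 + M2 + M3) / 3, and
    compare the average's share of the image area with the preset
    ratio threshold (8% in the running example). Returns True when
    the flow should proceed to obtaining clarity values.
    """
    mean_area = sum(face_areas) / len(face_areas)
    return mean_area / image_area >= threshold

print(passes_area_check([10, 12, 14], 100))  # True: mean 12 gives 12%
print(passes_area_check([2, 3, 4], 100))     # False: mean 3 gives 3%
```

A failing check routes the flow to the other operations mentioned in the text instead.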
Referring to Fig. 3, Fig. 3 is a schematic diagram of a scenario of the image selection method provided by an embodiment of the present application.
In this embodiment, after the camera preview interface is entered, if the terminal detects that it is capturing face images, it may collect the current environmental parameter and, based on at least two captured frames of face images, determine a target frame count. The environmental parameter may be the ambient light brightness.
If, from the at least two captured frames, the terminal determines that the faces in the images are not displaced (or are displaced only slightly) and the terminal is currently in a bright environment, it may set the target frame count to 8. If the faces are not displaced (or only slightly) but the terminal is in a dim environment, it may set the target frame count to 6. If the terminal determines from the captured frames that the faces in the images are displaced, it may set the target frame count to 4.
In one embodiment, whether the faces in the images are displaced can be detected as follows. After obtaining four captured frames, the terminal may construct a coordinate system and place each frame into it in the same way. The terminal may then obtain the coordinates, in this coordinate system, of the feature points of the face images in each frame. After obtaining these feature-point coordinates, the terminal may compare whether the coordinates of the same facial feature point are identical across the different images. If they are identical, the face images may be considered not displaced. If they differ, the face images may be considered displaced. If a displacement is detected, the terminal may obtain the specific displacement value. If that value falls within a preset value range, the displacement of the face images may be considered small; if it falls outside the range, the displacement may be considered large.
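One way to sketch this feature-point comparison (assuming each frame is represented as a mapping from feature-point id to its (x, y) coordinate; the small/large boundary of 5.0 stands in for the unspecified preset value range):

```python
import math

def classify_displacement(frames, small_limit=5.0):
    """frames: list of {feature_id: (x, y)} dicts, one per frame.
    Returns 'none', 'small', or 'large'."""
    base = frames[0]
    worst = 0.0
    for frame in frames[1:]:
        for fid, (x, y) in frame.items():
            bx, by = base[fid]
            # Largest distance any feature point moved relative to frame 0.
            worst = max(worst, math.hypot(x - bx, y - by))
    if worst == 0.0:
        return "none"
    return "small" if worst <= small_limit else "large"

f0 = {"left_eye": (40, 50), "right_eye": (60, 50)}
f1 = {"left_eye": (41, 50), "right_eye": (61, 50)}
print(classify_displacement([f0, f0, f0, f0]))  # -> none
print(classify_displacement([f0, f1, f0, f1]))  # -> small
```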
In this embodiment, the terminal may save captured images into a buffer queue. The buffer queue may be a fixed-length queue; for example, it may hold the 10 most recently captured frames.
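Such a fixed-length buffer drops the oldest frame automatically as new frames arrive; in Python it can be sketched with `collections.deque` (the frame labels are illustrative):

```python
from collections import deque

buffer_queue = deque(maxlen=10)  # holds the 10 most recent frames
for i in range(15):
    buffer_queue.append(f"frame_{i}")

# The five oldest frames have been evicted automatically.
print(list(buffer_queue)[0])  # -> frame_5
print(len(buffer_queue))      # -> 10
```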
For example, four users, Jia, Yi, Bing, and Ding, travel outdoors and prepare to take photos at a scenic spot, with Ding photographing a group photo of Jia, Yi, and Bing. After the camera preview interface is entered, the terminal detects that across 4 captured frames the positions of the face images of Jia, Yi, and Bing in the picture are not displaced, and that the terminal is currently in a bright environment. Based on this, the terminal determines that the target frame count is 8. Before Ding presses the shutter button, the terminal camera may continuously and rapidly capture images and save them into the buffer queue.
Thereafter, once Ding presses the shutter button, the terminal may obtain from the buffer queue the 8 most recently captured frames of Jia, Yi, and Bing. For example, in chronological order, these 8 frames are A, B, C, D, E, F, G, and H. It can be understood that these 8 frames are the pending images.
After obtaining these 8 pending images, the terminal may obtain the image sharpness of each frame and then remove the noticeably blurry images from the pending images. For example, the sharpness values of pending images A, B, C, D, E, F, G, and H are 90, 91, 93, 96, 95, 94, 80, and 79, respectively. Since pending images G and H are noticeably less sharp, the terminal may delete them, i.e., the terminal selects images for processing only from pending images A, B, C, D, E, and F.
In one embodiment, after obtaining the pending images, the terminal may obtain the image sharpness of each frame and then obtain the maximum sharpness value. For example, image sharpness ranges from 0 to 100, and the maximum sharpness among the pending images is 96. The terminal may then obtain a preset percentage, for example 90%, and compute the product of the maximum sharpness and the preset percentage, e.g., 96 * 90% = 86.4. Then, if a pending image has a sharpness below 86.4, it may be considered unclear, and the terminal may exclude it from the pending images.
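The filtering rule of this embodiment — keep only images whose sharpness reaches a preset percentage of the maximum — can be sketched as follows (scores are those of the running example; the function name is an assumption):

```python
def keep_sharp_images(sharpness, percentage=0.90):
    """sharpness: mapping image name -> sharpness score (0-100).
    Returns the names whose score reaches percentage * max score."""
    threshold = max(sharpness.values()) * percentage  # 96 * 0.90 = 86.4
    return [name for name, s in sharpness.items() if s >= threshold]

scores = {"A": 90, "B": 91, "C": 93, "D": 96,
          "E": 95, "F": 94, "G": 80, "H": 79}
print(keep_sharp_images(scores))  # -> ['A', 'B', 'C', 'D', 'E', 'F']
```

With the example scores, G (80) and H (79) fall below the 86.4 threshold and are excluded.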
Afterwards, the terminal may obtain, from pending images A, B, C, D, E, and F, the image area of any one frame and the average area of the face regions of all users in that pending image.
For example, the terminal obtains an image area of 100 for pending image A, and the average face-region area of the three users Jia, Yi, and Bing in that image is 12. The terminal thus obtains that the average face-region area 12 accounts for 12% of the image area 100, which exceeds the preset ratio threshold of 8%.
In this case, the terminal may further obtain the sharpness of the face region in each pending image. For example, the face region may be the region containing the face images of all users. Suppose the face sharpness values of the face regions of pending images A, B, C, D, E, and F are 92, 92, 94, 94, 94, and 93, respectively.
The terminal may then obtain a third weight and a fourth weight, weight the image sharpness of each pending image by the third weight to obtain each image's weighted image sharpness, and weight the face sharpness of each pending image by the fourth weight to obtain each image's weighted face sharpness.
For example, the third weight is 40% and the fourth weight is 60%. Then the weighted image sharpness of pending image A is 90 * 40% = 36 and its weighted face sharpness is 92 * 60% = 55.2. For pending image B, the weighted image sharpness is 91 * 40% = 36.4 and the weighted face sharpness is 92 * 60% = 55.2. For pending image C, they are 93 * 40% = 37.2 and 94 * 60% = 56.4. For pending image D, 96 * 40% = 38.4 and 94 * 60% = 56.4. For pending image E, 95 * 40% = 38 and 94 * 60% = 56.4. For pending image F, 94 * 40% = 37.6 and 93 * 60% = 55.8.
The terminal may then obtain the overall sharpness of each pending image, which is the sum of its weighted image sharpness and weighted face sharpness. For example, the overall sharpness of pending image A is 36 + 55.2 = 91.2, that of B is 36.4 + 55.2 = 91.6, that of C is 37.2 + 56.4 = 93.6, that of D is 38.4 + 56.4 = 94.8, that of E is 38 + 56.4 = 94.4, and that of F is 37.6 + 55.8 = 93.4.
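The weighted overall-sharpness computation can be reproduced directly (the 40%/60% weights and the scores are those of the example; the names are assumptions):

```python
def overall_sharpness(image_s, face_s, w_image=0.40, w_face=0.60):
    # Weighted sum of image sharpness and face-region sharpness.
    return image_s * w_image + face_s * w_face

image_scores = {"A": 90, "B": 91, "C": 93, "D": 96, "E": 95, "F": 94}
face_scores  = {"A": 92, "B": 92, "C": 94, "D": 94, "E": 94, "F": 93}
overall = {k: overall_sharpness(image_scores[k], face_scores[k])
           for k in image_scores}
print(round(overall["A"], 2))  # -> 91.2
print(round(overall["D"], 2))  # -> 94.8
```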
The terminal may then obtain the maximum overall sharpness. For example, the maximum overall sharpness of the pending images is 94.8. The terminal may then obtain a first ratio, for example 95%, and compute the product of the maximum overall sharpness 94.8 and the first ratio 95%, i.e., 94.8 * 95% = 90.06.
The terminal may then obtain, from the pending images, those whose overall sharpness reaches 90.06. Since the overall sharpness of every pending image exceeds 90.06, the terminal obtains pending images A, B, C, D, E, and F. These pending images may all be considered clear images.
The terminal may then obtain the eye value of each face in each pending image. For example, the eye values of Jia, Yi, and Bing in pending images A, B, C, D, E, and F are as shown in Table 2.
The terminal may obtain the maximum eye value of Jia, the maximum eye value of Yi, and the maximum eye value of Bing. For example, Jia's maximum eye value is 85, Yi's is 88, and Bing's is 86.
The terminal may then obtain a second ratio, for example 95%, and compute the product of each user's maximum eye value and the second ratio. That is, for Jia, the product of the maximum eye value and the second ratio is 85 * 95% = 80.75. For Yi, it is 88 * 95% = 83.6. For Bing, it is 86 * 95% = 81.7.
The terminal may then choose a base image from the pending images. For example, the terminal may first detect whether there is an image satisfying the following conditions: the overall sharpness of the image exceeds 90.06, and the eye value of each face in the image exceeds the product of that user's maximum eye value and the second ratio, i.e., Jia's eye value in the image exceeds 80.75, Yi's exceeds 83.6, and Bing's exceeds 81.7.
Upon detection, pending images C, D, E, and F all satisfy the above conditions, so the terminal may choose any one of pending images C, D, E, and F as the base image. For example, the terminal chooses pending image C as the base image.
The terminal may then detect whether the base image contains the face image of a target user, where the eye value of the target user's face image is less than the product of that user's maximum eye value and the second ratio. If the face image of such a target user exists, it is determined to be a face image to be replaced; the terminal obtains, from the other pending images, a target face image in which that user's eye value is greater than the product of the user's maximum eye value and the second ratio, and uses the target face image to replace the face image to be replaced in the base image.
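The eye-value check on the chosen base image can be sketched like this (the per-user maxima are those of the example; since Table 2 is not reproduced here, the base-image eye values shown are illustrative assumptions, and the function name is likewise an assumption):

```python
def faces_to_replace(base_eyes, max_eyes, second_ratio=0.95):
    """base_eyes: user -> eye value in the base image;
    max_eyes:  user -> that user's maximum eye value across all frames.
    Returns the users whose faces in the base image should be replaced."""
    return [u for u, v in base_eyes.items()
            if v < max_eyes[u] * second_ratio]

max_eyes = {"Jia": 85, "Yi": 88, "Bing": 86}
# Assume base image C holds each user's maximum: nothing to replace.
print(faces_to_replace({"Jia": 85, "Yi": 88, "Bing": 86}, max_eyes))  # -> []
# Had Jia's eyes been nearly closed (70 < 80.75), Jia's face would be
# flagged for replacement from another pending image:
print(faces_to_replace({"Jia": 70, "Yi": 88, "Bing": 86}, max_eyes))  # -> ['Jia']
```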
For example, in this embodiment, since the eye values of Jia, Yi, and Bing in base image C all exceed the products of their respective maximum eye values and the second ratio, there is no face image to be replaced in base image C. In this case, the terminal may store base image C in the photo album as the final photo.
It can be understood that the photo obtained in this embodiment is a clear picture in which the eyes of all three users, Jia, Yi, and Bing, are wide open.
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of the image selection device provided by an embodiment of the present application. The image selection device 300 may include a first acquisition module 301, a second acquisition module 302, a third acquisition module 303, and a selection module 304.
The first acquisition module 301 is configured to obtain pending images containing face images.
The second acquisition module 302 is configured to obtain the sharpness of each face image in each pending image.
The third acquisition module 303 is configured to obtain the preset-part value of each face image in each pending image, where the preset-part value is a numerical value indicating the size of a preset part of the face image.
The selection module 304 is configured to choose an image for processing from the pending images according to the sharpness and the preset-part value of each face image in each pending image.
In one embodiment, the second acquisition module 302 is further configured to:
obtain the image area of a pending image and the area of its face region;
obtain the ratio of the area of the face region to the image area; and
if the area ratio reaches a preset ratio threshold, obtain the sharpness of each face image in each pending image.
In one embodiment, the selection module 304 is configured to:
obtain a first weight and a second weight;
in each pending image, weight the sharpness of each face image by the first weight to obtain each face image's weighted face sharpness, and weight the preset-part value of each face image by the second weight to obtain each face image's weighted preset-part value;
in each pending image, obtain the target value of each face image, the target value being the sum of the weighted face sharpness and the weighted preset-part value;
determine, for each user, the face image corresponding to the maximum of that user's target values as the user's target face image; and
choose the pending images containing a user's target face image as alternative images.
In one embodiment, the selection module 304 is configured to:
choose the pending image containing the largest number of target face images as the base image.
Referring to Fig. 5, Fig. 5 is another structural schematic diagram of the image selection device provided by an embodiment of the present application. In one embodiment, the image selection device 300 may further include a replacement module 305 and a processing module 306.
The replacement module 305 is configured to: determine face images to be replaced from the base image, where a face image to be replaced is a non-target face image; obtain from the alternative images the target face images for replacing each face image to be replaced, where each face image to be replaced and its corresponding target face image are face images of the same user; and, in the base image, replace each corresponding face image to be replaced with the target face image, obtaining a base image that has undergone image replacement processing.
The processing module 306 is configured to perform image noise reduction processing on the base image that has undergone image replacement processing, according to the pending images.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed on a computer, the computer is caused to execute the steps of the image selection method provided in this embodiment.
An embodiment of the present application also provides an electronic device including a memory and a processor, where the processor, by invoking the computer program stored in the memory, executes the steps of the image selection method provided in this embodiment.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smartphone. Referring to Fig. 6, Fig. 6 is a structural schematic diagram of the mobile terminal provided by an embodiment of the present application.
The mobile terminal 400 may include components such as a camera module 401, a memory 402, and a processor 403. Those skilled in the art will appreciate that the mobile terminal structure shown in Fig. 6 does not limit the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
The camera module 401 may include a single camera module or a dual camera module.
The memory 402 may be used to store application programs and data. The application programs stored in the memory 402 contain executable code and may form various functional modules. The processor 403 runs the application programs stored in the memory 402 to perform various functional applications and data processing.
The processor 403 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal through various interfaces, and performs the various functions and data processing of the mobile terminal by running or executing the application programs stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the mobile terminal as a whole.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402, thereby implementing the steps:
obtaining pending images containing face images; obtaining the sharpness of each face image in each pending image; obtaining the preset-part value of each face image in each pending image, where the preset-part value is a numerical value indicating the size of a preset part of a face image; and choosing an image for processing from the pending images according to the sharpness and the preset-part value of each face image in each pending image.
An embodiment of the present invention also provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 7 is a structural schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 7, for ease of illustration, only the aspects of the image processing technology relevant to the embodiment of the present invention are shown.
As shown in Fig. 7, the image processing circuit includes an image signal processor 540 and a control logic device 550. Image data captured by the imaging device 510 is first processed by the image signal processor 540, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 510. The imaging device 510 may include a camera with one or more lenses 511 and an image sensor 512. The image sensor 512 may include a color filter array (such as a Bayer filter); the image sensor 512 may obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data to be processed by the image signal processor 540. The sensor 520 may supply the raw image data to the image signal processor 540 based on the sensor 520 interface type. The sensor 520 interface may use an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
The image signal processor 540 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The image signal processor 540 may perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations may be performed at the same or different bit-depth precisions.
The image signal processor 540 may also receive pixel data from the image memory 530. For example, raw pixel data may be sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the image signal processor 540 for processing. The image memory 530 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the image signal processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 530 for additional processing before being displayed. The image signal processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 570 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the image signal processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, the image memory 530 may be configured to implement one or more frame buffers. Furthermore, the output of the image signal processor 540 may be sent to the encoder/decoder 560 to encode/decode the image data. The encoded image data may be saved and decompressed before being shown on the display 570. The encoder/decoder 560 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the image signal processor 540 may be sent to the control logic device 550. For example, the statistical data may include image sensor 512 statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 511 shading correction. The control logic device 550 may include a processor and/or microcontroller executing one or more routines (such as firmware), which can determine the control parameters of the imaging device 510 and the ISP control parameters according to the received statistical data. For example, the control parameters may include sensor 520 control parameters (such as gain and the integration time of exposure control), camera flash control parameters, lens 511 control parameters (such as the focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 511 shading correction parameters.
The following are the steps of implementing the image processing method provided in this embodiment with the image processing technology of Fig. 7:
obtaining pending images containing face images; obtaining the sharpness of each face image in each pending image; obtaining the preset-part value of each face image in each pending image, where the preset-part value is a numerical value indicating the size of a preset part of a face image; and choosing an image for processing from the pending images according to the sharpness and the preset-part value of each face image in each pending image.
In one embodiment, after the step of obtaining pending images containing face images, the electronic device may also execute: obtaining the image area of a pending image and the area of its face region; and obtaining the ratio of the area of the face region to the image area.
Then, when executing the step of obtaining the sharpness of each face image in each pending image, the electronic device may execute: if the area ratio reaches a preset ratio threshold, obtaining the sharpness of each face image in each pending image.
In one embodiment, when executing the step of choosing an image for processing from the pending images according to the sharpness and the preset-part value of each face image in each pending image, the electronic device may execute: obtaining a first weight and a second weight; in each pending image, weighting the sharpness of each face image by the first weight to obtain each face image's weighted face sharpness, and weighting the preset-part value of each face image by the second weight to obtain each face image's weighted preset-part value; in each pending image, obtaining the target value of each face image, the target value being the sum of the weighted face sharpness and the weighted preset-part value; determining, for each user, the face image corresponding to the maximum of that user's target values as the user's target face image; and choosing the pending images containing a user's target face image as alternative images.
In one embodiment, after the step of determining the face image corresponding to the maximum of each user's target values as the user's target face image, the electronic device may also execute: choosing the pending image containing the largest number of target face images as the base image.
In one embodiment, the electronic device may also execute: determining face images to be replaced from the base image, where a face image to be replaced is a non-target face image; obtaining from the alternative images the target face images for replacing each face image to be replaced, where each face image to be replaced and its corresponding target face image are face images of the same user; and, in the base image, replacing each corresponding face image to be replaced with the target face image, obtaining a base image that has undergone image replacement processing.
In one embodiment, after the step of obtaining the base image that has undergone image replacement processing, the electronic device may also execute: performing image noise reduction processing on the base image that has undergone image replacement processing, according to the pending images.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a given embodiment, refer to the detailed description of the image selection method above; details are not repeated here.
The image selection device provided by the embodiments of the present application and the image selection method in the foregoing embodiments belong to the same concept. Any method provided in the image selection method embodiments can be run on the image selection device; for the specific implementation process, refer to the image selection method embodiments, and details are not repeated here.
It should be noted that, for the image selection method of the embodiments of the present application, those of ordinary skill in the art can understand that all or part of the flow of the image selection method of the embodiments of the present application can be completed by a computer program controlling the relevant hardware. The computer program can be stored in a computer-readable storage medium, for example stored in a memory, and executed by at least one processor, and the execution process may include the flow of the embodiments of the image selection method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
For the image selection device of the embodiments of the present application, its functional modules may be integrated in one processing chip, may each exist alone physically, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image selection method, device, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An image selection method, characterized by comprising:
obtaining pending images containing face images;
obtaining the sharpness of each face image in each pending image;
obtaining the preset-part value of each face image in each pending image, the preset-part value being a numerical value indicating the size of a preset part of the face image; and
choosing an image for processing from the pending images according to the sharpness and the preset-part value of each face image in each pending image.
2. The image selection method according to claim 1, characterized in that, after the step of obtaining pending images containing face images, the method further comprises:
obtaining the image area of a pending image and the area of its face region; and
obtaining the ratio of the area of the face region to the image area;
wherein the step of obtaining the sharpness of each face image in each pending image comprises: if the area ratio reaches a preset ratio threshold, obtaining the sharpness of each face image in each pending image.
3. The image selection method according to claim 2, characterized in that the step of choosing an image for processing from the pending images according to the sharpness and the preset-part value of each face image in each pending image comprises:
obtaining a first weight and a second weight;
in each pending image, weighting the sharpness of each face image by the first weight to obtain each face image's weighted face sharpness, and weighting the preset-part value of each face image by the second weight to obtain each face image's weighted preset-part value;
in each pending image, obtaining the target value of each face image, the target value being the sum of the weighted face sharpness and the weighted preset-part value;
determining, for each user, the face image corresponding to the maximum of that user's target values as the user's target face image; and
choosing the pending images containing a user's target face image as alternative images.
4. The image selection method according to claim 3, characterized in that, after the step of determining the face image corresponding to the maximum of each user's target values as the user's target face image, the method further comprises:
choosing the pending image containing the largest number of target face images as the base image.
5. The image selection method according to claim 4, characterized in that the method further comprises:
determining face images to be replaced from the base image, the face images to be replaced being non-target face images;
obtaining from the alternative images target face images for replacing each face image to be replaced, each face image to be replaced and its corresponding target face image being face images of the same user; and
in the base image, replacing each corresponding face image to be replaced with the target face image, obtaining a base image that has undergone image replacement processing.
6. The image selection method according to claim 5, wherein, after the step of obtaining the base image subjected to image replacement processing, the method further comprises:
performing image noise reduction on the base image subjected to image replacement processing according to the pending images.
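Claim 6 does not name the denoising algorithm. One common way to reduce noise "according to the pending images" is a multi-frame average over aligned burst frames, sketched here with flat pixel lists as an assumed data layout:

```python
# Hedged sketch: per-pixel mean of the base image and the other pending
# (aligned) frames; averaging N frames suppresses zero-mean sensor noise.
def multi_frame_denoise(base, pending):
    """base: list of pixel values; pending: list of aligned frames
    (each a list of pixel values). Returns the per-pixel mean."""
    frames = [base] + pending
    n = len(frames)
    return [sum(frame[i] for frame in frames) / n for i in range(len(base))]

base = [100, 110, 120]
pending = [[102, 108, 118], [98, 112, 122]]
denoised = multi_frame_denoise(base, pending)
```

Real implementations would first align the frames and mask the replaced face regions, details the claim leaves open.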
7. An image selection device, comprising:
a first acquisition module, configured to obtain pending images containing facial images;
a second acquisition module, configured to obtain the clarity of each facial image in each pending image;
a third acquisition module, configured to obtain the preset feature value of each facial image in each pending image, the preset feature value being a numerical value indicating the size of a preset feature of a facial image;
a selection module, configured to select an image for processing from the pending images according to the clarity and the preset feature value of each facial image in each pending image.
8. The image selection device according to claim 7, wherein the second acquisition module is further configured to:
obtain the image area of a pending image and the area of its face region;
obtain the ratio of the area of the face region to the image area;
if the area ratio reaches a preset ratio threshold, obtain the clarity of each facial image in each pending image.
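The gating step of claim 8 reduces to a single ratio test before any sharpness computation is run. The threshold value below is an assumption; the claim only requires that some preset threshold exists:

```python
# Hedged sketch: compute face sharpness only when the face region
# occupies at least a preset fraction of the whole image.
def should_score_faces(image_area, face_area, ratio_threshold=0.02):
    """Return True when the face-to-image area ratio reaches the threshold."""
    return (face_area / image_area) >= ratio_threshold

# A 200x300 face in a 1920x1080 frame covers about 2.9% of the image,
# so with the assumed 2% threshold its clarity would be computed.
large_enough = should_score_faces(1920 * 1080, 200 * 300)
```

Skipping tiny faces this way avoids spending sharpness computation on regions too small for the score to be meaningful.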
9. A storage medium having a computer program stored thereon, wherein, when the computer program is executed on a computer, the computer is caused to execute the method according to any one of claims 1 to 6.
10. An electronic device, comprising a memory and a processor, wherein the processor is configured to execute the method according to any one of claims 1 to 6 by invoking a computer program stored in the memory.
CN201810276595.1A 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment Active CN108513068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810276595.1A CN108513068B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN108513068A true CN108513068A (en) 2018-09-07
CN108513068B CN108513068B (en) 2021-03-02

Family

ID=63379345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810276595.1A Active CN108513068B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108513068B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175980A (en) * 2019-04-11 2019-08-27 平安科技(深圳)有限公司 Image definition recognition methods, image definition identification device and terminal device
CN111444770A (en) * 2020-02-26 2020-07-24 北京大米未来科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN111696051A (en) * 2020-05-14 2020-09-22 维沃移动通信有限公司 Portrait restoration method and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008172395A (en) * 2007-01-10 2008-07-24 Sony Corp Imaging apparatus and image processing apparatus, method, and program
CN101617339A (en) * 2007-02-15 2009-12-30 索尼株式会社 Image processing apparatus and image processing method
CN102209196A (en) * 2010-03-30 2011-10-05 株式会社尼康 Image processing device and image estimating method
CN102377905A (en) * 2010-08-18 2012-03-14 佳能株式会社 Image pickup apparatus and control method therefor
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN103942525A (en) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face optimal selection method based on video sequence
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal for selecting an image from continuously captured images
CN105303161A (en) * 2015-09-21 2016-02-03 广东欧珀移动通信有限公司 Method and device for photographing multiple people
CN106161962A (en) * 2016-08-29 2016-11-23 广东欧珀移动通信有限公司 Image processing method and terminal
CN106331504A (en) * 2016-09-30 2017-01-11 北京小米移动软件有限公司 Shooting method and device




Similar Documents

Publication Publication Date Title
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN108520493A Image replacement processing method and device, storage medium and electronic device
CN110766621B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113992861B (en) Image processing method and image processing device
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
WO2014099284A1 (en) Determining exposure times using split paxels
CN108419028A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN110728705B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN107563979B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN108024054A (en) Image processing method, device and equipment
CN108401110B (en) Image acquisition method and device, storage medium and electronic equipment
CN110266954A (en) Image processing method, device, storage medium and electronic equipment
CN110198418A (en) Image processing method, device, storage medium and electronic equipment
CN108717530A (en) Image processing method, device, computer readable storage medium and electronic equipment
WO2014093048A1 (en) Determining an image capture payload burst structure
CN108574803A Image selection method and device, storage medium and electronic device
US8995784B2 (en) Structure descriptors for image processing
CN108513068A Image selection method and device, storage medium and electronic equipment
CN110445986A (en) Image processing method, device, storage medium and electronic equipment
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
CN110278375A (en) Image processing method, device, storage medium and electronic equipment
CN108401109A (en) Image acquiring method, device, storage medium and electronic equipment
CN107180417B (en) Photo processing method and device, computer readable storage medium and electronic equipment
CN108520036A Image selection method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant