CN106204435A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN106204435A
CN106204435A (application CN201610479905.0A)
Authority
CN
China
Prior art keywords
image
face
image set
replaced
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610479905.0A
Other languages
Chinese (zh)
Inventor
张涛
万韶华
汪平仄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority: CN201610479905.0A
Publication of CN106204435A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method and device, belonging to the field of terminal technology. The method includes: determining a reference image from a target image set; determining an object to be replaced from the reference image; determining, from images in the target image set other than the reference image, a target object corresponding to the object to be replaced; and replacing the object to be replaced in the reference image with the target object to obtain a target image. By determining from the reference image the faces to be replaced that are occluded or have closed eyes, determining target faces by face recognition from the images in the target image set other than the reference image, and replacing the faces to be replaced with the target faces, the disclosure obtains a target image that ensures, to the greatest extent possible, that every face's expression is in an optimal state, thereby improving the quality of the processed image.

Description

Image processing method and device
Technical field
The present disclosure relates to the field of terminal technology, and in particular to an image processing method and device.
Background
With the development of terminal technology and the popularization of mobile terminals, more and more users choose to shoot images with the camera function provided by a mobile terminal, and to meet users' demands the camera functions of mobile phones and other mobile terminals keep improving. When shooting a group photo of many people, it is often difficult to ensure during shooting that every person's expression is in an optimal state; for example, in a captured group photo there are usually one or two people whose eyes are closed or whose faces are occluded.
In the related art, to avoid this situation, the user often shoots multiple images in succession of the same group of people posing together in the same scene, then manually chooses from these images the one with the fewest closed eyes and occluded faces, and takes it as the final group photo.
With this manual selection method, when the number of people in the group photo is large, it is impossible to guarantee that every person's expression in the chosen image is in an optimal state; that is, the shooting quality of the final group photo cannot be guaranteed.
Summary
To overcome the problems in the related art, the present disclosure provides an image processing method and device.
According to a first aspect of embodiments of the present disclosure, an image processing method is provided, including:
determining a reference image from a target image set, the target image set including multiple images, each image including the same shot subjects and the same shooting background;
determining an object to be replaced from the reference image;
determining, from images in the target image set other than the reference image, a target object corresponding to the object to be replaced;
and replacing the object to be replaced in the reference image with the target object to obtain a target image.
In a first possible implementation of the first aspect of the disclosure, the method further includes:
determining the target image set from a picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds.
In a second possible implementation of the first aspect of the disclosure, determining the target image set from the picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds includes:
determining, according to the shooting times of the images in the picture library, multiple images whose shooting times fall within a preset duration of one another as a candidate image set;
when the number of images in the candidate image set exceeds a first preset number, obtaining the number of shot subjects in each image in the candidate image set;
when the number of shot subjects in each image in the candidate image set exceeds a second preset number, obtaining the similarity of the shooting backgrounds of every two images in the candidate image set;
and determining the target image set from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set exceeding a preset threshold.
In a third possible implementation of the first aspect of the disclosure, determining the reference image from the target image set includes:
when the shot subjects are people, obtaining the face regions in each image in the target image set;
detecting the face regions to determine, for each image in the target image set, the total number of faces that are occluded or have closed eyes;
and determining as the reference image the image in the target image set with the smallest total number of occluded and closed-eye faces.
In a fourth possible implementation of the first aspect of the disclosure, detecting the face regions to determine, for each image in the target image set, the total number of occluded and closed-eye faces includes:
for each face in each image, determining the occlusion-area ratio of the face, the occlusion-area ratio being the ratio of the face's occluded area to the face's display area;
determining a first number for each image, the first number being the number of faces in the image whose occlusion-area ratio exceeds a preset ratio;
determining a second number for each image, the second number being the number of faces in the image with closed eyes;
and taking the sum of the first number and the second number of each image as the total number of occluded and closed-eye faces in the image.
In a fifth possible implementation of the first aspect of the disclosure, replacing the object to be replaced in the reference image with the target object to obtain the target image includes:
when the shot subjects are people, performing image segmentation on the image containing the target object to obtain the face region and body region of the target object;
and replacing the face region and body region of the object to be replaced in the reference image with the face region and body region of the target object.
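As a rough illustration of the replacement step, once the target object's face and body regions have been segmented into a binary mask, the replacement reduces to a masked copy. This is a minimal sketch under stated assumptions: images are 2-D lists of intensities, and the segmentation step itself (whose method the text does not specify) is assumed to have already produced the mask.

```python
def replace_with_mask(base, source, mask):
    """Copy the target object's segmented pixels from `source` into
    `base`; where the mask is 0, the reference image's pixel is kept."""
    return [[s if m else b for b, s, m in zip(brow, srow, mrow)]
            for brow, srow, mrow in zip(base, source, mask)]
```

With a real segmenter, `mask` would cover exactly the face and body regions of the object to be replaced.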
In a sixth possible implementation of the first aspect of the disclosure, after replacing the object to be replaced in the reference image with the target object, the method further includes:
adjusting the brightness of the target object's display region or the brightness of the reference image so that the brightness of the target object's display region is consistent with the brightness of the reference image.
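The brightness adjustment can be sketched as a simple gain that matches the replaced region's mean intensity to the reference image's mean. This is an illustrative assumption, not the patent's stated method; a real implementation would more likely operate on a luminance channel such as Y or L.

```python
def match_brightness(region, reference):
    """Scale the region's pixel intensities so that its mean brightness
    matches the mean brightness of the reference image."""
    region_mean = sum(region) / len(region)
    reference_mean = sum(reference) / len(reference)
    if region_mean == 0:
        return list(region)  # nothing to scale
    gain = reference_mean / region_mean
    # Clamp to the 8-bit range after applying the gain.
    return [min(255, round(p * gain)) for p in region]
```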
According to a second aspect of embodiments of the present disclosure, an image processing device is provided, including:
a reference image determining module, configured to determine a reference image from a target image set, the target image set including multiple images, each image including the same shot subjects and the same shooting background;
an object-to-be-replaced determining module, configured to determine an object to be replaced from the reference image;
a target object determining module, configured to determine, from images in the target image set other than the reference image, a target object corresponding to the object to be replaced;
and a replacement module, configured to replace the object to be replaced in the reference image with the target object to obtain a target image.
In a first possible implementation of the second aspect of the disclosure, the device further includes:
a target image set determining module, configured to determine the target image set from a picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds.
In a second possible implementation of the second aspect of the disclosure, the target image set determining module is configured to:
determine, according to the shooting times of the images in the picture library, multiple images whose shooting times fall within a preset duration of one another as a candidate image set;
when the number of images in the candidate image set exceeds a first preset number, obtain the number of shot subjects in each image in the candidate image set;
when the number of shot subjects in each image in the candidate image set exceeds a second preset number, obtain the similarity of the shooting backgrounds of every two images in the candidate image set;
and determine the target image set from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set exceeding a preset threshold.
In a third possible implementation of the second aspect of the disclosure, the reference image determining module is configured to:
when the shot subjects are people, obtain the face regions in each image in the target image set;
detect the face regions to determine, for each image in the target image set, the total number of faces that are occluded or have closed eyes;
and determine as the reference image the image in the target image set with the smallest total number of occluded and closed-eye faces.
In a fourth possible implementation of the second aspect of the disclosure, the reference image determining module is configured to:
for each face in each image, determine the occlusion-area ratio of the face, the occlusion-area ratio being the ratio of the face's occluded area to the face's display area;
determine a first number for each image, the first number being the number of faces in the image whose occlusion-area ratio exceeds a preset ratio;
determine a second number for each image, the second number being the number of faces in the image with closed eyes;
and take the sum of the first number and the second number of each image as the total number of occluded and closed-eye faces in the image.
In a fifth possible implementation of the second aspect of the disclosure, the replacement module is configured to:
when the shot subjects are people, perform image segmentation on the image containing the target object to obtain the face region and body region of the target object;
and replace the face region and body region of the object to be replaced in the reference image with the face region and body region of the target object.
In a sixth possible implementation of the second aspect of the disclosure, the device further includes:
an adjusting module, configured to adjust the brightness of the target object's display region or the brightness of the reference image so that the brightness of the target object's display region is consistent with the brightness of the reference image.
According to a third aspect, an image processing device is further provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
determine a reference image from a target image set, the target image set including multiple images, each image including the same shot subjects and the same shooting background;
determine an object to be replaced from the reference image;
determine, from images in the target image set other than the reference image, a target object corresponding to the object to be replaced;
and replace the object to be replaced in the reference image with the target object to obtain a target image.
The technical solutions provided by the embodiments of the present disclosure have the following beneficial effects:
By determining as the reference image the image in the target image set with the smallest total number of occluded and closed-eye faces, determining from the reference image the faces to be replaced that are occluded or have closed eyes, determining target faces by face recognition from the images in the target image set other than the reference image, and replacing the faces to be replaced with the target faces, the disclosure obtains a target image that ensures, to the greatest extent possible, that every face's expression is in an optimal state, that is, that the target image contains no occluded or closed-eye faces, thereby improving the quality of the processed image.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 3 is a block diagram of an image processing device according to an exemplary embodiment;
Fig. 4 is a block diagram of an image processing device 400 according to an exemplary embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the disclosure clearer, embodiments of the disclosure are described in further detail below with reference to the accompanying drawings.
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 1, the image processing method is used in a terminal and includes the following steps.
In step 101, a reference image is determined from a target image set, the target image set including multiple images, each image including the same shot subjects and the same shooting background.
In step 102, an object to be replaced is determined from the reference image.
In step 103, a target object corresponding to the object to be replaced is determined from images in the target image set other than the reference image.
In step 104, the object to be replaced in the reference image is replaced with the target object to obtain a target image.
In the method provided by the embodiments of the disclosure, the image in the target image set with the smallest total number of occluded and closed-eye faces is determined as the reference image; the faces to be replaced that are occluded or have closed eyes are determined from the reference image; target faces are determined by face recognition from the images in the target image set other than the reference image; and the faces to be replaced are replaced with the target faces. The resulting target image ensures, to the greatest extent possible, that every face's expression is in an optimal state, that is, that the target image contains no occluded or closed-eye faces, thereby improving the quality of the processed image.
In a first possible implementation of the disclosure, the method further includes:
determining the target image set from a picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds.
In a second possible implementation of the disclosure, determining the target image set from the picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds includes:
determining, according to the shooting times of the images in the picture library, multiple images whose shooting times fall within a preset duration of one another as a candidate image set;
when the number of images in the candidate image set exceeds a first preset number, obtaining the number of shot subjects in each image in the candidate image set;
when the number of shot subjects in each image in the candidate image set exceeds a second preset number, obtaining the similarity of the shooting backgrounds of every two images in the candidate image set;
and determining the target image set from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set exceeding a preset threshold.
In a third possible implementation of the disclosure, determining the reference image from the target image set includes:
when the shot subjects are people, obtaining the face regions in each image in the target image set;
detecting the face regions to determine, for each image in the target image set, the total number of faces that are occluded or have closed eyes;
and determining as the reference image the image in the target image set with the smallest total number of occluded and closed-eye faces.
In a fourth possible implementation of the disclosure, detecting the face regions to determine, for each image in the target image set, the total number of occluded and closed-eye faces includes:
for each face in each image, determining the occlusion-area ratio of the face, the occlusion-area ratio being the ratio of the face's occluded area to the face's display area;
determining a first number for each image, the first number being the number of faces in the image whose occlusion-area ratio exceeds a preset ratio;
determining a second number for each image, the second number being the number of faces in the image with closed eyes;
and taking the sum of the first number and the second number of each image as the total number of occluded and closed-eye faces in the image.
In a fifth possible implementation of the disclosure, replacing the object to be replaced in the reference image with the target object to obtain the target image includes:
when the shot subjects are people, performing image segmentation on the image containing the target object to obtain the face region and body region of the target object;
and replacing the face region and body region of the object to be replaced in the reference image with the face region and body region of the target object.
In a sixth possible implementation of the disclosure, after replacing the object to be replaced in the reference image with the target object, the method further includes:
adjusting the brightness of the target object's display region or the brightness of the reference image so that the brightness of the target object's display region is consistent with the brightness of the reference image.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the disclosure, which are not described one by one here.
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment. The embodiment may be executed by a terminal. Referring to Fig. 2, the embodiment specifically includes:
In step 201, a target image set is determined from a picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds.
The target image set includes multiple images, each including the same shot subjects and the same shooting background. A shot subject is an independent individual of limited volume or area, such as a person, an animal, or scenery; the shooting background is the region of the image other than the shot subjects.
The picture library may include images captured by the camera of the terminal on which the library resides, images downloaded from network resources, images received through an instant messaging client, or images received or obtained through other channels; this is not specifically limited in the embodiments of the disclosure. In the embodiments of the disclosure, the image processing method provided by the disclosure is described in detail taking people as the shot subjects.
Determining the target image set from the picture library according to the shooting time of each image, the number of shot subjects, and the similarity of the shooting backgrounds may include the following steps 201a to 201d:
Step 201a: according to the shooting times of the images in the picture library, multiple images whose shooting times fall within a preset duration of one another are determined as a candidate image set.
Step 201a may also be replaced by the following method: the terminal displays multiple images, obtains the multiple images selected by the user that include the same shot subjects and shooting background, and determines these images as the candidate image set.
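The time-based grouping of step 201a can be sketched as follows. This is a minimal illustration under stated assumptions: the file names and the 10-second gap are invented for the example, and in practice the shooting times would come from image metadata such as EXIF timestamps.

```python
from datetime import datetime, timedelta

def group_by_shooting_time(images, max_gap=timedelta(seconds=10)):
    """Split (name, shooting-time) pairs into candidate image sets
    wherever the gap between consecutive shooting times exceeds the
    preset duration."""
    ordered = sorted(images, key=lambda item: item[1])
    groups, current = [], []
    for name, taken_at in ordered:
        # A gap larger than the preset duration starts a new candidate set.
        if current and taken_at - current[-1][1] > max_gap:
            groups.append([n for n, _ in current])
            current = []
        current.append((name, taken_at))
    if current:
        groups.append([n for n, _ in current])
    return groups
```

Images shot in one burst end up in the same candidate set; a later, unrelated shot starts a new one.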
Step 201b: when the number of images in the candidate image set exceeds a first preset number, the number of shot subjects in each image in the candidate image set is obtained.
When the shot subjects are people, a face detection algorithm is used to obtain the number of shot subjects in each image in the candidate image set. The face detection algorithm may be the Adaboost algorithm; that is, the number of faces in each image is obtained by an Adaboost classifier.
Specifically, the Adaboost classifier divides an image into multiple regions. For each region, features are extracted according to a preset feature extraction algorithm and input to the classifier, which computes on the region's features and outputs a classification result, namely whether the region is a face region or a non-face region; the number of faces in the image is then obtained from the classification results.
The Adaboost classifier is a strong classifier composed of multiple weak classifiers trained on the same training sample set. For example, weak features such as rectangular features may be extracted from multiple sample images, the weak features of each sample image taken as a training sample, and multiple training samples combined into a training sample set. Several training samples are chosen from the set to form a first training set, from which a first weak classifier is trained; then several new training samples are chosen, and together with the samples misclassified by the first weak classifier they form a second training set, from which a second weak classifier is trained; then several more new training samples are chosen, and together with the samples misclassified by both the first and second weak classifiers they form a third training set, from which a third weak classifier is trained; and so on, until the error rate falls below a preset minimum error rate, at which point the trained weak classifiers are combined into a strong classifier that can be used to classify images.
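The idea of combining weak classifiers into a strong one can be illustrated with a minimal AdaBoost over one-dimensional threshold stumps. This is a toy sketch, not the patent's detector: real face detection uses Haar-like rectangular features over image regions, and the misclassified-sample carry-over described above is simplified here to the standard AdaBoost weight-update rule.

```python
import math

def train_adaboost(samples, labels, rounds=3):
    """Train threshold stumps on 1-D feature values with AdaBoost
    weight updates; returns a list of (threshold, polarity, alpha)."""
    n = len(samples)
    weights = [1.0 / n] * n
    stumps = []
    for _ in range(rounds):
        best = None
        for threshold in sorted(set(samples)):
            for polarity in (1, -1):
                preds = [polarity * (1 if x >= threshold else -1) for x in samples]
                err = sum(w for w, p, y in zip(weights, preds, labels) if p != y)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity, preds)
        err, threshold, polarity, preds = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)
        stumps.append((threshold, polarity, alpha))
        # Re-weight: misclassified samples gain weight for the next round.
        weights = [w * math.exp(-alpha * y * p)
                   for w, y, p in zip(weights, labels, preds)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return stumps

def strong_classify(stumps, x):
    """Weighted vote of the weak classifiers: +1 for 'face', -1 otherwise."""
    score = sum(alpha * polarity * (1 if x >= threshold else -1)
                for threshold, polarity, alpha in stumps)
    return 1 if score >= 0 else -1
```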
Step 201c: when the number of shot subjects in each image in the candidate image set exceeds a second preset number, the similarity of the shooting backgrounds of every two images in the candidate image set is obtained.
In the embodiments of the disclosure, a feature-point-based image similarity calculation method is used to obtain the similarity of the shooting backgrounds of every two images in the candidate image set. Specifically, for each image in the candidate image set, SIFT (Scale Invariant Feature Transform) feature points are extracted using the SIFT algorithm, then feature-point matching is performed, and the target image set is determined from the candidate image set according to the matching results: the number of matched SIFT feature points between any two images in the target image set exceeds a preset number. The preset number may be 20 or another value; this is not limited in the embodiments of the disclosure.
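The feature-point matching step can be sketched as follows. SIFT extraction itself is omitted: the descriptors below are short tuples standing in for 128-dimensional SIFT descriptors, and Lowe's ratio test is an assumed match criterion, not one named in the text.

```python
def count_matched_points(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in image A to its nearest neighbour in
    image B, accepting the match only if it passes the ratio test."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    matched = 0
    for d in desc_a:
        ranked = sorted(dist(d, e) for e in desc_b)
        # Accept only clearly unambiguous nearest neighbours.
        if len(ranked) >= 2 and ranked[0] < ratio * ranked[1]:
            matched += 1
    return matched

def same_background(desc_a, desc_b, preset_number=20):
    """Two images are taken to share a background when the matched-point
    count exceeds the preset number (20 in the text)."""
    return count_matched_points(desc_a, desc_b) > preset_number
```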
Step 201d: the target image set is determined from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set exceeding a preset threshold.
It should be noted that, depending on the similarity calculation method used, the preset threshold may be set to different values accordingly; the specific way of setting the preset threshold is not limited in the embodiments of the disclosure.
In summary, the target image set must satisfy three conditions: first, the number of shot subjects in each image in the target image set is identical and exceeds the second preset number; second, the shooting times of any two images adjacent in shooting time fall within the preset duration of each other; third, the similarity of the shooting backgrounds of every two images in the target image set exceeds the preset threshold.
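Under stated assumptions (each image reduced to a small dict, and Jaccard overlap of feature sets standing in for the real background-similarity measure), the three conditions can be checked as:

```python
def jaccard(a, b):
    """Set-overlap similarity, a stand-in for the background measure."""
    return len(a & b) / len(a | b) if a | b else 1.0

def is_valid_target_set(images, second_preset_number=2,
                        preset_duration=10, preset_threshold=0.8):
    """Check the three conditions on a candidate set. Each image is a
    dict with 'time' (seconds), 'subjects' (subject count) and
    'background' (a feature set); the default values are invented."""
    counts = {img['subjects'] for img in images}
    if len(counts) != 1 or counts.pop() <= second_preset_number:
        return False  # condition 1: identical count above the second preset number
    times = sorted(img['time'] for img in images)
    if any(b - a > preset_duration for a, b in zip(times, times[1:])):
        return False  # condition 2: adjacent shooting times within the preset duration
    for i, a in enumerate(images):
        for b in images[i + 1:]:
            if jaccard(a['background'], b['background']) <= preset_threshold:
                return False  # condition 3: pairwise background similarity above threshold
    return True
```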
The preset duration, the first preset number, the second preset number, and the preset threshold may use system default values or be set by the user as needed; this is not specifically limited in the embodiments of the disclosure.
By determining from the picture library multiple images of the same shot subjects captured against the same shooting background, these images can then be processed in a targeted manner, laying the foundation for improved image processing speed.
In step 202, a reference image is determined from the target image set.
In the embodiments of the present disclosure, the reference image is the image in the target image set containing the fewest faces that are occluded or in a closed-eye state. Specifically, the reference image may be determined from the target image set as follows: when the shooting object is a face, a face region is obtained in each image of the target image set; the face regions are detected to determine, for each image of the target image set, the total number of faces that are occluded or in a closed-eye state; and the image of the target image set in which this total number is smallest is determined as the reference image.
The total number of occluded and closed-eye faces in each image of the target image set may be determined as follows: for each face in each image, an occlusion-area ratio of the face is determined, the occlusion-area ratio being the proportion of the face's display area that is occluded; a first number is determined for each image, being the number of faces in the image whose occlusion-area ratio exceeds a preset ratio; a second number is determined for each image, being the number of faces in the image that are in a closed-eye state; and the sum of the first number and the second number of each image is determined as the total number of occluded and closed-eye faces in that image. The preset ratio may take a default value set by the system, or be configured by the user as required, which is not specifically limited by the embodiments of the present disclosure.
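The first-number/second-number bookkeeping above reduces to a small counting routine. The sketch below assumes the per-face occlusion ratios and closed-eye flags have already been obtained by the detection steps described later; the data layout and names are illustrative.

```python
def reference_image(face_stats, preset_ratio=0.2):
    """face_stats maps an image name to a list of (occlusion_ratio, eyes_closed)
    tuples, one per detected face. The first number counts faces whose
    occlusion ratio exceeds preset_ratio; the second counts closed-eye faces;
    the image with the smallest sum is chosen as the reference image."""
    def total(faces):
        first = sum(1 for ratio, _ in faces if ratio > preset_ratio)
        second = sum(1 for _, closed in faces if closed)
        return first + second

    return min(face_stats, key=lambda name: total(face_stats[name]))

stats = {
    "img1": [(0.0, False), (0.5, False), (0.0, True)],   # total 2
    "img2": [(0.0, False), (0.1, False), (0.0, False)],  # total 0
    "img3": [(0.3, True), (0.0, False), (0.0, False)],   # total 2
}
print(reference_image(stats))  # img2
```

Note that, per the sum-of-two-numbers definition in the text, a face that is both occluded and closed-eyed contributes to both counts.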
Specifically, the occlusion-area ratio of a face may be determined as follows: an Adaboost algorithm is used to perform face detection on each image to obtain multiple face regions, and a skin-tone algorithm is then applied to each face region to gather skin-tone statistics, from which the occlusion-area ratio of the face is obtained.
The skin-tone statistics for each face region may be gathered as follows: for any face region in an RGB color image, the R value and G value of each pixel in the face region are obtained; if the R value of a pixel exceeds its G value, the pixel is determined to belong to the face (skin), and otherwise the pixel is determined to belong to an occluded area; the ratio of the occluded area to the face region is the occlusion-area ratio.
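The R-versus-G rule above can be expressed directly over the pixels of a detected face region. This is a minimal sketch assuming the face region has already been cropped out by the Adaboost detector; the pixel-list representation is a simplification.

```python
def occlusion_ratio(face_pixels):
    """face_pixels: list of (R, G, B) tuples inside one detected face region.
    Per the rule in the text, a pixel with R > G is treated as skin;
    any other pixel is treated as occluded. Returns occluded / total."""
    occluded = sum(1 for r, g, b in face_pixels if r <= g)
    return occluded / len(face_pixels)

# 7 skin-like pixels (R > G) and 3 occluded pixels (R <= G) out of 10.
region = [(200, 120, 100)] * 7 + [(80, 90, 95)] * 3
print(occlusion_ratio(region))  # 0.3
```

In practice the rule would run over a cropped image array rather than a tuple list, but the statistic is the same.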
The number of closed-eye faces in each image may be determined using a face detection algorithm based on SVM (Support Vector Machine) and HOG (Histogram of Oriented Gradients): for any image, a histogram of oriented gradients is obtained for the eye region of each face in the image, and an SVM classifier then determines the number of faces in the image that are in a closed-eye state.
It should be noted that the process of determining the occlusion-area ratio of a face and the process of determining the number of closed-eye faces in each image may also be implemented by other methods, which are not specifically limited by the embodiments of the present disclosure.
By determining as the reference image the image of the target image set containing the fewest occluded and closed-eye faces, the reference image contains the most faces that are neither occluded nor in a closed-eye state, so the number of objects to be replaced later is reduced as far as possible, thereby accelerating the image processing and improving image processing efficiency.
In step 203, an object to be replaced is determined from the reference image.
When the shooting object is a person, a face in the reference image that is in a closed-eye state or an occluded state is determined as an object to be replaced. When the shooting object is another kind of object, a corresponding method may be used to determine the object to be replaced from the reference image, which is not specifically limited by the embodiments of the present disclosure.
In step 204, a target object corresponding to the object to be replaced is determined from the images of the target image set other than the reference image.
When the object to be replaced is a face to be replaced that is in a closed-eye or occluded state, the target object is a face, in an image of the target image set other than the reference image, that belongs to the same person as the face to be replaced and is neither occluded nor in a closed-eye state.
Specifically, the target face corresponding to the face to be replaced may be determined as follows: a face recognition technique is used to determine, from the images of the target image set other than the reference image, the target face corresponding to the face to be replaced. The face recognition technique may be deep learning, a new field of machine learning motivated by building neural networks that simulate the analysis and learning of the human brain; deep learning performs face recognition by simulating the mechanisms of the human brain.
A specific method of recognizing faces with deep learning may be: preprocessing the face to be replaced and extracting its local feature information with a local-feature extraction algorithm; matching this local feature information against the corresponding local feature information of each face in the images of the target image set other than the reference image; and, if the local feature information of a face matches that of the face to be replaced successfully, determining that face as the target face.
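The matching step can be sketched as a nearest-feature comparison. The sketch below assumes the local feature information of each face has already been extracted as a numeric vector (the extraction network itself is out of scope), and uses cosine similarity with an assumed threshold as the match criterion; the threshold value and all names are illustrative, not part of the disclosure.

```python
import math

def match_face(query_features, candidates, threshold=0.9):
    """query_features: feature vector of the face to be replaced.
    candidates: {face_id: feature_vector} for faces in the other images.
    Returns the first candidate whose cosine similarity with the query
    exceeds threshold, or None if no face matches."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    for face_id, feats in candidates.items():
        if cosine(query_features, feats) > threshold:
            return face_id
    return None

query = [0.9, 0.1, 0.4]
faces = {"face_A": [0.1, 0.95, 0.2],   # different person
         "face_B": [0.88, 0.12, 0.41]} # same person, open eyes
print(match_face(query, faces))  # face_B
```

Returning the first sufficiently similar candidate mirrors the text's note that, when several faces match, any one of them may serve as the target face.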
It should be noted that if multiple objects matching the object to be replaced are determined from the images of the target image set other than the reference image, any one of the multiple objects may be determined as the target object corresponding to the object to be replaced.
The process of determining the target face corresponding to the face to be replaced may also be implemented by other methods, which are not specifically limited by the embodiments of the present disclosure. In addition, the method of determining the target object corresponding to the object to be replaced may differ depending on the object to be replaced, which is likewise not limited by the embodiments of the present disclosure.
Since the images in the target image set were acquired of the same shooting object against the same shooting background, determining the target object corresponding to the object to be replaced from the target image set ensures the similarity between the target object and the object to be replaced to the greatest extent, so that visual inconsistency in the replaced image can be avoided and the image processing effect improved.
In step 205, the object to be replaced in the reference image is replaced with the target object to obtain a target image.
The object to be replaced in the reference image may be replaced with the target object as follows: the display area of the target object is obtained from the image in which the target object is located; the display area of the object to be replaced in the reference image is segmented from the other display areas; and the segmented reference image is filled with the obtained display area of the target object to obtain the target image.
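The segment-and-fill step amounts to copying the target object's display area into the reference image under a mask. The sketch below uses plain 2D grids and assumes the two display areas occupy the same coordinates (i.e., that any spatial alignment has already been done); names are illustrative.

```python
def replace_region(reference, target, mask):
    """reference/target: equal-sized 2D pixel grids; mask[i][j] is True where
    the object to be replaced sits in the reference image. Pixels inside the
    mask are filled from the target object's image; the rest of the
    reference image is left untouched."""
    return [[target[i][j] if mask[i][j] else reference[i][j]
             for j in range(len(reference[0]))]
            for i in range(len(reference))]

ref = [[1, 1], [1, 1]]          # reference image
tgt = [[9, 9], [9, 9]]          # image containing the target object
mask = [[False, True], [False, False]]  # display area of the object to replace
print(replace_region(ref, tgt, mask))  # [[1, 9], [1, 1]]
```

A real implementation would obtain the mask from the segmentation step (e.g., the GrabCut result described below) rather than hand-coding it.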
In another embodiment of the present disclosure, when the shooting object is a person, image segmentation is performed on the image in which the target object is located to obtain the face region and body region of the target object, and the face region and body region of the object to be replaced in the reference image are replaced with the face region and body region of the target object.
The image segmentation of the image in which the target object is located may be implemented with the GrabCut algorithm. GrabCut uses a Gaussian mixture model to describe the distribution of pixels: if two adjacent pixels differ very little, they are likely to belong to the same object or the same background; if two adjacent pixels differ greatly, they are likely to lie on the boundary between object and background, and the probability of segmenting there is higher. An iterative algorithm then minimizes the energy to further determine the segmentation boundary, achieving more accurate image segmentation. Of course, the image segmentation may also use other algorithms, such as the GraphCut algorithm, which is not specifically limited by the embodiments of the present disclosure.
When the shooting object is a person, replacing the face region and body region of the object to be replaced in the reference image with the face region and body region of the target object as a whole ensures, to the greatest extent, the harmony between the face and body of the shooting object, further improving the image processing effect.
In another embodiment of the present disclosure, during step 205, the brightness of the display area of the target object or the brightness of the reference image is adjusted so that the brightness of the display area of the target object is consistent with the brightness of the reference image.
The brightness of the display area of the target object, or of the reference image, may be adjusted with an affine transformation technique. Here an affine transformation refers to determining transformation parameters between images according to certain similarity measures, so that the display area of the target object can replace the display area of the object to be replaced in the reference image. In the embodiments of the present disclosure, the similarity measure may be defined as a mapping between the brightness of the display area of the target object and the brightness of the object to be replaced. The brightness adjustment may also be implemented by other methods, which are not specifically limited by the embodiments of the present disclosure.
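A minimal sketch of the brightness-consistency idea is a gain-only correction: scale the target object's pixels so their mean brightness matches the reference image's. This is a deliberate simplification of the affine brightness mapping described in the text (a full affine map would also include an offset), and all names are illustrative.

```python
def match_brightness(target_pixels, reference_pixels):
    """Scale the target object's display-area pixels so their mean brightness
    matches that of the reference image. Gain-only stand-in for the affine
    brightness mapping; values are clipped to the 8-bit range."""
    gain = (sum(reference_pixels) / len(reference_pixels)) / \
           (sum(target_pixels) / len(target_pixels))
    return [min(255, round(p * gain)) for p in target_pixels]

target = [100, 110, 90]      # mean brightness 100
reference = [150, 160, 140]  # mean brightness 150
print(match_brightness(target, reference))  # [150, 165, 135]
```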
In another embodiment of the present disclosure, the affine transformation technique may also be used to better fit the display area of the target object into the display area of the object to be replaced in the reference image: by defining the similarity measure as a spatial mapping, the display area of the target object can be fitted completely into the reference image, avoiding occlusion of the other shooting objects in the reference image.
In the method provided by the embodiments of the present disclosure, the image of the target image set containing the fewest occluded and closed-eye faces is determined as the reference image; faces to be replaced that are occluded or in a closed-eye state are determined from the reference image; target faces are determined, by face recognition, from the images of the target image set other than the reference image; and the faces to be replaced are replaced with the target faces. The resulting target image ensures, to the greatest extent, that the expression of each face is in an optimal state, that is, that no face in the target image is occluded or in a closed-eye state, thereby improving the processed image effect. Further, by determining from the image library multiple images of the same shooting object acquired against the same shooting background and taking these images as the target image set, the images in the target image set can be processed in a targeted manner, laying a foundation for improving image processing speed.
Fig. 3 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to Fig. 3, the apparatus includes a reference image determining module 301, an object-to-be-replaced determining module 302, a target object determining module 303 and a replacing module 304.
The reference image determining module 301 is configured to determine a reference image from a target image set, the target image set including multiple images, each image including the same shooting object and shooting background;
the object-to-be-replaced determining module 302 is configured to determine an object to be replaced from the reference image;
the target object determining module 303 is configured to determine, from the images of the target image set other than the reference image, a target object corresponding to the object to be replaced; and
the replacing module 304 is configured to replace the object to be replaced in the reference image with the target object to obtain a target image.
In a first possible implementation provided by the present disclosure, the apparatus further includes:
a target image set determining module, configured to determine the target image set from an image library according to the shooting time of each image, the number of shooting objects, and the similarity of the shooting backgrounds.
In a second possible implementation provided by the present disclosure, the target image set determining module is configured to:
determine, according to the shooting time of each image in the image library, multiple images whose shooting-time intervals are within a preset duration as a candidate image set;
when the number of images in the candidate image set exceeds a first preset number, obtain the number of shooting objects in each image of the candidate image set;
when the number of shooting objects in each image of the candidate image set exceeds a second preset number, obtain the similarity of the shooting backgrounds of every two images in the candidate image set; and
determine the target image set from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set exceeding a preset threshold.
In a third possible implementation provided by the present disclosure, the reference image determining module 301 is configured to:
when the shooting object is a person, obtain a face region in each image of the target image set;
detect the face regions to determine, for each image of the target image set, the total number of faces that are occluded or in a closed-eye state; and
determine, as the reference image, the image of the target image set in which the total number of occluded and closed-eye faces is smallest.
In a fourth possible implementation provided by the present disclosure, the reference image determining module 301 is configured to:
for each face in each image, determine an occlusion-area ratio of the face, the occlusion-area ratio being the proportion of the face's display area that is occluded;
determine a first number of each image, the first number being the number of faces in the image whose occlusion-area ratio exceeds a preset ratio;
determine a second number of each image, the second number being the number of faces in the image that are in a closed-eye state; and
determine the sum of the first number and the second number of each image as the total number of occluded and closed-eye faces in the image.
In a fifth possible implementation provided by the present disclosure, the replacing module is configured to:
when the shooting object is a person, perform image segmentation on the image in which the target object is located to obtain a face region and a body region of the target object; and
replace the face region and body region of the object to be replaced in the reference image with the face region and body region of the target object.
In a sixth possible implementation provided by the present disclosure, the apparatus further includes:
an adjusting module, configured to adjust the brightness of the display area of the target object or the brightness of the reference image, so that the brightness of the display area of the target object is consistent with the brightness of the reference image.
With respect to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 4 is a block diagram of an image processing apparatus 400 according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communication, camera operation and recording operations. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation at the apparatus 400. Examples of such data include instructions of any application or method operated on the apparatus 400, contact data, phonebook data, messages, pictures, video, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 406 provides power to the various components of the apparatus 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the apparatus 400.
The multimedia component 408 includes a screen providing an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the apparatus 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC) configured to receive external audio signals when the apparatus 400 is in an operation mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signal may be further stored in the memory 404 or transmitted via the communication component 416. In some embodiments, the audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor component 414 may detect an open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; the sensor component 414 may also detect a change in position of the apparatus 400 or a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a temperature change of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 416 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above image processing method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 404 including instructions, executable by the processor 420 of the apparatus 400 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor of a mobile terminal, the mobile terminal is enabled to perform the above image processing method.
Other embodiments of the present disclosure will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within common knowledge or customary technical means in the art. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be appreciated that the present disclosure is not limited to the exact construction described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. An image processing method, characterized in that the method comprises:
determining a reference image from a target image set, the target image set comprising multiple images, each image comprising the same shooting object and shooting background;
determining an object to be replaced from the reference image;
determining, from the images of the target image set other than the reference image, a target object corresponding to the object to be replaced; and
replacing the object to be replaced in the reference image with the target object to obtain a target image.
2. The method according to claim 1, characterized in that the method further comprises:
determining the target image set from an image library according to the shooting time of each image, the number of shooting objects, and the similarity of the shooting backgrounds.
3. The method according to claim 2, characterized in that said determining the target image set from an image library according to the shooting time of each image, the number of shooting objects, and the similarity of the shooting backgrounds comprises:
determining, according to the shooting time of each image in the image library, multiple images whose shooting-time intervals are within a preset duration as a candidate image set;
when the number of images in the candidate image set exceeds a first preset number, obtaining the number of shooting objects in each image of the candidate image set;
when the number of shooting objects in each image of the candidate image set exceeds a second preset number, obtaining the similarity of the shooting backgrounds of every two images in the candidate image set; and
determining the target image set from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set exceeding a preset threshold.
4. The method according to claim 1, characterized in that said determining a reference image from a target image set comprises:
when the shooting object is a person, obtaining a face region in each image of the target image set;
detecting the face regions to determine, for each image of the target image set, the total number of faces that are occluded or in a closed-eye state; and
determining, as the reference image, the image of the target image set in which the total number of occluded and closed-eye faces is smallest.
5. The method according to claim 4, characterized in that said detecting the face regions to determine, for each image of the target image set, the total number of faces that are occluded or in a closed-eye state comprises:
for each face in each image, determining an occlusion-area ratio of the face, the occlusion-area ratio being the proportion of the face's display area that is occluded;
determining a first number of each image, the first number being the number of faces in the image whose occlusion-area ratio exceeds a preset ratio;
determining a second number of each image, the second number being the number of faces in the image that are in a closed-eye state; and
determining the sum of the first number and the second number of each image as the total number of occluded and closed-eye faces in the image.
6. The method according to claim 1, characterized in that said replacing the object to be replaced in the reference image with the target object to obtain a target image comprises:
when the shooting object is a person, performing image segmentation on the image in which the target object is located to obtain a face region and a body region of the target object; and
replacing the face region and body region of the object to be replaced in the reference image with the face region and body region of the target object.
7. The method according to claim 1, characterized in that, after the object to be replaced in the reference image is replaced with the target object, the method further comprises:
adjusting the brightness of the display area of the target object or the brightness of the reference image, so that the brightness of the display area of the target object is consistent with the brightness of the reference image.
8. An image processing apparatus, characterized in that the apparatus comprises:
a reference image determining module, configured to determine a reference image from a target image set, the target image set comprising multiple images, each image comprising the same shooting object and shooting background;
an object-to-be-replaced determining module, configured to determine an object to be replaced from the reference image;
a target object determining module, configured to determine, from the images of the target image set other than the reference image, a target object corresponding to the object to be replaced; and
a replacing module, configured to replace the object to be replaced in the reference image with the target object to obtain a target image.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a target image set determining module, configured to determine the target image set from an image library according to the shooting time of each image, the number of shooting objects, and the similarity of the shooting backgrounds.
10. The apparatus according to claim 9, wherein the target image set determining module is configured to:
determine, according to the shooting time of each image in the image library, multiple images whose shooting-time intervals are within a preset duration as a candidate image set;
when the number of images in the candidate image set is greater than a first preset number, obtain the number of shooting objects in each image in the candidate image set;
when the number of shooting objects in each image in the candidate image set is greater than a second preset number, obtain the similarity of the shooting backgrounds of every two images in the candidate image set;
determine the target image set from the candidate image set, the similarity of the shooting backgrounds of every two images in the target image set all being greater than a preset threshold.
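The three-step filter of claim 10 (time window, face count, pairwise background similarity) can be sketched as follows. All names are illustrative; `sim` stands in for whatever background-similarity measure (e.g. histogram comparison) an implementation would use, which the claim does not specify.

```python
def select_target_set(photos, max_gap, min_count, min_faces, sim, threshold):
    """photos: list of dicts with 'time' (seconds) and 'faces' (int).
    sim(a, b): background similarity in [0, 1] between two photos.
    Returns indices forming the target image set, or [] if a check fails."""
    # Step 1: candidate set = consecutive photos taken within `max_gap`.
    order = sorted(range(len(photos)), key=lambda i: photos[i]['time'])
    candidates = [order[0]]
    for i in order[1:]:
        if photos[i]['time'] - photos[candidates[-1]]['time'] <= max_gap:
            candidates.append(i)
    # Step 2: enough images, and enough shooting objects in every image.
    if len(candidates) <= min_count:
        return []
    if any(photos[i]['faces'] <= min_faces for i in candidates):
        return []
    # Step 3: keep only images whose background matches every other one,
    # so every pair in the result is above the threshold.
    return [i for i in candidates
            if all(sim(photos[i], photos[j]) > threshold
                   for j in candidates if j != i)]
```

For example, four photos taken at 0 s, 10 s, 20 s and 1000 s with a 60 s window yield a candidate set of the first three.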
11. The apparatus according to claim 8, wherein the benchmark image determining module is configured to:
when the shooting object is a person, obtain the face regions in each image in the target image set;
detect the face regions to determine, for each image in the target image set, the total number of faces that are in an occluded state or in a closed-eye state;
determine the image in the target image set with the smallest total number of occluded and closed-eye faces as the benchmark image.
12. The apparatus according to claim 11, wherein the benchmark image determining module is configured to:
for each face in each image, determine an occlusion area ratio of the face, the occlusion area ratio being the ratio of the occluded area of the face to the display area of the face;
determine a first number for each image, the first number being the number of faces in the image whose occlusion area ratio is greater than a preset ratio;
determine a second number for each image, the second number being the number of faces in the image that are in a closed-eye state;
determine the sum of the first number and the second number of each image as the total number of faces in the image that are in an occluded state or in a closed-eye state.
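The scoring rule of claims 11 and 12 reduces to a small counting function. The sketch below is illustrative (the tuple representation of a detected face is an assumption, not from the patent): each face carries its occluded area, its display area, and a closed-eye flag, and the benchmark image is the one with the smallest combined count.

```python
def occlusion_closed_eye_total(faces, preset_ratio):
    """faces: list of (occluded_area, display_area, eyes_closed) tuples.
    Returns the claim's first number + second number for one image."""
    first = sum(1 for occ, area, _ in faces if occ / area > preset_ratio)
    second = sum(1 for _, _, closed in faces if closed)
    return first + second

def pick_benchmark(images, preset_ratio):
    """images: one face list per image. Returns the index of the image
    with the smallest total of occluded and closed-eye faces."""
    return min(range(len(images)),
               key=lambda i: occlusion_closed_eye_total(images[i], preset_ratio))
```

Note that, as literally claimed, a face that is both heavily occluded and closed-eyed contributes to both counts.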
13. The apparatus according to claim 8, wherein the replacing module is configured to:
when the shooting object is a person, perform image segmentation on the image where the target object is located, to obtain the face region and body region of the target object;
replace the face region and body region of the object to be replaced in the benchmark image with the face region and body region of the target object.
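Once segmentation has produced a face-and-body mask for the target object, the replacement itself is a masked copy. A minimal sketch, assuming aligned, equally sized images represented as 2-D grids of pixel values (the representation and function name are illustrative; a real implementation would also blend the seam):

```python
def replace_region(benchmark, donor, mask):
    """benchmark, donor: equally sized 2-D grids of pixel values.
    mask: 2-D grid of 0/1 marking the face and body region to replace.
    Returns a new grid with masked pixels taken from the donor image."""
    return [[donor[r][c] if mask[r][c] else benchmark[r][c]
             for c in range(len(benchmark[0]))]
            for r in range(len(benchmark))]
```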
14. The apparatus according to claim 8, wherein the apparatus further comprises:
an adjusting module, configured to adjust the brightness of the display area of the target object, or the brightness of the benchmark image, so that the brightness of the display area of the target object is consistent with the brightness of the benchmark image.
15. An image processing apparatus, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
determine a benchmark image from a target image set, the target image set comprising multiple images, each image comprising the same shooting object and shooting background;
determine an object to be replaced from the benchmark image;
determine, from images in the target image set other than the benchmark image, a target object corresponding to the object to be replaced;
replace the object to be replaced in the benchmark image with the target object to obtain a target image.
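The four processor steps of claim 15 compose into a short pipeline. The sketch below wires them together with the sub-steps injected as callables, since the patent leaves each one open to the implementations of the dependent claims; every name here is a hypothetical placeholder.

```python
def process(image_set, pick_benchmark, find_replaceable, find_target, swap):
    """End-to-end flow of the claimed processor logic:
    pick a benchmark image, find the object to replace in it, find the
    corresponding target object in the other images, then swap it in."""
    benchmark = pick_benchmark(image_set)
    obj = find_replaceable(benchmark)
    others = [img for img in image_set if img is not benchmark]
    target = find_target(obj, others)
    return swap(benchmark, obj, target)
```

With trivial stand-ins for the four steps, the flow can be exercised directly.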
CN201610479905.0A 2016-06-27 2016-06-27 Image processing method and device Pending CN106204435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610479905.0A CN106204435A (en) 2016-06-27 2016-06-27 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610479905.0A CN106204435A (en) 2016-06-27 2016-06-27 Image processing method and device

Publications (1)

Publication Number Publication Date
CN106204435A true CN106204435A (en) 2016-12-07

Family

ID=57460853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610479905.0A Pending CN106204435A (en) 2016-06-27 2016-06-27 Image processing method and device

Country Status (1)

Country Link
CN (1) CN106204435A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123081A (en) * 2017-04-01 2017-09-01 北京小米移动软件有限公司 image processing method, device and terminal
CN107507216A (en) * 2017-08-17 2017-12-22 北京觅己科技有限公司 The replacement method of regional area, device and storage medium in image
CN107622483A (en) * 2017-09-15 2018-01-23 深圳市金立通信设备有限公司 A kind of image combining method and terminal
CN108063884A (en) * 2017-11-15 2018-05-22 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108095465A (en) * 2018-01-19 2018-06-01 京东方科技集团股份有限公司 A kind of image processing method and device
CN108156382A (en) * 2017-12-29 2018-06-12 上海爱优威软件开发有限公司 A kind of photo processing method and terminal
CN108259769A (en) * 2018-03-30 2018-07-06 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108520493A (en) * 2018-03-30 2018-09-11 广东欧珀移动通信有限公司 Processing method, device, storage medium and the electronic equipment that image is replaced
CN108566516A (en) * 2018-05-14 2018-09-21 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108776784A (en) * 2018-05-31 2018-11-09 广东新康博思信息技术有限公司 A kind of mobile law enforcement system based on image recognition
CN108961158A (en) * 2017-05-17 2018-12-07 中国移动通信有限公司研究院 A kind of image composition method and device
CN109005354A (en) * 2018-09-25 2018-12-14 努比亚技术有限公司 Image pickup method, mobile terminal and computer readable storage medium
CN110059643A (en) * 2019-04-23 2019-07-26 王雪燕 Method for multi-image feature comparison and preferential fusion, mobile terminal, and readable storage medium
CN110213476A (en) * 2018-02-28 2019-09-06 腾讯科技(深圳)有限公司 Image processing method and device
CN110378840A (en) * 2019-07-23 2019-10-25 厦门美图之家科技有限公司 Image processing method and device
CN110503703A (en) * 2019-08-27 2019-11-26 北京百度网讯科技有限公司 Method and apparatus for generating image
CN112085688A (en) * 2020-09-16 2020-12-15 蒋芳 Method and system for removing pedestrian shielding during photographing
WO2021057277A1 (en) * 2019-09-23 2021-04-01 华为技术有限公司 Photographing method in dark light and electronic device
CN113033344A (en) * 2021-03-10 2021-06-25 咪咕文化科技有限公司 Image processing method and device and electronic equipment
CN113052025A (en) * 2021-03-12 2021-06-29 咪咕文化科技有限公司 Training method of image fusion model, image fusion method and electronic equipment
CN113610034A (en) * 2021-08-16 2021-11-05 脸萌有限公司 Method, device, storage medium and electronic equipment for identifying person entity in video
WO2022247766A1 (en) * 2021-05-28 2022-12-01 维沃移动通信(杭州)有限公司 Image processing method and apparatus, and electronic device
CN117056547A (en) * 2023-10-13 2023-11-14 深圳博十强志科技有限公司 Big data classification method and system based on image recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101617339A (en) * 2007-02-15 2009-12-30 Sony Corporation Image processing apparatus and image processing method
CN103491299A (en) * 2013-09-17 2014-01-01 宇龙计算机通信科技(深圳)有限公司 Photographic processing method and device
CN104243818A (en) * 2014-08-29 2014-12-24 小米科技有限责任公司 Image processing method and device and image processing equipment
US20150071557A1 (en) * 2013-06-05 2015-03-12 Emotient Spatial organization of images based on emotion face clouds
CN105303161A (en) * 2015-09-21 2016-02-03 广东欧珀移动通信有限公司 Method and device for shooting multiple people


Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123081A (en) * 2017-04-01 2017-09-01 北京小米移动软件有限公司 image processing method, device and terminal
CN108961158B (en) * 2017-05-17 2022-01-25 中国移动通信有限公司研究院 Image synthesis method and device
CN108961158A (en) * 2017-05-17 2018-12-07 中国移动通信有限公司研究院 A kind of image composition method and device
CN107507216B (en) * 2017-08-17 2020-06-09 北京觅己科技有限公司 Method and device for replacing local area in image and storage medium
CN107507216A (en) * 2017-08-17 2017-12-22 北京觅己科技有限公司 The replacement method of regional area, device and storage medium in image
CN107622483A (en) * 2017-09-15 2018-01-23 深圳市金立通信设备有限公司 A kind of image combining method and terminal
CN108063884A (en) * 2017-11-15 2018-05-22 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108156382A (en) * 2017-12-29 2018-06-12 上海爱优威软件开发有限公司 A kind of photo processing method and terminal
CN108095465A (en) * 2018-01-19 2018-06-01 京东方科技集团股份有限公司 A kind of image processing method and device
CN110213476A (en) * 2018-02-28 2019-09-06 腾讯科技(深圳)有限公司 Image processing method and device
CN108520493A (en) * 2018-03-30 2018-09-11 广东欧珀移动通信有限公司 Processing method, device, storage medium and the electronic equipment that image is replaced
CN108259769B (en) * 2018-03-30 2020-08-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN108259769A (en) * 2018-03-30 2018-07-06 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108566516A (en) * 2018-05-14 2018-09-21 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108566516B (en) * 2018-05-14 2020-07-31 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108776784A (en) * 2018-05-31 2018-11-09 广东新康博思信息技术有限公司 A kind of mobile law enforcement system based on image recognition
CN109005354A (en) * 2018-09-25 2018-12-14 努比亚技术有限公司 Image pickup method, mobile terminal and computer readable storage medium
CN110059643A (en) * 2019-04-23 2019-07-26 王雪燕 Method for multi-image feature comparison and preferential fusion, mobile terminal, and readable storage medium
CN110378840A (en) * 2019-07-23 2019-10-25 厦门美图之家科技有限公司 Image processing method and device
CN110503703A (en) * 2019-08-27 2019-11-26 北京百度网讯科技有限公司 Method and apparatus for generating image
CN110503703B (en) * 2019-08-27 2023-10-13 北京百度网讯科技有限公司 Method and apparatus for generating image
WO2021057277A1 (en) * 2019-09-23 2021-04-01 华为技术有限公司 Photographing method in dark light and electronic device
CN112085688A (en) * 2020-09-16 2020-12-15 蒋芳 Method and system for removing pedestrian shielding during photographing
CN113033344A (en) * 2021-03-10 2021-06-25 咪咕文化科技有限公司 Image processing method and device and electronic equipment
CN113033344B (en) * 2021-03-10 2024-04-12 咪咕文化科技有限公司 Image processing method and device and electronic equipment
CN113052025A (en) * 2021-03-12 2021-06-29 咪咕文化科技有限公司 Training method of image fusion model, image fusion method and electronic equipment
WO2022247766A1 (en) * 2021-05-28 2022-12-01 维沃移动通信(杭州)有限公司 Image processing method and apparatus, and electronic device
CN113610034A (en) * 2021-08-16 2021-11-05 脸萌有限公司 Method, device, storage medium and electronic equipment for identifying person entity in video
CN113610034B (en) * 2021-08-16 2024-04-30 脸萌有限公司 Method and device for identifying character entities in video, storage medium and electronic equipment
CN117056547A (en) * 2023-10-13 2023-11-14 深圳博十强志科技有限公司 Big data classification method and system based on image recognition
CN117056547B (en) * 2023-10-13 2024-01-26 深圳博十强志科技有限公司 Big data classification method and system based on image recognition

Similar Documents

Publication Publication Date Title
CN106204435A (en) Image processing method and device
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
CN105631408B (en) Face photo album processing method and device based on video
CN106295511B (en) Face tracking method and device
CN106228168B (en) The reflective detection method of card image and device
CN109815844A (en) Object detection method and device, electronic equipment and storage medium
CN106548468B (en) The method of discrimination and device of image definition
CN105469356B (en) Face image processing process and device
CN106548145A (en) Image-recognizing method and device
CN105354543A (en) Video processing method and apparatus
CN104700353B (en) Image filters generation method and device
CN108154465B (en) Image processing method and device
CN106528879A (en) Picture processing method and device
CN106331504A (en) Shooting method and device
CN110503023A (en) Biopsy method and device, electronic equipment and storage medium
CN105528078B (en) The method and device of controlling electronic devices
CN107527053A (en) Object detection method and device
CN107798314A (en) Skin color detection method and device
CN107463903A (en) Face key independent positioning method and device
CN106446946A (en) Image recognition method and device
CN104408404A (en) Face identification method and apparatus
CN106131441A (en) Photographic method and device, electronic equipment
CN105574857A (en) Image analysis method and device
CN109615593A (en) Image processing method and device, electronic equipment and storage medium
CN112115894A (en) Training method and device for hand key point detection model and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161207

RJ01 Rejection of invention patent application after publication