CN104170371A - Method of realizing self-service group photo and photographic device - Google Patents

Info

Publication number: CN104170371A
Application number: CN201480000693.2A
Authority: CN (China)
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN104170371B (en)
Inventors: 杜成 (Du Cheng), 邓斌 (Deng Bin), 罗巍 (Luo Wei)
Assignee (original and current): Huawei Device Co Ltd
Publications: CN104170371A (application), CN104170371B (grant)

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images


Abstract

The present invention provides a method of realizing a self-service group photo, and a photographic device, for avoiding incomplete persons in the final composite image. To this end, an embodiment of the present invention provides a method of realizing a self-service group photo, comprising the steps of: shooting a first image; shooting a second image; performing face detection on the first image, and determining the region in which faces are detected in the first image as a first group photo object; performing face detection on the second image, and determining the region in which faces are detected in the second image as a second group photo object; and stitching the first image and the second image, wherein the stitching border of the stitch avoids the first group photo object and the second group photo object. The present invention also provides a photographic device.

Description

Method of realizing self-service group photo, and photographic device
Technical field
The present invention relates to the technical field of image processing, and in particular to a method of realizing a self-service group photo and a photographic device.
Background technology
When traveling or when many people get together, there is often a need to take a group photo. The people participating in the group photo then usually need to find a stranger to take the picture, to ensure that everyone is captured. However, asking a stranger is often inconvenient: when traveling abroad, for example, a language barrier may make it difficult to find someone willing to help, and the participants may also worry about the stranger's photography skills. Therefore, a method of taking a group photo that does not require finding another person to help is needed.
In the prior art, there is a method of realizing a self-service group photo of multiple people. As shown in Fig. 1, person A first photographs person B; then the two switch places and person B photographs person A; next, a seam carving algorithm is used to find a border along which the two images can be stitched; and finally the two images are stitched along that stitching border by Poisson image editing.
Because this scheme does not consider the positions of the people in the pictures, when the color and texture of a person's clothes are close to those of the background, the stitching border is likely to pass through a person in the photo, so that a mutilated composite image is generated. As shown in Fig. 2, in such a scene, because the texture of the person's clothes is close to the background on the right, the stitching border passes through the right person's arm, producing the effect of an arm being cut off.
Summary of the invention
The embodiments of the present invention provide a method of realizing a self-service group photo, and a photographic device, which avoid incomplete persons appearing in the final composite image.
In view of this, an embodiment of the present invention provides:
A method of realizing a self-service group photo, comprising:
shooting a first image;
shooting a second image;
performing face detection on the first image, and determining the region in which faces are detected in the first image as a first group photo object;
performing face detection on the second image, and determining the region in which faces are detected in the second image as a second group photo object; and
stitching the first image and the second image, wherein the stitching border of the stitch avoids the first group photo object and the second group photo object.
Optionally, the stitching of the first image and the second image, with the stitching border avoiding the first group photo object and the second group photo object, specifically comprises:
using a seam carving algorithm to stitch the first image and the second image, wherein, when the stitching border is determined, the energy value of the part of a seam that passes through the first group photo object or the second group photo object is raised; or,
stitching the first image and the second image, and, when the stitching border is found to pass through at least one of the first group photo object and the second group photo object, modifying the stitching border so that part of it coincides with the border of the first group photo object and/or the second group photo object.
Optionally, before the stitching of the first image and the second image, the method further comprises:
performing image registration on the first image and the second image.
Optionally, the performing of image registration on the first image and the second image specifically comprises:
extracting feature points in the first image and the second image;
matching the feature points of the first image and the second image; and
according to the matching result, performing a global transformation on at least one of the first image and the second image, so that the similarity of the common part of the first image and the second image is maximized.
Optionally, before the performing of image registration on the first image and the second image, the method further comprises:
performing face recognition on the first image; and
performing face recognition on the second image, and determining a common face of the first image and the second image;
and the extracting of feature points in the first image and the second image comprises:
extracting feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where the common face is located.
Optionally, after the shooting of the first image and before the shooting of the second image, the method further comprises:
performing face recognition on the viewfinder picture, determining a common face of the viewfinder picture and the first image, and marking the common face or the region where the common face is located.
Optionally, before the shooting of the second image and after the performing of face detection on the first image, the method further comprises:
performing face detection on the viewfinder picture, and determining and marking, in the viewfinder picture, the positions at which faces were detected in the first image.
Another embodiment of the present invention provides a photographic device, comprising:
a shooting unit, configured to shoot a first image and a second image;
a face detection unit, configured to perform face detection on the first image and the second image, and to determine the regions in which faces are detected as a first group photo object and a second group photo object; and
an image stitching unit, configured to stitch the first image and the second image, wherein the stitching border of the stitch avoids the first group photo object and the second group photo object.
Optionally, the image stitching unit is specifically configured to:
use a seam carving algorithm to stitch the first image and the second image, wherein, when the stitching border is found, the energy value of the part of a seam that passes through the first group photo object or the second group photo object is raised; or,
stitch the first image and the second image, and, when the stitching border is found to pass through at least one of the first group photo object and the second group photo object, modify the stitching border so that part of it coincides with the border of the first group photo object and/or the second group photo object.
Optionally, the photographic device further comprises:
an image registration unit, configured to perform image registration on the first image and the second image.
Optionally, the image registration unit comprises:
a feature point extraction unit, configured to extract feature points in the first image and the second image;
a feature point matching unit, configured to match the feature points of the first image and the second image; and
a global transformation unit, configured to perform, according to the matching result, a global transformation on at least one of the first image and the second image, so that the similarity of the common part of the first image and the second image is maximized.
Optionally, the photographic device further comprises:
a face recognition unit, configured to perform face recognition on the first image and the second image, and to determine a common face of the first image and the second image;
and the feature point extraction unit is specifically configured to extract feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where the common face is located.
Optionally, the face recognition unit is further configured to perform face recognition on the viewfinder picture, to determine a common face of the viewfinder picture and the first image, and to mark the common face or the region where the common face is located.
Optionally, the face detection unit is further configured to perform face detection on the viewfinder picture, and to determine and mark, in the viewfinder picture, the positions at which faces were detected in the first image.
Another embodiment of the present invention provides a photographic device, comprising:
a camera, configured to shoot a first image and a second image; and
a processor, configured to perform face detection on the first image and the second image, to determine the regions in which faces are detected as a first group photo object and a second group photo object, and to stitch the first image and the second image, wherein the stitching border of the stitch avoids the first group photo object and the second group photo object.
Optionally, the stitching of the first image and the second image, with the stitching border avoiding the first group photo object and the second group photo object, specifically comprises:
using a seam carving algorithm to stitch the first image and the second image, wherein, when the stitching border is found, the energy value of the part of a seam that passes through the first group photo object or the second group photo object is raised; or,
stitching the first image and the second image, and, when the stitching border is found to pass through at least one of the first group photo object and the second group photo object, modifying the stitching border so that part of it coincides with the border of the first group photo object and/or the second group photo object.
Optionally, the processor is further configured to perform image registration on the first image and the second image before the first image and the second image are stitched.
Optionally, the performing of image registration on the first image and the second image specifically comprises:
extracting feature points in the first image and the second image;
matching the feature points of the first image and the second image; and
according to the matching result, performing a global transformation on at least one of the first image and the second image, so that the similarity of the common part of the first image and the second image is maximized.
Optionally, the processor is further configured to perform face recognition on the first image and the second image, and to determine a common face of the first image and the second image;
and the extracting of feature points in the first image and the second image comprises:
extracting feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where the common face is located.
Optionally, the photographic device further comprises:
an electronic viewfinder, wherein the input of the electronic viewfinder is coupled with the output of the camera;
the processor is further configured to perform face recognition on a third image that the camera outputs in real time to the electronic viewfinder, and to determine a common face of the first image and the third image; and
the electronic viewfinder is configured to mark the common face.
Optionally, the photographic device further comprises:
an electronic viewfinder, wherein the input of the electronic viewfinder is coupled with the output of the camera;
the processor is further configured to perform face detection on a third image that the camera outputs in real time to the electronic viewfinder, and to determine, in the third image, the positions at which faces were detected in the first image; and
the electronic viewfinder is configured to mark, in the third image, the positions at which faces were detected in the first image.
Optionally, the photographic device further comprises:
a memory, configured to store the first image, the second image, and the image obtained by the stitching performed by the processor.
Optionally, the camera comprises:
a lens, an image sensor and an analog-to-digital (A/D) conversion circuit, wherein the output of the image sensor is coupled with the input of the A/D conversion circuit, and the output of the A/D conversion circuit is coupled with the processor.
By performing face detection on the two shot images, the embodiments of the present invention determine the positions of the people in the two images; when stitching, the stitching border avoids the regions in which faces are detected. This effectively prevents incomplete persons from appearing in the final stitched image and improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of a self-service group photo in the prior art;
Fig. 2 is an example, in the prior art, of a stitching border passing through a person participating in the group photo;
Fig. 3 is a flow chart of an embodiment of the method of the present invention;
Fig. 4 is a flow chart of an image registration method in an embodiment of the present invention;
Fig. 5 is a schematic diagram of marking, in the viewfinder picture, the positions of faces in the first image according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of marking, in the viewfinder picture, a common face with the first image according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the stitching effect of an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a photographic device provided by an embodiment of the present invention;
Fig. 9 is a structural diagram of another photographic device provided by an embodiment of the present invention;
Fig. 10 is a structural diagram of the camera in an embodiment of the present invention;
Fig. 11 is a flow chart of marking, in the viewfinder picture, the positions of faces in the first image according to the present invention;
Fig. 12 is a flow chart of marking, in the viewfinder picture, the positions of faces in the first image in an embodiment of the present invention.
Embodiment
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 3, an embodiment of the present invention provides a method of realizing a self-service group photo, comprising:
100. Shoot a first image.
110. Shoot a second image.
120. Perform face detection on the first image, and determine the region in which faces are detected in the first image as a first group photo object.
130. Perform face detection on the second image, and determine the region in which faces are detected in the second image as a second group photo object.
Face detection determines whether there is a face in an image and, if so, its position. The region in which faces are detected is a region of the image that contains one or more figures detected as faces; preferably, the face figure is at the "top" of this region. "Top" here does not necessarily mean the upper end: it can be determined according to the orientation of the face, for cases such as a person standing on their head or lying down. Supposing a vector is drawn along the direction from the mouth to the eyes of the face, the "top" of the region is in the direction this vector points. The region can be a simple ellipse or rectangle, or the union of several ellipses or rectangles; it can also be a human-shaped region, or the union of several human-shaped regions. In one embodiment, the human-shaped region is determined by means of boundary detection.
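As a concrete illustration of such a region, the following minimal sketch (Python with NumPy; the function name, the box format and the expansion factors are illustrative assumptions, not taken from the patent) builds the union of rectangular regions derived from detected face boxes:

```python
import numpy as np

def group_photo_mask(shape, face_boxes):
    """Union of rectangular regions, one per detected face, as a binary mask.
    Each box is (top, left, height, width) of a detected face; the rectangle
    is widened by half a face width on each side and extended roughly four
    face heights downward as a crude body region (illustrative factors)."""
    rows, cols = shape
    mask = np.zeros(shape, dtype=bool)
    for top, left, h, w in face_boxes:
        r0 = max(0, top)
        r1 = min(rows, top + 5 * h)          # face plus approximate body
        c0 = max(0, left - w // 2)
        c1 = min(cols, left + w + w // 2)
        mask[r0:r1, c0:c1] = True            # union over all faces
    return mask
```

A stitching border would then be required to stay in the False part of this mask.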
Steps 120 and 130 can be performed after steps 100 and 110; step 120 can be performed before step 130, step 130 before step 120, or the two can be performed simultaneously. They can also be interleaved with steps 100 and 110, performed in the order 100 → 120 → 110 → 130.
140. Stitch the first image and the second image, wherein the stitching border of the stitch avoids the first group photo object and the second group photo object.
Here, the two images can be stitched according to the seam carving method. The seam carving algorithm was originally designed for content-aware cropping and stretching of images: it defines an energy function over the image and, by removing or duplicating image strips with lower energy, stretches or shrinks the image while leaving its core content undistorted or only slightly distorted. The method can also be applied to the seamless stitching of two images: when stitching, low-energy spots are found in the two images, and the two images are cut apart and sewn together along those spots.
The seam carving algorithm is introduced below. First the image I is regarded as a function of two variables; then an energy function is defined using an image gradient operator (e.g. the Sobel operator):
e(I) = |∂I/∂x| + |∂I/∂y|
A seam is defined as a line that passes through the entire image from top to bottom or from left to right; it can be a straight line, a curve or a polyline. Mathematically, for an m × n image, a seam can be defined by one of the following sets:
s = {S(x(i), i) | i = 1, 2, ..., n}
or s = {S(i, y(i)) | i = 1, 2, ..., m}
Here S(x, i) and S(i, y) denote the coordinates of points in the image, and x(i) and y(i) are functions of i, where x(i) is a mapping from {1, 2, ..., n} to {1, 2, ..., m}, y(i) is a mapping from {1, 2, ..., m} to {1, 2, ..., n}, and
|x(i) - x(i-1)| ≤ 1
|y(j) - y(j-1)| ≤ 1
For each seam, the sum of the energy values of the points on it is computed; the seam with the minimal energy sum is chosen as the optimal seam, and the stitching can be performed at the optimal seam, i.e., the optimal seam is chosen as the stitching border.
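The energy function and the search for the minimal-energy seam can be sketched as follows (an illustrative dynamic-programming implementation for a vertical seam; the patent does not prescribe a particular implementation):

```python
import numpy as np

def energy(img):
    """e(I) = |dI/dx| + |dI/dy|, here with NumPy's finite differences
    rather than the Sobel operator."""
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx) + np.abs(gy)

def min_vertical_seam(e):
    """Cumulative minimal energy from top to bottom, then backtracking:
    one column index x(i) per row, with |x(i) - x(i-1)| <= 1."""
    n, m = e.shape
    cost = e.astype(float).copy()
    for i in range(1, n):
        for j in range(m):
            cost[i, j] += cost[i - 1, max(0, j - 1):j + 2].min()
    seam = [int(np.argmin(cost[-1]))]
    for i in range(n - 2, -1, -1):
        j = seam[-1]
        lo = max(0, j - 1)
        seam.append(lo + int(np.argmin(cost[i, lo:j + 2])))
    return seam[::-1]  # column index for each row, top to bottom
```

On a flat image with a vertical step edge, the returned seam stays in the flat (low-energy) part and never crosses the edge.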
According to the embodiments of the present invention, when faces are identified in the images, the stitching border of the stitch avoids the first group photo object and the second group photo object. This can be realized in several ways:
In one embodiment, when the stitching border is found to pass through the first group photo object and/or the second group photo object, the stitching border is modified so that part of it coincides with the border of the first group photo object and/or the second group photo object.
In another embodiment, when the stitching border is found to pass through the first group photo object and/or the second group photo object, the energy values of the part of the seam that passes through the first group photo object and/or the second group photo object are adjusted: for example, the energy value of every point of that part is raised to the maximum (it can of course also be raised to other values), so that the energy sum of this seam is increased and the seam will not be chosen as the stitching border. The seam carving algorithm is then rerun on the adjusted image to find the stitching border. The energy adjustment of the parts of seams that pass through the first group photo object and/or the second group photo object can also be performed after face recognition is completed and the first and second group photo objects are determined, but before the seam carving algorithm starts searching for the stitching border.
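This energy-raising embodiment can be sketched as follows. Everything here is illustrative: the constant `BIG`, the box format and the function names are assumptions, and the same dynamic-programming seam search as above is repeated so that the sketch is self-contained:

```python
import numpy as np

BIG = 1e6  # large energy assigned inside protected regions (assumed value)

def protect(e, boxes):
    """Raise the energy inside group-photo-object rectangles so that the
    minimal-energy seam lies elsewhere.  Boxes are (top, left, h, w)."""
    e = e.astype(float).copy()
    for t, l, h, w in boxes:
        e[t:t + h, l:l + w] = BIG
    return e

def min_vertical_seam(e):
    """Dynamic programming as before: cumulative minimal energy from top
    to bottom, then backtracking one column index per row."""
    n, m = e.shape
    cost = e.astype(float).copy()
    for i in range(1, n):
        for j in range(m):
            cost[i, j] += cost[i - 1, max(0, j - 1):j + 2].min()
    seam = [int(np.argmin(cost[-1]))]
    for i in range(n - 2, -1, -1):
        j = seam[-1]
        lo = max(0, j - 1)
        seam.append(lo + int(np.argmin(cost[i, lo:j + 2])))
    return seam[::-1]
```

With a uniform energy map and a protected block covering the left columns, the seam is forced into the unprotected columns on the right.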
In some embodiments of this aspect, the seam carving algorithm is applied only to the part of the two images that has a common background, which reduces the amount of computation. "Background" here refers to the part of the image other than the first group photo object and the second group photo object; taking the figure as an example, the wall, the ceiling, etc., other than the first and second group photo objects, are the background. Because the first image and the second image are shot separately, it is hard to guarantee that the backgrounds of the two images are fully consistent, and a part where the backgrounds are inconsistent obviously cannot serve as the stitching border. Therefore, applying the seam carving algorithm only to the part of the two images with a common background reduces invalid computation and improves efficiency.
The viewfinder range, shooting angle, shooting distance, etc., of the first image and the second image are generally not exactly the same; in this case, these differences between the two images generally have to be eliminated during stitching to realize seamless stitching. To this end, in one embodiment of the present invention, after step 110 and before step 140, the method further comprises:
115. Perform image registration on the first image and the second image.
So-called "image registration" refers to mapping two or more images into the same coordinate system, enabling operations such as overlapping, target identification and stacked exposure between different images. Mathematically, suppose two images I_F(x, y) and I_M(x, y) are given, where (x, y) is a point of the images. The goal of an image registration algorithm is to find a transformation T: Ω_F → Ω_M that maximizes the similarity c(T; I_F, I_M) of the two images after the transformation. The similarity measure c depends on the transformation and is computed by a function of the data of the two images that measures their degree of similarity; for example, it can be based on the sum of squared differences of the gray values:
∫_{Ω_F} [I_F(x, y) - I_M(T(x, y))]² dx dy
An optimization algorithm finally finds the transformation T that optimizes this function (minimizing the squared difference maximizes the similarity).
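As a concrete, drastically simplified illustration of optimizing such a similarity measure, the sketch below exhaustively searches integer translations (a tiny transformation space compared with the perspective transformations discussed later; the function name and parameters are illustrative) for the shift that minimizes the squared-difference measure over the overlap of the two images:

```python
import numpy as np

def register_translation(i_f, i_m, max_shift=3):
    """Return the integer shift (dy, dx) minimizing the mean squared
    difference between i_f and i_m over their overlap, i.e. the T for
    which i_m(r, c) best matches i_f(r + dy, c + dx)."""
    best, best_err = (0, 0), np.inf
    n, m = i_f.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping windows of the two images under this shift
            f = i_f[max(0, dy):n + min(0, dy), max(0, dx):m + min(0, dx)]
            g = i_m[max(0, -dy):n + min(0, -dy), max(0, -dx):m + min(0, -dx)]
            err = float(((f - g) ** 2).mean())
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Cropping two overlapping windows out of one random image recovers exactly the offset between them.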
Image registration can be realized by many algorithms; in one embodiment of the present invention, it is realized by the method of feature point matching. Feature points are sets of locatable points with special geometric meaning (such as discontinuities, corner points of figures, line intersections, etc.). As shown in Fig. 4, the detailed process is as follows:
1151. Extract the feature points in the first image and the second image.
As shown in Fig. 5, feature points are generally chosen in the parts of the image where the texture is rich; here, the Harris corner detection method or the SIFT algorithm can generally be used to extract the feature points.
1152. Match the feature points of the first image and the second image.
By matching the feature points, the common part of the first image and the second image can be found, and the differences between the first image and the second image in viewfinder range, shooting angle, shooting distance, etc., can be derived. Mathematically, matching the feature points means establishing a one-to-one correspondence between the feature points of the first image and those of the second image. Many techniques can realize feature point matching. In one embodiment, feature point matching is realized by a block matching algorithm: the image patch represented by a first feature point in the first image is compared with the image patch represented by each feature point in the second image, and the feature point of the second image with the highest similarity is taken as the match of the first feature point. In another embodiment, the matching algorithm matches the SIFT features of the first feature point in the first image and of second feature points in the second image; this can be a full-image search, or, instead of a full-image search, the first feature point can be matched only within a certain coverage area around the second feature points.
1153. According to the matching result, perform a global transformation on at least one of the first image and the second image, so that the similarity of the common part of the first image and the second image is maximized.
Here, the similarity can be measured with the similarity measure which, as mentioned above, depends on the global transformation and is computed by a function of the data of the two images that measures their degree of similarity. A global transformation transforms the entire image, instead of transforming only part of it. This global transformation T can usually be expressed with a perspective matrix:
        | m11  m12  m13 |
    T = | m21  m22  m23 |
        | m31  m32   1  |
Each element of this perspective matrix can be computed by the Random Sample Consensus (RANSAC) algorithm.
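The eight unknowns m11 through m32 of the perspective matrix can be recovered from four point correspondences, the minimal set that RANSAC repeatedly samples. The following sketch (illustrative; names and structure are assumptions) sets up the standard linear system from the projective relations u = (m11·x + m12·y + m13)/(m31·x + m32·y + 1) and v = (m21·x + m22·y + m23)/(m31·x + m32·y + 1):

```python
import numpy as np

def homography_from_4(src, dst):
    """Solve the eight unknowns (m11..m32) of the perspective matrix from
    four (x, y) -> (u, v) correspondences, no three of them collinear."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # cross-multiplied projective relations, linear in m11..m32
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    m = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.array([[m[0], m[1], m[2]],
                     [m[3], m[4], m[5]],
                     [m[6], m[7], 1.0]])

def apply_T(T, pt):
    """Apply the perspective matrix to a point (x, y)."""
    u, v, w = T @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)
```

A full RANSAC loop would sample four matched pairs at a time, solve as above, count inliers, and keep the matrix with the most support.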
The effect of stitching according to an embodiment of the present invention can be seen in Fig. 7.
With the technical solutions of the above embodiments, the present invention seamlessly stitches the two group photo images without cutting through any person participating in the group photo, improving the accuracy of the stitching.
In a group photo with at least three participants, when the first image and the second image are shot, some of the people can stay still while the others take turns joining the shot; as a result, some people are common to the first image and the second image. Because the texture on a person is rich, choosing feature points in the regions where these common people are located can improve the success rate and efficiency of feature point matching. For this reason, as shown in Fig. 3, in one embodiment, after step 100 the method further comprises:
101. Perform face recognition on the first image.
112. Perform face recognition on the second image, and determine a common face of the first image and the second image.
Face recognition here means recognizing whose face a face is; with this technology it can be determined whether the two images have a common face. It should be noted that, in the various embodiments of the present invention, "face detection" and "face recognition" refer to processes, not to any particular algorithm or program: some algorithms may perform face detection and face recognition at the same time, but in the present invention this is still regarded as performing face detection and face recognition respectively.
Steps 101 and 112 can be performed after step 130, either one after the other or simultaneously; they can also be interleaved with steps 100, 110, 120 and 130, performed in the order 100 → 101 → 110 → 112 → 120 → 130. In particular, when steps 101 and 112 are performed one after the other, between steps 100 and 110 the method can further comprise:
102. Perform face recognition on the viewfinder picture, determine a common face of the viewfinder picture and the first image, and mark the common face or the region where the common face is located.
Current digital photographic devices generally adopt electronic viewfinding: the image recorded in real time by the image sensor is shown on a display screen to help the user frame the shot. Step 102 above, which determines in the viewfinder picture a common face with the first image and marks the common face or the region where it is located, can help the user frame the second image as consistently as possible with the first image when shooting it. The marking can use one or more square or oval frames to mark the common face or the region where the common face is located. Fig. 6 gives an example of marking a common face.
Besides marking common faces, the regions where faces are detected may also be marked. As shown in Fig. 6, before the second image is taken, the regions where faces are detected in the first image may be marked; the marking may use one or more square or oval frames to mark the positions of the detected faces or the regions where the detected faces are located. That is, as shown in Fig. 12 (the steps after step 110 are not shown in Fig. 12), before step 110 and after step 120, the method further comprises:
105: performing face detection on the viewfinder picture, and determining and marking, in the viewfinder picture, the positions of the faces detected in the first image.
Marking, in the viewfinder picture, the positions of the faces in the first image helps the user adjust the viewfinder range when taking the second image, avoiding overlap between the positions of the people in the first image and the second image.
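As a purely illustrative sketch (not part of the patent's disclosure), marking a detected-face region on a grayscale viewfinder frame could look like the following; the function name, box format and NumPy representation are all assumptions made for the example:

```python
import numpy as np

def mark_region(frame, box, value=255):
    """Overlay a rectangular frame on `frame` to mark a face region.

    box = (x0, y0, x1, y1) is a region where a face was detected in
    the first image. Only the border pixels are overwritten, so the
    live viewfinder picture stays visible inside the marked region.
    """
    x0, y0, x1, y1 = box
    frame[y0, x0:x1 + 1] = value   # top edge
    frame[y1, x0:x1 + 1] = value   # bottom edge
    frame[y0:y1 + 1, x0] = value   # left edge
    frame[y0:y1 + 1, x1] = value   # right edge
    return frame

view = np.zeros((8, 8), dtype=np.uint8)   # stand-in viewfinder frame
mark_region(view, (2, 1, 5, 4))
print(int(view.sum()))  # 12 border pixels set to 255, i.e. 3060
```

An oval frame, as mentioned above, would be drawn analogously by testing each pixel against an ellipse equation instead of the rectangle border.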
Step 1151 may specifically comprise:
11510: extracting feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where the common face is located.
Here, as mentioned above, the region where a common face is located may be a simple elliptical or rectangular region, a human-shaped region, or the union of such elliptical, rectangular or human-shaped regions.
For two images that contain a common face, since the texture on the human body is relatively rich, choosing feature points in the region where the common face is located can improve the efficiency and accuracy of feature point matching.
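The idea of confining feature points to the common-face region can be sketched as follows; this toy example (invented data, brute-force nearest-neighbour matching with NumPy) illustrates the principle and is not the patent's implementation:

```python
import numpy as np

def match_in_region(pts_a, desc_a, pts_b, desc_b, region):
    """Match feature points, keeping only those inside `region`.

    region = (x0, y0, x1, y1) is the rectangle around the common face.
    Returns (index_in_a, index_in_b) pairs matched by smallest
    descriptor distance.
    """
    x0, y0, x1, y1 = region
    idx_a = [i for i, (x, y) in enumerate(pts_a)
             if x0 <= x <= x1 and y0 <= y <= y1]
    idx_b = [j for j, (x, y) in enumerate(pts_b)
             if x0 <= x <= x1 and y0 <= y <= y1]
    pairs = []
    for i in idx_a:
        dists = [np.linalg.norm(desc_a[i] - desc_b[j]) for j in idx_b]
        pairs.append((i, idx_b[int(np.argmin(dists))]))
    return pairs

# Toy data: one keypoint per image inside the face region, one outside.
pts_a = [(10, 10), (100, 100)]
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
pts_b = [(12, 11), (200, 50)]
desc_b = np.array([[0.9, 0.1], [5.0, 5.0]])
print(match_in_region(pts_a, desc_a, pts_b, desc_b, (0, 0, 50, 50)))
# only the in-region pair (0, 0) survives
```

Restricting the candidate sets before matching both shrinks the search and, as the text argues, concentrates it on the richly textured region, which is why it can raise matching accuracy.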
An embodiment of the present invention further provides a photographing apparatus for realizing a self-service group photo. As shown in Fig. 9, the photographing apparatus comprises:
a shooting unit 210, configured to take a first image and a second image, where the first image comprises a first group photo object and the second image comprises a second group photo object;
Here, the first group photo object is the region occupied by one or more persons in the first image, and the second group photo object is the region occupied by one or more persons in the second image.
a face detection unit 220, configured to perform face detection on the first image and the second image, and determine that the regions where faces are detected are the first group photo object and the second group photo object; and
an image stitching unit 230, configured to stitch the first image and the second image, where the stitching border avoids the first group photo object and the second group photo object.
Here, the two images may be stitched according to a seam carving method.
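The seam carving idea can be sketched as a dynamic program over a per-pixel energy map. The toy below (NumPy, invented energy values) shows how raising the energy inside a group photo object steers the minimum-energy vertical seam, i.e. the stitching border, around it; it is an illustration of the technique, not the patent's implementation:

```python
import numpy as np

def find_vertical_seam(energy):
    """Column index, per row, of the minimum-energy vertical seam."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):                      # accumulate costs downward
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]          # cheapest bottom cell
    for y in range(h - 2, -1, -1):             # backtrack upward
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]

# Column 1 is the cheapest path, but boosting the energy of a
# "group photo object" at rows 1-2 of column 1 pushes the seam
# to the next-cheapest column, 3.
e = np.ones((4, 5))
e[:, 1] = 0.1
e[:, 3] = 0.2
e[1:3, 1] += 100.0   # raised energy inside the group photo object
print(find_vertical_seam(e))   # [3, 3, 3, 3]
```

Without the boost the seam would run down column 1, straight through the faces; raising the energy there is exactly the mechanism claim 2 describes for keeping the stitching border out of the group photo objects.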
The photographing apparatus may further comprise:
an image registration unit 240, configured to perform image registration on the first image and the second image.
In one embodiment, the image registration unit 240 comprises:
a feature point extraction unit 2401, configured to extract feature points in the first image and the second image;
a feature point matching unit 2402, configured to match the feature points in the first image and the second image; and
a global transformation unit 2403, configured to perform, according to the matching result, a global transformation on at least one of the first image and the second image, so as to maximize the similarity of the common part of the first image and the second image.
In another embodiment, the photographing apparatus may further comprise:
a face recognition unit 250, configured to perform face recognition on the first image and the second image, and determine the faces common to the first image and the second image.
The feature point extraction unit 2401 is then specifically configured to extract feature points in the first image and the second image, where at least one pair of the feature points is located in the region where a common face is located.
In another embodiment, the face recognition unit 250 is further configured to perform face recognition on the viewfinder picture, determine the faces common to the viewfinder picture and the first image, and mark the common faces or the regions where the common faces are located.
In yet another embodiment, the face detection unit 220 is further configured to perform face detection on the viewfinder picture, and determine and mark, in the viewfinder picture, the positions of the faces detected in the first image.
The implementation of the functions of the foregoing units and modules has been described in detail in the method embodiments above, and is not repeated here.
Another embodiment of the present invention further provides a photographing apparatus. As shown in Fig. 10, it comprises:
a camera 310, configured to take a first image and a second image, where the first image comprises a first group photo object and the second image comprises a second group photo object; and
a processor 320, configured to perform face detection on the first image and the second image, determine that the regions where faces are detected are the first group photo object and the second group photo object, and stitch the first image and the second image such that the stitching border avoids the first group photo object and the second group photo object.
Optionally, the photographing apparatus further comprises a memory 330, configured to store the first image, the second image and the image obtained by the stitching performed by the processor 320.
The processor may specifically be a general-purpose central processing unit (CPU), or a dedicated image processor such as a graphics processing unit (GPU).
As shown in Fig. 11, the camera 310 generally comprises a lens 3101, an image sensor 3102 and an analog-to-digital (A/D) conversion circuit 3103, and optionally further comprises a focusing mechanism 3104 and an auto-focus module 3105, where the output of the image sensor 3102 is coupled to the input of the A/D conversion circuit 3103, and the output of the A/D conversion circuit 3103 is coupled to the processor 320. Optionally, the output of the A/D conversion circuit 3103 is also coupled to the memory 330.
Optionally, the processor 320 is further configured to perform image registration on the first image and the second image before stitching them.
Image registration may be realized by many algorithms; in one embodiment of the present invention, it is realized by a feature point matching method. Feature points are points of special geometric meaning that can be located (such as discontinuity points, turning points of a figure, line intersections, etc.). As shown in the figure, the detailed process is as follows:
1151: extracting feature points in the first image and the second image;
1152: matching the feature points in the first image and the second image;
1153: performing, according to the matching result, a global transformation on at least one of the first image and the second image, so as to maximize the similarity of the common part of the first image and the second image.
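Steps 1151 to 1153 can be illustrated with a least-squares fit of a global affine transform from already-matched point pairs (a common choice of transform model; the patent does not fix a specific one). The data below are invented, and a real implementation would typically add a robust estimator such as RANSAC to reject mismatches:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform A mapping src points to dst.

    src, dst: (N, 2) arrays of matched feature point coordinates.
    Solves dst = [src | 1] @ A.T in the least-squares sense, i.e.
    the global transformation that best aligns the common part of
    the two images under an affine model.
    """
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                     # (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    return A.T                                     # (2, 3)

# Matched pairs related by a pure shift of (5, -3) (toy data).
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([5.0, -3.0])
A = fit_affine(src, dst)
print(np.round(A, 6))
# identity rotation/scale with translation column (5, -3)
```

Applying the recovered transform to one image aligns the common parts before stitching; step 1152's matching would supply the src/dst pairs.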
In one embodiment, the processor 320 is further configured to perform face recognition on the first image and the second image, and determine the faces common to the first image and the second image. Accordingly, when the processor 320 extracts the feature points in the first image and the second image, at least one pair of the feature points is located in the region where a common face is located.
Optionally, the photographing apparatus may further comprise an electronic viewfinder 340, the input of which is coupled to the output of the camera 310. In one embodiment, the processor 320 is further configured to perform face recognition on a third image output in real time by the camera 310 to the electronic viewfinder 340, and determine the faces common to the first image and the third image; accordingly, the electronic viewfinder 340 is configured to mark the common faces. In another embodiment, the processor 320 is further configured to perform face detection on the third image output in real time by the camera 310 to the electronic viewfinder 340, and determine, in the third image, the positions of the faces detected in the first image; accordingly, the electronic viewfinder 340 is configured to mark, in the third image, the positions of the faces detected in the first image. The electronic viewfinder 340 may specifically be a display screen, an electronic viewfinder eyepiece, etc.
The photographing apparatus may specifically be a digital camera, a mobile phone with a camera function, a tablet computer, a notebook computer, a wearable electronic device, etc.
A person of ordinary skill in the art will appreciate that all or part of the steps of the methods in the foregoing embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a flash memory, a magnetic disk or an optical disc.
In addition, the technical features described in the technologies, systems, devices, methods and embodiments respectively illustrated in the foregoing embodiments may be combined with one another to form other modules, methods, devices, systems and technologies that do not depart from the spirit and principles of the present invention; such combined modules, methods, devices, systems and technologies all fall within the protection scope of the present invention.
The method of realizing a self-service group photo and the photographing apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to set forth the principles and implementation manners of the present invention, and the descriptions of the above embodiments serve only to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementation manners and application scope. In summary, this description should not be construed as a limitation of the present invention.

Claims (23)

1. A method of realizing a self-service group photo, comprising:
taking a first image;
taking a second image;
performing face detection on the first image, and determining that the region where faces are detected in the first image is a first group photo object;
performing face detection on the second image, and determining that the region where faces are detected in the second image is a second group photo object; and
stitching the first image and the second image, wherein the stitching border avoids the first group photo object and the second group photo object.
2. the method for claim 1, is characterized in that, described described the first image and described the second image is spliced, and described the first group photo object and described the second group photo object are avoided in the stitching border of splicing, specifically comprise:
Use finedraw cutting algorithm to splice described the first image and described the second image, wherein in the time determining described stitching border, improve the energy value of finedraw through the part of described the first group photo object and the second group photo object; Or,
Described the first image and described the second image are spliced, in the time finding at least one that can take a group photo in object through described the first group photo object and second on described stitching border, revise described stitching border its part is overlapped with the border of the first group photo object and/or the second group photo object.
3. The method according to claim 1 or 2, wherein, before the stitching the first image and the second image, the method further comprises:
performing image registration on the first image and the second image.
4. The method according to claim 3, wherein the performing image registration on the first image and the second image specifically comprises:
extracting feature points in the first image and the second image;
matching the feature points in the first image and the second image; and
performing, according to the matching result, a global transformation on at least one of the first image and the second image, so as to maximize the similarity of the common part of the first image and the second image.
5. The method according to claim 4, wherein, before the performing image registration on the first image and the second image, the method further comprises:
performing face recognition on the first image; and
performing face recognition on the second image, and determining the faces common to the first image and the second image;
and wherein the extracting feature points in the first image and the second image comprises:
extracting feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where a common face is located.
6. The method according to any one of claims 1 to 5, wherein, after the taking a first image and before the taking a second image, the method further comprises:
performing face recognition on the viewfinder picture, determining the faces common to the viewfinder picture and the first image, and marking the common faces or the regions where the common faces are located.
7. The method according to any one of claims 1 to 6, wherein, before the taking a second image and after the performing face detection on the first image, the method further comprises:
performing face detection on the viewfinder picture, and determining and marking, in the viewfinder picture, the positions of the faces detected in the first image.
8. A photographing apparatus, comprising:
a shooting unit, configured to take a first image and a second image;
a face detection unit, configured to perform face detection on the first image and the second image, and determine that the regions where faces are detected are a first group photo object and a second group photo object; and
an image stitching unit, configured to stitch the first image and the second image, wherein the stitching border avoids the first group photo object and the second group photo object.
9. The photographing apparatus according to claim 8, wherein the image stitching unit is specifically configured to:
stitch the first image and the second image using a seam carving algorithm, wherein, when finding the stitching border, the energy value of the part of a seam that passes through the first group photo object or the second group photo object is raised; or
stitch the first image and the second image, and when the found stitching border passes through at least one of the first group photo object and the second group photo object, modify the stitching border so that the passing part coincides with the border of the first group photo object and/or the second group photo object.
10. The photographing apparatus according to claim 8 or 9, further comprising:
an image registration unit, configured to perform image registration on the first image and the second image.
11. The photographing apparatus according to claim 9, wherein the image registration unit comprises:
a feature point extraction unit, configured to extract feature points in the first image and the second image;
a feature point matching unit, configured to match the feature points in the first image and the second image; and
a global transformation unit, configured to perform, according to the matching result, a global transformation on at least one of the first image and the second image, so as to maximize the similarity of the common part of the first image and the second image.
12. The photographing apparatus according to claim 9, further comprising:
a face recognition unit, configured to perform face recognition on the first image and the second image, and determine the faces common to the first image and the second image;
wherein the feature point extraction unit is specifically configured to extract feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where a common face is located.
13. The photographing apparatus according to any one of claims 8 to 12, wherein:
the face recognition unit is further configured to perform face recognition on the viewfinder picture, determine the faces common to the viewfinder picture and the first image, and mark the common faces or the regions where the common faces are located.
14. The photographing apparatus according to any one of claims 8 to 13, wherein:
the face detection unit is further configured to perform face detection on the viewfinder picture, and determine and mark, in the viewfinder picture, the positions of the faces detected in the first image.
15. A photographing apparatus, comprising:
a camera, configured to take a first image and a second image; and
a processor, configured to perform face detection on the first image and the second image, determine that the regions where faces are detected are a first group photo object and a second group photo object, and stitch the first image and the second image, wherein the stitching border avoids the first group photo object and the second group photo object.
16. The photographing apparatus according to claim 15, wherein the stitching the first image and the second image, the stitching border avoiding the first group photo object and the second group photo object, specifically comprises:
stitching the first image and the second image using a seam carving algorithm, wherein, when finding the stitching border, the energy value of the part of a seam that passes through the first group photo object or the second group photo object is raised; or
stitching the first image and the second image, and when the found stitching border passes through at least one of the first group photo object and the second group photo object, modifying the stitching border so that the passing part coincides with the border of the first group photo object and/or the second group photo object.
17. The photographing apparatus according to claim 15 or 16, wherein:
the processor is further configured to perform image registration on the first image and the second image before stitching them.
18. The photographing apparatus according to claim 16, wherein the performing image registration on the first image and the second image specifically comprises:
extracting feature points in the first image and the second image;
matching the feature points in the first image and the second image; and
performing, according to the matching result, a global transformation on at least one of the first image and the second image, so as to maximize the similarity of the common part of the first image and the second image.
19. The photographing apparatus according to claim 17, wherein:
the processor is further configured to perform face recognition on the first image and the second image, and determine the faces common to the first image and the second image;
and the extracting feature points in the first image and the second image comprises:
extracting feature points in the first image and the second image, wherein at least one pair of the feature points is located in the region where a common face is located.
20. The photographing apparatus according to any one of claims 15 to 19, further comprising:
an electronic viewfinder, the input of which is coupled to the output of the camera;
wherein the processor is further configured to perform face recognition on a third image output in real time by the camera to the electronic viewfinder, and determine the faces common to the first image and the third image; and
the electronic viewfinder is configured to mark the common faces.
21. The photographing apparatus according to any one of claims 15 to 20, further comprising:
an electronic viewfinder, the input of which is coupled to the output of the camera;
wherein the processor is further configured to perform face detection on a third image output in real time by the camera to the electronic viewfinder, and determine, in the third image, the positions of the faces detected in the first image; and
the electronic viewfinder is configured to mark, in the third image, the positions of the faces detected in the first image.
22. The photographing apparatus according to any one of claims 15 to 21, further comprising:
a memory, configured to store the first image, the second image and the image obtained by the stitching performed by the processor.
23. The photographing apparatus according to any one of claims 15 to 22, wherein the camera comprises:
a lens, an image sensor and an analog-to-digital (A/D) conversion circuit, wherein the output of the image sensor is coupled to the input of the A/D conversion circuit, and the output of the A/D conversion circuit is coupled to the processor.
CN201480000693.2A 2014-01-03 2014-01-03 Method of realizing self-service group photo and photographing apparatus Active CN104170371B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/070054 WO2015100723A1 (en) 2014-01-03 2014-01-03 Method and photographing device for implementing self-service group photo taking

Publications (2)

Publication Number Publication Date
CN104170371A true CN104170371A (en) 2014-11-26
CN104170371B CN104170371B (en) 2017-11-24

Family

ID=51912376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480000693.2A Active CN104170371B (en) 2014-01-03 2014-01-03 Realize the method and camera installation of self-service group photo

Country Status (2)

Country Link
CN (1) CN104170371B (en)
WO (1) WO2015100723A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10582119B2 (en) 2017-07-26 2020-03-03 Sony Corporation Image processing method and device for composite selfie image composition for remote users

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090322906A1 (en) * 2008-06-26 2009-12-31 Casio Computer Co., Ltd. Imaging apparatus, imaged picture recording method, and storage medium storing computer program
CN103186763A (en) * 2011-12-28 2013-07-03 富泰华工业(深圳)有限公司 Face recognition system and face recognition method
CN103491299A (en) * 2013-09-17 2014-01-01 宇龙计算机通信科技(深圳)有限公司 Photographic processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4655054B2 (en) * 2007-02-26 2011-03-23 富士フイルム株式会社 Imaging device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469157A (en) * 2014-12-08 2015-03-25 广东欧珀移动通信有限公司 Camera multi-person shooting method and device
CN104469157B (en) * 2014-12-08 2017-11-07 广东欧珀移动通信有限公司 The method and apparatus that a kind of many people of camera shoot
CN105631804A (en) * 2015-12-24 2016-06-01 小米科技有限责任公司 Image processing method and device
CN105631804B (en) * 2015-12-24 2019-04-16 小米科技有限责任公司 Image processing method and device
WO2018113203A1 (en) * 2016-12-24 2018-06-28 华为技术有限公司 Photographing method and mobile terminal
CN106981048A (en) * 2017-03-31 2017-07-25 联想(北京)有限公司 A kind of image processing method and device
CN106981048B (en) * 2017-03-31 2020-12-18 联想(北京)有限公司 Picture processing method and device
CN111466112A (en) * 2018-08-10 2020-07-28 华为技术有限公司 Image shooting method and electronic equipment
CN110290329A (en) * 2019-08-06 2019-09-27 珠海格力电器股份有限公司 A kind of image composition method
CN111050072A (en) * 2019-12-24 2020-04-21 Oppo广东移动通信有限公司 Method, equipment and storage medium for remote co-shooting
CN114205512A (en) * 2020-09-17 2022-03-18 华为技术有限公司 Shooting method and device
WO2022057384A1 (en) * 2020-09-17 2022-03-24 华为技术有限公司 Photographing method and device

Also Published As

Publication number Publication date
CN104170371B (en) 2017-11-24
WO2015100723A1 (en) 2015-07-09

Similar Documents

Publication Publication Date Title
CN104170371A (en) Method of realizing self-service group photo and photographic device
WO2020010979A1 (en) Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand
EP3143596B1 (en) Method and apparatus for scanning and printing a 3d object
US9857589B2 (en) Gesture registration device, gesture registration program, and gesture registration method
US7349020B2 (en) System and method for displaying an image composition template
US10222877B2 (en) Method and apparatus for presenting panoramic photo in mobile terminal, and mobile terminal
CN107833219B (en) Image recognition method and device
CN105869113A (en) Panoramic image generation method and device
EP1890481B1 (en) Panorama photography method and apparatus capable of informing optimum photographing position
TW201915943A (en) Method, apparatus and system for automatically labeling target object within image
CN108830186B (en) Text image content extraction method, device, equipment and storage medium
US20220309836A1 (en) Ai-based face recognition method and apparatus, device, and medium
WO2018112788A1 (en) Image processing method and device
CN113076814B (en) Text area determination method, device, equipment and readable storage medium
CN109886208B (en) Object detection method and device, computer equipment and storage medium
US20210201532A1 (en) Image processing method and apparatus, and storage medium
CN106815809B (en) Picture processing method and device
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN106981048B (en) Picture processing method and device
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
KR20150058871A (en) Photographing device and stitching method of photographing image
WO2020244592A1 (en) Object pick and place detection system, method and apparatus
CN102012629A (en) Shooting method for splicing document images
WO2018042074A1 (en) A method, apparatus and computer program product for indicating a seam of an image in a corresponding area of a scene
CN110163192B (en) Character recognition method, device and readable medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171116

Address after: Metro Songshan Lake high tech Industrial Development Zone, Guangdong Province, Dongguan City Road 523808 No. 2 South Factory (1) project B2 -5 production workshop

Patentee after: HUAWEI terminal (Dongguan) Co., Ltd.

Address before: 518129 Longgang District, Guangdong, Bantian HUAWEI base B District, building 2, building No.

Patentee before: Huawei Device Co., Ltd.

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee after: Huawei Device Co., Ltd.

Address before: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee before: HUAWEI terminal (Dongguan) Co., Ltd.

CP01 Change in the name or title of a patent holder