CN104104867A - Method for controlling image photographing device for photographing and device thereof - Google Patents
Method for controlling image photographing device for photographing and device thereof
- Publication number
- CN104104867A (Application CN201410176036.5A)
- Authority
- CN
- China
- Prior art keywords
- image capturing device
- face region
- human face
- capturing range
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides a method and device for controlling an image capturing device to capture an image. The method comprises: (a) identifying faces within the capturing range of the image capturing device; (b) detecting whether the identified faces are all within the capturing range; (c) when the identified faces are all within the capturing range, performing step (d) of calculating the area of the face region; (e) judging whether the area of the face region is consistent with a preset area; (f) when the area of the face region is inconsistent with the preset area, changing the focal length of the image capturing device and returning to step (d); and (g) when the area of the face region is consistent with the preset area, performing step (h) of controlling the image capturing device to capture the image. By changing the focal length of the image capturing device, the method and device make the subject occupy a suitable proportion of the capturing range.
Description
Technical field
The present invention belongs to the field of digital photography and, more particularly, relates to a method and device for controlling an image capturing device to capture images.
Background technology
At present, when a user takes photos with a camera or a mobile phone, the shooting angle of the device must be adjusted manually (for example, the photographer or the subject moves position) so that the subject is at a proper position within the capturing range of the device. Because manual adjustment carries a certain error, the position found in this way is not necessarily the proper one, nor can manual adjustment ensure that the subject occupies a suitable proportion of the capturing range, so the user experience is poor.
Summary of the invention
An object of the present invention is to provide a method and device for controlling an image capturing device to adjust its capturing orientation, so that a subject is at an optimal position within the capturing range of the image capturing device.
One aspect of the present invention provides a method for controlling an image capturing device to capture an image, the method comprising: (a) identifying faces within the capturing range of the image capturing device; (b) detecting whether the identified faces are all within the capturing range; (c) when the identified faces are all within the capturing range, performing step (d): calculating the area of the face region; (e) judging whether the area of the face region is consistent with a preset area; (f) when the area of the face region is inconsistent with the preset area, changing the focal length of the image capturing device and returning to step (d); (g) when the area of the face region is consistent with the preset area, performing step (h): controlling the image capturing device to capture the image.
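The claimed steps (a)-(h) form a feedback loop. A minimal Python sketch follows, assuming a hypothetical `device` object whose methods (`detect_faces`, `all_faces_within_range`, `face_region_area`, `change_orientation`, `change_focal_length`, `capture`) stand in for the operations described above; none of these names appear in the original:

```python
def control_capture(device, preset_area, tolerance):
    """Sketch of steps (a)-(h): identify faces, check framing,
    zoom until the face-region area matches the preset area, capture.

    `device` is a hypothetical duck-typed object; `tolerance` models
    the preset range within which the two areas count as consistent.
    """
    while True:
        faces = device.detect_faces()                      # step (a)
        if not device.all_faces_within_range(faces):       # step (b)
            device.change_orientation()                    # optional step (i)
            continue
        while True:
            area = device.face_region_area(faces)          # step (d)
            if abs(area - preset_area) <= tolerance:       # step (e)
                return device.capture()                    # step (h)
            device.change_focal_length(area, preset_area)  # step (f)
```

The inner loop mirrors the claim's return to step (d) after each focal-length change; the outer loop mirrors the optional return to step (a).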
Optionally, the method of the present invention may further comprise step (i): when the identified faces are not all within the capturing range, changing the capturing orientation of the image capturing device and returning to step (a).
Optionally, step (i) comprises: when the identified faces are not all within the capturing range, detecting the relative positions, within the capturing range, of the principal facial features of the faces that are in the capturing range; determining, from those relative positions, the offset direction of the capturing orientation of the image capturing device with respect to the faces; and changing the capturing orientation of the image capturing device according to the offset direction.
Optionally, the step of changing the capturing orientation comprises: changing the capturing orientation of the image capturing device by controlling a pan-tilt head connected to it to rotate.
Optionally, the step of changing the capturing orientation comprises: prompting the user to change the capturing orientation of the image capturing device manually.
Optionally, the step of changing the capturing orientation comprises: controlling the pan-tilt head to rotate by a preset angle, thereby changing the capturing orientation of the image capturing device.
Optionally, the step of changing the capturing orientation comprises: when the pan-tilt head has been rotated to its limit value and the faces are still not all within the capturing range, prompting the user to change the capturing orientation manually, wherein the limit value is the maximum of the angular range through which the pan-tilt head can rotate.
Optionally, the method of the present invention may further comprise, when the area of the face region is consistent with the preset area, performing the steps of: (j) detecting whether the face region coincides with a preset region; (k) when the face region does not coincide with the preset region, changing the capturing orientation of the image capturing device and returning to step (j); (l) when the face region coincides with the preset region, performing step (h).
Optionally, the step of changing the capturing orientation comprises: changing the capturing orientation of the image capturing device by controlling the pan-tilt head connected to it to rotate.
Optionally, the step of changing the capturing orientation comprises: prompting the user to change the capturing orientation of the image capturing device manually.
Optionally, the step of changing the capturing orientation comprises: calculating a first distance between the coordinates of a first reference point on the face region and the coordinates of a corresponding second reference point on the preset region; obtaining the rotation angle of the pan-tilt head from the first distance; and controlling the pan-tilt head to rotate by that angle, thereby changing the capturing orientation of the image capturing device, wherein the relative position of the first reference point within the face region is the same as the relative position of the second reference point within the preset region.
Optionally, the step of obtaining the rotation angle of the pan-tilt head from the first distance is: controlling the pan-tilt head to rotate by a third preset angle; calculating a second distance between the coordinates of the first reference point on the face region after that rotation and the coordinates of the corresponding second reference point on the preset region; calculating the difference between the first distance and the second distance; and multiplying the quotient of the first distance and that difference by the third preset angle and then subtracting the third preset angle, to obtain the rotation angle of the pan-tilt head.
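The computation above admits a short worked sketch. Rotating by the third preset angle moves the reference point by (first distance − second distance); assuming the displacement is locally proportional to the rotation angle, the total angle for the full first distance is the first distance divided by that difference, times the third preset angle, and the angle already applied is then subtracted. A hypothetical helper:

```python
def remaining_rotation(d1, d2, theta3):
    """Angle still needed after a trial rotation of `theta3` degrees.

    The trial rotation moved the first reference point from distance
    d1 to distance d2, i.e. (d1 - d2) of displacement per theta3
    degrees.  The total angle for the whole distance d1 is therefore
    d1 / (d1 - d2) * theta3, of which theta3 is already applied.
    """
    if d1 == d2:
        raise ValueError("trial rotation produced no displacement")
    return d1 / (d1 - d2) * theta3 - theta3
```

For example, if a 1° trial rotation reduces the distance from 10 to 8 pixels, the remaining angle is 10/2 × 1° − 1° = 4°.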
Optionally, the step of changing the capturing orientation comprises: when the pan-tilt head has been rotated to its limit value and the face region still does not coincide with the preset region, prompting the user to change the capturing orientation manually, wherein the limit value is the maximum of the angular range through which the pan-tilt head can rotate.
Optionally, the step of detecting whether the face region coincides with the preset region comprises: when a single face is detected in the capturing range, judging whether the face region coincides with the preset region by detecting whether the coordinates of the first reference point on the face region are consistent with the coordinates of the corresponding second reference point on the preset region.
Optionally, the step of detecting whether the face region coincides with the preset region comprises: when multiple faces are detected in the capturing range, taking the union of the face regions of the multiple faces as the face region; (m) identifying a preset face from the face region; (n) dividing the face region into multiple subregions; (o) determining the subregion in which the preset face is located; (p) detecting whether the coordinates of the first reference point of that subregion are consistent with the coordinates of the corresponding second reference point on the preset region, so as to judge whether the face region coincides with the preset region.
Optionally, the preset face is obtained by capturing an image or by selecting a locally stored picture, and is stored in a local storage unit.
Another aspect of the present invention provides a device for controlling an image capturing device to capture an image, the device comprising: a first face identification unit, which identifies faces within the capturing range of the image capturing device; a first detection unit, which detects whether the identified faces are all within the capturing range; a first calculation unit, which calculates the area of the face region when the identified faces are all within the capturing range; a judging unit, which judges whether the area of the face region is consistent with a preset area; a zoom unit, which changes the focal length of the image capturing device when the area of the face region is inconsistent with the preset area, after which the first calculation unit recalculates the area of the face region at the changed focal length; and the image capturing device, which captures the image when the area of the face region is consistent with the preset area.
Optionally, the device of the present invention may further comprise a first rotation unit; when the identified faces are not all within the capturing range, the first rotation unit changes the capturing orientation of the image capturing device, and the first face identification unit then identifies the faces within the capturing range of the changed orientation.
Optionally, when the identified faces are not all within the capturing range, the first rotation unit may detect the relative positions, within the capturing range, of the principal facial features of the faces in the capturing range, determine from those relative positions the offset direction of the capturing orientation of the image capturing device with respect to the faces, and change the capturing orientation according to the offset direction.
Optionally, the first rotation unit changes the capturing orientation of the image capturing device by controlling a pan-tilt head connected to it to rotate.
Optionally, the device of the present invention may further comprise a first prompting unit; when the identified faces are not all within the capturing range, the first prompting unit prompts the user to change the capturing orientation of the image capturing device manually.
Optionally, the first rotation unit controls the pan-tilt head to rotate by a preset angle, thereby changing the capturing orientation of the image capturing device.
Optionally, the device of the present invention may further comprise a second prompting unit; when the first rotation unit has rotated the pan-tilt head to its limit value and the faces are still not all within the capturing range, the second prompting unit prompts the user to change the capturing orientation manually, wherein the limit value is the maximum of the angular range through which the pan-tilt head can rotate.
Optionally, the device of the present invention may further comprise: a second detection unit, which, when the area of the face region is consistent with the preset area, detects whether the face region coincides with a preset region; and a second rotation unit, which changes the capturing orientation of the image capturing device when the face region does not coincide with the preset region, after which the second detection unit detects whether the face region in the capturing range of the changed orientation coincides with the preset region, wherein the image capturing device captures the image when the face region coincides with the preset region.
Optionally, the second rotation unit changes the capturing orientation of the image capturing device by controlling the pan-tilt head connected to it to rotate.
Optionally, the device of the present invention may further comprise a third prompting unit; when the face region does not coincide with the preset region, the third prompting unit prompts the user to change the capturing orientation of the image capturing device manually.
Optionally, the second rotation unit calculates a first distance between the coordinates of a first reference point on the face region and the coordinates of a corresponding second reference point on the preset region, obtains the rotation angle of the pan-tilt head from the first distance, and controls the pan-tilt head to rotate by that angle, thereby changing the capturing orientation, wherein the relative position of the first reference point within the face region is the same as the relative position of the second reference point within the preset region.
Optionally, the second rotation unit controls the pan-tilt head to rotate by a third preset angle, then calculates a second distance between the coordinates of the first reference point on the face region after that rotation and the coordinates of the corresponding second reference point on the preset region, then calculates the difference between the first distance and the second distance, and finally multiplies the quotient of the first distance and that difference by the third preset angle and subtracts the third preset angle, to obtain the rotation angle of the pan-tilt head.
Optionally, the device of the present invention may further comprise a fourth prompting unit; when the second rotation unit has rotated the pan-tilt head to its limit value and the face region still does not coincide with the preset region, the fourth prompting unit prompts the user to change the capturing orientation manually, wherein the limit value is the maximum of the angular range through which the pan-tilt head can rotate.
Optionally, when the first detection unit detects a single face in the capturing range, the second detection unit judges whether the face region coincides with the preset region by detecting whether the coordinates of the first reference point on the face region are consistent with the coordinates of the corresponding second reference point on the preset region.
Optionally, when the first detection unit detects multiple faces in the capturing range, the second detection unit takes the union of the face regions of the multiple faces as the face region, identifies a preset face from the face region, divides the face region into multiple subregions, determines the subregion in which the preset face is located, and then detects whether the coordinates of the first reference point of that subregion are consistent with the coordinates of the corresponding second reference point on the preset region, so as to judge whether the face region coincides with the preset region.
Optionally, the second detection unit obtains the preset face by capturing an image or by selecting a locally stored picture, and stores the preset face in a local storage unit.
With the method and device of the present invention for controlling an image capturing device to capture an image, the capturing orientation of the device is changed by controlling the pan-tilt head connected to it to rotate, so that the subject is at a proper position within the capturing range of the image capturing device.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 illustrates a flowchart of a method for controlling an image capturing device to capture an image according to an embodiment of the present invention;
Fig. 2 illustrates a flowchart of a method for detecting whether the face region coincides with the preset region when multiple faces are detected in the capturing range;
Fig. 3 illustrates a block diagram of a device for controlling an image capturing device to capture an image according to an embodiment of the present invention.
Embodiment
Embodiments of the present invention will now be described in detail, examples of which are illustrated in the accompanying drawings, in which like reference numerals refer to like parts throughout. The embodiments are described below with reference to the drawings in order to explain the present invention.
Fig. 1 illustrates a flowchart of a method for controlling an image capturing device to capture an image according to an embodiment of the present invention.
Referring to Fig. 1, in step 101, faces within the capturing range of the image capturing device are identified. Various existing image recognition methods may be used here to identify the faces in the capturing range.
For example, the image capturing device may first be controlled to obtain an image of the capturing range, and the faces in that image are then identified. The image obtained here may be a preview image produced by the image capturing device before capturing.
In step 102, it is detected whether the identified faces are all within the capturing range.
For example, the facial features of a face may be identified in step 101, and step 102 may judge whether the identified face is entirely within the capturing range by detecting whether all of its facial features are within the capturing range: if all the facial features are within the capturing range, the identified face is entirely within the capturing range; if not, it is not.
For example, before step 102 it may also be detected whether a single face or multiple faces have been identified in the capturing range. This may be done by counting the facial features detected in the capturing range: when the count exceeds the number of features of a single face, multiple faces have been identified; when the count is less than or equal to the number of features of a single face, a single face has been identified.
For example, when there are multiple faces in the capturing range, whether the multiple identified faces are all within the capturing range may be judged by detecting whether the facial features of the multiple faces are all within the capturing range; when there is a single face, the same judgment is made on the facial features of that one face.
When the identified faces are not all within the capturing range, step 103 is performed: the capturing orientation of the image capturing device is changed, and the method returns to step 101.
Optionally, when the identified faces are not all within the capturing range, the relative positions, within the capturing range, of the principal facial features (for example, the eyes, nose and mouth) of the faces in the capturing range may be detected; the offset direction of the capturing orientation of the image capturing device with respect to the faces is determined from those relative positions; and the capturing orientation is changed according to the offset direction.
For example, when only part of the facial features of a face are within the capturing range, the offset direction of the capturing orientation with respect to the face can be determined from the relative positions of the detected features. If only the subject's nose, mouth and eyes are detected in the capturing range, and they are located at the left side of the capturing range, the capturing orientation of the device is offset to the right with respect to the subject; the capturing orientation therefore needs to be moved to the left, and is adjusted step by step until all the facial features are within the capturing range.
Optionally, in one example of changing the capturing orientation, when the identified faces are not all within the capturing range, the capturing orientation may be changed by controlling a pan-tilt head connected to the image capturing device to rotate. For example, the pan-tilt head may be controlled to rotate horizontally (for example, within an angular range of -45° to +45°) and/or vertically (for example, also within an angular range of -45° to +45°) so that the identified faces are all within the capturing range. For example, when the identified faces are not all within the capturing range, the pan-tilt head may be controlled to rotate by a preset angle at a time.
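Stepping the pan-tilt head by a preset angle while respecting its limit value can be sketched as follows; this is a minimal single-axis sketch, assuming the ±45° horizontal range given in the text as an example, with `step_pan` and `PAN_RANGE` as hypothetical names:

```python
PAN_RANGE = (-45.0, 45.0)  # example horizontal range from the text, degrees

def step_pan(current_angle, step):
    """Rotate the pan-tilt head by `step` degrees, clamped to PAN_RANGE.

    Returns (new_angle, at_limit).  When at_limit is True the limit
    value has been reached and the caller should prompt the user to
    change the capturing orientation manually.
    """
    lo, hi = PAN_RANGE
    new_angle = max(lo, min(hi, current_angle + step))
    at_limit = new_angle in (lo, hi)
    return new_angle, at_limit
```

A controller would call `step_pan` repeatedly (for example with a 1° step) until the faces are framed or `at_limit` becomes True.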
For example, if only the subject's nose, mouth and eyes are detected in the capturing range and they are located at the left side of the capturing range, the capturing orientation of the device is offset to the right with respect to the subject; the pan-tilt head may then be controlled to rotate horizontally to the left by a preset angle (for example, 1°) to change the capturing orientation. The value of the preset angle through which the pan-tilt head rotates may be set by the user as needed.
After step 103 has changed the capturing orientation, the method returns to step 101 to identify the faces in the capturing range of the changed orientation again, and step 102 then detects whether the identified faces are all within that capturing range; once the identified faces are all within the capturing range, the method continues with step 104.
Optionally, the step of changing the capturing orientation may further comprise: when the pan-tilt head connected to the image capturing device has been rotated to its limit value (for example, the maximum of its rotatable angular range) and the identified faces are still not all within the capturing range, prompting the user to change the capturing orientation manually. For example, the horizontal angular range of the pan-tilt head may be -45° to +45°; when the pan-tilt head has been rotated horizontally to the left to the limit value (for example, 45°) and the identified faces are still not all within the capturing range, the capturing orientation is too far to the right with respect to the subject, and rotating the pan-tilt head alone cannot bring all of the subject's facial features into the capturing range; the user is then prompted so that the photographer and/or the subject move position to change the capturing orientation.
Optionally, in another example of changing the capturing orientation, when the identified faces are not all within the capturing range, the user may be prompted to change the capturing orientation manually. For example, when the image capturing device is fixed and cannot rotate, the user may be prompted so that the photographer and/or the subject move position, bringing the subject's faces entirely into the capturing range of the changed orientation. The user may also be prompted to change the capturing orientation when the subject is too close to, or too far off-axis from, the image capturing device.
When the identified faces are all within the capturing range, step 104 is performed: the area of the face region is calculated. Various existing methods may be used here to calculate the area of the face region.
For example, when there is a single face in the capturing range, the face region of that face may be taken as the face region. In one example the face region is a rectangular region (hereinafter referred to as the first rectangle); however, the invention is not limited to this, and the face region may be a region of arbitrary shape (for example, an elliptical region or a circular region).
Optionally, the first rectangle may be determined as follows: calculate the ratio between the side length, in the direction perpendicular to the line joining the two eyes, of the rectangle enclosing the region of the eyes, nose and mouth, and the side length of the capturing range in the same direction. When the ratio is greater than a preset ratio, the rectangle enclosing the eyes, nose and mouth may be taken as the first rectangle, and the device may be judged to be currently capturing a close-up of the subject's face; when the ratio is less than or equal to the preset ratio, the rectangle enclosing the head may be taken as the first rectangle, and the device may be judged to be currently capturing a whole-body shot of the subject. Preferably, the preset ratio may be 1/7. Existing methods may be used to determine the region enclosing the eyes, nose and mouth and the region enclosing the head; this invention does not describe them in detail.
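The ratio test above can be sketched briefly. This is a sketch under the assumption that rectangles are `(x, y, width, height)` tuples with height measured perpendicular to the eye line; `choose_first_rectangle` and its parameters are hypothetical names:

```python
def choose_first_rectangle(features_rect, head_rect, frame_height,
                           preset_ratio=1 / 7):
    """Pick the first rectangle per the ratio test in the text.

    `features_rect` encloses the eyes, nose and mouth; `head_rect`
    encloses the head.  When the features rectangle's height exceeds
    preset_ratio of the frame height, the shot is a face close-up and
    the features rectangle is used; otherwise it is a whole-body shot
    and the head rectangle is used.
    """
    ratio = features_rect[3] / frame_height
    return features_rect if ratio > preset_ratio else head_rect
```

With a 700-pixel frame, a 120-pixel features rectangle (ratio ≈ 0.17 > 1/7) selects the features rectangle, while a 50-pixel one (ratio ≈ 0.07) selects the head rectangle.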
For example, when there are multiple faces in the capturing range, the union of the face regions of the multiple faces may be taken as the face region. In one example the face region is a rectangular region (hereinafter referred to as the second rectangle). For example, the second rectangle may be determined as the rectangle enclosing the regions of the eyes, noses and mouths of the multiple faces. Existing methods may be used to determine those regions; this invention does not describe them in detail.
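Taking the union of several face rectangles amounts to computing their common bounding box. A minimal sketch, assuming `(x, y, width, height)` tuples and the hypothetical name `union_rect`:

```python
def union_rect(rects):
    """Smallest rectangle (x, y, w, h) covering all face rectangles,
    i.e. the second rectangle formed from multiple faces."""
    x1 = min(x for x, y, w, h in rects)
    y1 = min(y for x, y, w, h in rects)
    x2 = max(x + w for x, y, w, h in rects)
    y2 = max(y + h for x, y, w, h in rects)
    return (x1, y1, x2 - x1, y2 - y1)
```

For instance, two faces at (0, 0, 10, 10) and (20, 5, 10, 10) yield a second rectangle of (0, 0, 30, 15).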
In step 105, it is judged whether the area of the face region is consistent with the preset area.
Optionally, the preset area is the area of a preset region; for example, a region may be chosen within the capturing range of the image capturing device and taken as the preset region. In one example, a rectangle (hereinafter referred to as the preset rectangle) may be chosen within the capturing range and taken as the preset region.
For example, the preset rectangle may be chosen from the capturing range as follows: the golden-section method may be used to determine four golden-section points within the capturing range, and the rectangle formed by the four points is taken as the preset rectangle. Determining the four golden-section points of the capturing range with the golden-section method is common knowledge in the art; this invention does not describe it in detail.
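One plausible reading of the golden-section construction is that the four points lie at the intersections of the golden-section lines of the frame. A sketch under that assumption, with `golden_points` and `preset_rect` as hypothetical names:

```python
PHI = (5 ** 0.5 - 1) / 2  # ~0.618, the golden-section ratio

def golden_points(width, height):
    """Four golden-section points of a width x height frame: the
    intersections of the vertical lines at (1-PHI)*width and PHI*width
    with the horizontal lines at (1-PHI)*height and PHI*height."""
    xs = ((1 - PHI) * width, PHI * width)
    ys = ((1 - PHI) * height, PHI * height)
    return [(x, y) for x in xs for y in ys]

def preset_rect(width, height):
    """Rectangle (x, y, w, h) spanned by the four golden points."""
    pts = golden_points(width, height)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```

For a 100 × 100 frame this gives a preset rectangle of roughly (38.2, 38.2, 23.6, 23.6), centered on the frame.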
Optionally, in step 105 whether the area of the face region is consistent with the preset area may be judged by comparing the two. When the difference between them exceeds a preset range (that is, when the area of the face region is too large or too small relative to the preset area), the two areas are inconsistent; when the difference is within the preset range, the two areas are consistent.
When the area of the face region is inconsistent with the preset area, step 106 is performed: the focal length of the image capturing device is changed, and the method returns to step 104.
Optionally, when the two areas are inconsistent, the focal length of the image capturing device may be changed automatically (for example, by digital zoom or optical zoom). Alternatively, the user may be prompted so that the photographer and/or the subject move position, achieving the same effect as changing the focal length.
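Steps 105 and 106 together form a small decision: either the areas are consistent, or the focal length is changed in one of two directions. A sketch, with `zoom_decision` and its string results as hypothetical conventions:

```python
def zoom_decision(face_area, preset_area, tolerance):
    """Combine step 105's consistency test with step 106's zoom.

    Returns None when the difference between the areas is within the
    preset range (`tolerance`); otherwise the direction in which to
    change the focal length: 'in' enlarges a too-small face region,
    'out' shrinks a too-large one.
    """
    diff = face_area - preset_area
    if abs(diff) <= tolerance:
        return None
    return "in" if diff < 0 else "out"
```

The caller loops on this until `None` is returned, matching the return to step 104 after each focal-length change.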
When the area of the face region is consistent with the preset area, step 107 is performed: it is detected whether the face region coincides with the preset region.
Optionally, when a single face is detected in the capturing range, whether the face region coincides with the preset region may be judged by detecting whether the coordinates of the first reference point on the face region are consistent with the coordinates of the corresponding second reference point on the preset region.
For example, where the face region is the first rectangle described above, the top-left vertex of the first rectangle may be taken as the first reference point and the top-left vertex of the preset rectangle as the second reference point; the coordinates of the two vertices are calculated and then compared. When the coordinates are inconsistent, the face region does not coincide with the preset region; when they are consistent, the face region coincides with the preset region. However, the invention is not limited to this: the first reference point may be any point within the first rectangle, and more than one reference point may be defined within the first rectangle.
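The single-face coincidence test can be sketched as a coordinate comparison of the two top-left vertices. The pixel slack `tol` is an assumption not spelled out in the text (which only requires the coordinates to be "consistent"); `regions_coincide` is a hypothetical name:

```python
def regions_coincide(face_rect, preset_region, tol=0):
    """Compare the first reference point (top-left vertex of the face
    rectangle) with the corresponding second reference point (top-left
    vertex of the preset rectangle); rectangles are (x, y, w, h)."""
    (fx, fy), (px, py) = face_rect[:2], preset_region[:2]
    return abs(fx - px) <= tol and abs(fy - py) <= tol
```

With `tol=0` this is the exact test from the text; a small positive `tol` would absorb one-pixel detection jitter.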
Fig. 2 illustrates a flowchart of a method for detecting whether the face region coincides with the preset region when multiple faces are detected in the capturing range.
Optionally, when multiple faces are detected in the capturing range, whether the face region coincides with the preset region may be detected as follows:
In step 201, the preset face is identified from the face region.
Optionally, the preset face may be obtained by capturing an image or by selecting a locally stored picture, and stored in a local storage unit. For example, obtaining the preset face may comprise: before multiple subjects are photographed, selecting one subject from among them, first capturing a photo of the selected subject, and then storing the photo in the local storage unit. When the faces in the capturing range are identified, the most recently stored photo may be extracted from the local storage unit; when a face consistent with the face in that photo is recognized in the capturing range, the recognized face is taken as the preset face. Various existing methods may be used here to identify the preset face in the capturing range.
In step 202, the face region is divided into multiple subregions.
For example, the face region may be divided into a left subregion, a middle subregion, and a right subregion.
In step 203, the subregion in which the preset face is located is determined. Various existing methods may be used here to determine the subregion in which the preset face is located.
In step 204, whether the coordinates of a first reference point of the subregion in which the preset face is located are consistent with the coordinates of a corresponding second reference point of the preset region is detected, so as to judge whether the face region overlaps the preset region.
Alternatively, when the coordinates of the first reference point of the subregion in which the preset face is located are consistent with the coordinates of the corresponding second reference point of the preset region, step 109 is performed; when they are inconsistent, step 205 is performed: the shooting orientation of the camera head is changed, and the process returns to step 204.
For example, where the face region is the second rectangle described above and the preset face lies in the left subregion of the second rectangle, judging whether the face region overlaps the preset region may comprise: taking the vertex at the upper-left corner of the second rectangle as the first reference point and the vertex at the upper-left corner of the preset rectangle as the second reference point, calculating the coordinates of the two upper-left vertices respectively, and then detecting whether they are consistent. When the coordinates of the two upper-left vertices are consistent, the face region overlaps the preset region and execution continues with step 109; when they are inconsistent, the face region does not overlap the preset region, step 205 is performed, and the process returns to step 204.
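Steps 201 to 204 above can be sketched as follows, assuming axis-aligned rectangles given as (x, y, width, height) and a preset-face position already recognized; the three-way split, the function names, and the tolerance are illustrative, since the description leaves the recognition method and the exact subdivision open.

```python
def split_into_subregions(region):
    """Step 202: divide the face region into left, middle and right thirds."""
    x, y, w, h = region
    third = w / 3.0
    return [(x, y, third, h), (x + third, y, third, h), (x + 2 * third, y, third, h)]

def subregion_of(point, subregions):
    """Step 203: find the subregion whose horizontal span contains the point."""
    px, _ = point
    for sub in subregions:
        if sub[0] <= px < sub[0] + sub[2]:
            return sub
    return subregions[-1]

def overlaps_preset(face_region, preset_face_center, preset_region, tol=10.0):
    """Step 204: compare the upper-left vertex of the preset face's subregion
    with the corresponding reference point of the preset region."""
    sub = subregion_of(preset_face_center, split_into_subregions(face_region))
    dx, dy = sub[0] - preset_region[0], sub[1] - preset_region[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol

# The preset face sits in the left third, whose upper-left vertex matches
# the preset region's upper-left vertex, so the regions overlap.
print(overlaps_preset((100, 100, 300, 150), (150, 120), (100, 100, 100, 150)))
```

When the test fails, the flow changes the shooting orientation (step 205) and repeats the comparison.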
When the face region does not overlap the preset region, step 108 is performed: the shooting orientation of the camera head is changed, and the process returns to step 107.
Alternatively, in one example of changing the shooting orientation of the camera head, when the face region does not overlap the preset region, a pan-tilt head connected with the camera head may be controlled to rotate so as to change the shooting orientation of the camera head. For example, the pan-tilt head connected with the camera head may be controlled to rotate horizontally (for example, within an angular range of -45° to +45°) and/or vertically (for example, within an angular range of -45° to +45°), changing the shooting orientation of the camera head so that the face region overlaps the preset region.
For example, when the face region does not overlap the preset region, a first distance between the coordinates of a first reference point of the face region and the coordinates of a corresponding second reference point of the preset region is calculated; the rotation angle of the pan-tilt head is obtained from the first distance; and the pan-tilt head is controlled to rotate by the corresponding angle, changing the shooting orientation of the camera head. Here, the relative position of the first reference point in the face region is the same as the relative position of the second reference point in the preset region.
Alternatively, the step of obtaining the rotation angle of the pan-tilt head from the first distance may be: controlling the pan-tilt head to rotate by a third preset angle; calculating a second distance between the coordinates of the first reference point of the face region after the rotation by the third preset angle and the coordinates of the corresponding second reference point of the preset region; calculating the difference between the first distance and the second distance; and multiplying the quotient of the first distance and said difference by the third preset angle, then subtracting the third preset angle, to obtain the rotation angle of the pan-tilt head.
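The two-step angle estimate above amounts to a linear extrapolation: the trial rotation by the third preset angle shrinks the reference-point distance from the first distance to the second distance, and the formula yields the additional rotation needed beyond the trial angle. A sketch, with the function name invented for illustration:

```python
def remaining_rotation(d1, d2, trial_angle):
    """d1: reference-point distance before the trial rotation; d2: distance
    after it. Returns (d1 / (d1 - d2)) * trial_angle - trial_angle, i.e. the
    extra rotation needed beyond the trial angle, assuming the distance
    varies linearly with the rotation angle."""
    diff = d1 - d2
    if diff <= 0:
        raise ValueError("trial rotation did not reduce the distance")
    return d1 / diff * trial_angle - trial_angle

# A 5-degree trial removed 40 of the 100 units of offset, so another
# 7.5 degrees should remove the remaining 60 units.
print(remaining_rotation(100.0, 60.0, 5.0))
```

Equivalently, the result is d2 / (d1 - d2) times the trial angle: the remaining distance divided by the distance removed per degree of rotation.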
For example, where the face region is the first rectangle described above, changing the shooting orientation of the camera head may comprise: taking the vertex at the upper-left corner of the first rectangle as the first reference point and the vertex at the upper-left corner of the preset rectangle as the second reference point; when the coordinates of the two upper-left vertices are inconsistent, first calculating the coordinates of the two vertices respectively, then calculating the first distance between them from those coordinates, then obtaining the rotation angle of the pan-tilt head from the first distance, and controlling the pan-tilt head to rotate by that angle, changing the shooting orientation of the camera head so that the coordinates of the two upper-left vertices become consistent; execution then continues with step 109.
Alternatively, changing the shooting orientation of the camera head may further comprise: when the pan-tilt head connected with the camera head has been controlled to rotate to a limit angle value (for example, the maximum of its rotatable angular range) and the face region still does not overlap the preset region, prompting the user to change the shooting orientation of the camera head manually. For example, the horizontal rotation range of the pan-tilt head connected with the camera head may be -45° to +45°; when the horizontal rotation angle reaches the limit value (for example, 45°) and/or the vertical rotation angle reaches its limit value, and the face region still does not overlap the preset region, the overlap cannot be achieved by controlling the pan-tilt rotation alone, so the user is prompted so that the photographer and/or the subject can move to change the shooting orientation. After the shooting orientation has been changed, the process may return to step 107 to detect again whether the face region overlaps the preset region; once the face region overlaps the preset region, execution continues with step 109.
Alternatively, in another example of changing the shooting orientation of the camera head, when the face region does not overlap the preset region, the user may be prompted to change the shooting orientation of the camera head manually. For example, when the camera head is fixed and cannot rotate, the user may be prompted so that the photographer and/or the subject move to change the shooting orientation, so that the face region overlaps the preset region. For example, when the subject is too close to the camera head or too far off-center, the user may likewise be prompted to change the shooting orientation.
When the face region overlaps the preset region, step 109 is performed: the camera head is controlled to shoot.
For example, when the face region overlaps the preset region, the subject's position within the shooting range is suitable and the subject occupies a suitable proportion of the shooting range of the camera head, so the camera head is controlled to shoot.
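The size-then-shoot loop of steps (d) through (h) in the abstract can be sketched as a toy loop. The one-dimensional ToyCamera below is an invented stand-in, not an API from this description; it models zooming simply as scaling the apparent face-region area.

```python
class ToyCamera:
    """Hypothetical camera model: zooming scales the apparent face area."""
    def __init__(self, face_area, preset_area):
        self.face_area = face_area
        self.preset_area = preset_area
        self.shots = 0

    def zoom(self, factor):
        self.face_area *= factor  # zooming in/out scales the face-region area

    def shoot(self):
        self.shots += 1

def capture_when_sized(cam, tolerance=0.05, max_steps=50):
    """Step (f): change focal length until the areas are consistent, then
    step (h): shoot. The 5% tolerance stands in for the 'set range'."""
    for _ in range(max_steps):
        ratio = cam.face_area / cam.preset_area
        if abs(ratio - 1.0) <= tolerance:   # areas consistent within the range
            cam.shoot()                     # step (h)
            return True
        cam.zoom(1.0 / ratio)               # step (f), then retry step (d)
    return False

cam = ToyCamera(face_area=4000.0, preset_area=10000.0)
print(capture_when_sized(cam), cam.shots)
```

In the real device the overlap check of step 107 would also gate the shot; it is omitted here to keep the sketch focused on the zoom loop.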
Fig. 3 is a structure chart of a device for controlling a camera head to shoot according to the first embodiment of the present invention.
As shown in Fig. 3, the device for controlling a camera head to shoot according to the first embodiment of the present invention comprises a first face identification unit 301, a first detecting unit 302, a first computing unit 303, a judging unit 304, a zoom unit 305 and a camera head 306.
The first face identification unit 301 identifies the faces in the shooting range of the camera head. Various existing pattern recognition devices may be used here to identify the faces in the shooting range of the camera head.
For example, the device of the present invention may further comprise an acquiring unit that receives a user input controlling the camera head to obtain an image of the shooting range; the first face identification unit 301 then identifies the faces in the obtained image of the shooting range. Here, the obtained image of the shooting range may be a pre-capture image taken by the camera head before shooting. The user input may be an input for controlling the camera head to obtain the image of the shooting range, for example a touch input or a key-press input.
The first detecting unit 302 detects whether the identified faces are all within the shooting range.
For example, the first face identification unit 301 may identify the facial features of a face, and the first detecting unit 302 then judges whether the identified face is entirely within the shooting range by detecting whether its facial features are all within the shooting range. For example, if all the facial features of the face are within the shooting range, the identified face is entirely within the shooting range; if not all the facial features are within the shooting range, the identified face is not entirely within the shooting range.
For example, the first detecting unit 302 may also detect whether the faces identified in the shooting range amount to a single face. The first detecting unit 302 detects the number of facial features identified in the shooting range; when that number is greater than the number of features of a single face, there are multiple faces in the shooting range; when it is less than or equal to the number of features of a single face, there is a single face in the shooting range.
For example, when there are multiple faces in the shooting range, the first detecting unit 302 may judge whether the identified faces are all within the shooting range by detecting whether the facial features of all the faces are within the shooting range. When there is a single face in the shooting range, the first detecting unit 302 may judge whether the identified face is entirely within the shooting range by detecting whether its facial features are all within the shooting range.
Alternatively, the device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a first rotation unit; when the identified faces are not all within the shooting range, the first rotation unit changes the shooting orientation of the camera head, and the first face identification unit 301 then identifies the faces in the shooting range after the orientation change.
Alternatively, when the identified faces are not all within the shooting range, the first rotation unit may detect the relative position within the shooting range of the major organs of a face (for example, its facial features), determine from that relative position the offset direction of the shooting orientation of the camera head relative to the face, and change the shooting orientation of the camera head according to the offset direction.
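The offset-direction rule above can be sketched as follows, with the facial-feature centroid normalized to the unit frame; the 0.2 margin and the function name are hypothetical, since the description only says the offset direction is determined from the relative position of the facial features within the shooting range.

```python
def pan_direction(feature_x, feature_y, margin=0.2):
    """If the visible facial features sit near an edge of the frame, the face
    likely extends past that edge, so pan the camera toward it.
    Returns (horizontal, vertical) hints; 'hold' means no adjustment."""
    horizontal = "left" if feature_x < margin else "right" if feature_x > 1 - margin else "hold"
    vertical = "up" if feature_y < margin else "down" if feature_y > 1 - margin else "hold"
    return horizontal, vertical

print(pan_direction(0.1, 0.5))   # features near the left edge -> pan left
print(pan_direction(0.5, 0.9))   # features near the bottom -> tilt down
```

The rotation itself would then be carried out by the pan-tilt head, by a preset angle per step as described below.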
For example, when the identified faces are not all within the shooting range, the first rotation unit may change the shooting orientation of the camera head by controlling the pan-tilt head connected with the camera head to rotate. Alternatively, the first rotation unit controls the pan-tilt head to rotate by a preset angle, changing the shooting orientation of the camera head.
Since the function of the first rotation unit in changing the shooting orientation of the camera head has been described in detail in step 103, it is not repeated here.
Alternatively, the device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a second prompt unit; when the pan-tilt head connected with the camera head has been controlled by the first rotation unit to rotate to a limit angle value (for example, the maximum of its rotatable angular range) and the identified faces are still not all within the shooting range, the second prompt unit may prompt the user to change the shooting orientation of the camera head manually.
Since the function of the second prompt unit in prompting the user to change the shooting orientation of the camera head manually has been described in detail in step 103, it is not repeated here.
Alternatively, the device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a first prompt unit; when the identified faces are not all within the shooting range, the first prompt unit may prompt the user to change the shooting orientation of the camera head manually.
Since the function of the first prompt unit in prompting the user to change the shooting orientation of the camera head manually has been described in detail in step 103, it is not repeated here.
When the identified faces are all within the shooting range, the first computing unit 303 calculates the area of the face region. Various existing devices may be used here to calculate the area of the face region.
For example, when there is a single face in the shooting range, the region of that face may be taken as the face region. When there are multiple faces in the shooting range, the union of the regions of the multiple faces may be taken as the face region.
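The union rule above can be sketched with the bounding box of all face rectangles given as (x, y, width, height); taking the bounding box is one simple reading of "union", which the description does not pin down, and the function names are illustrative.

```python
def face_region(rects):
    """Single face: its own rectangle. Multiple faces: the bounding box
    enclosing all face rectangles, as one reading of their 'union'."""
    if len(rects) == 1:
        return rects[0]
    x1 = min(r[0] for r in rects)
    y1 = min(r[1] for r in rects)
    x2 = max(r[0] + r[2] for r in rects)
    y2 = max(r[1] + r[3] for r in rects)
    return (x1, y1, x2 - x1, y2 - y1)

def region_area(region):
    """Area of the face region, compared later against the preset area."""
    return region[2] * region[3]

region = face_region([(0, 0, 100, 100), (150, 50, 100, 100)])
print(region, region_area(region))  # (0, 0, 250, 150) 37500
```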
Since the function of the first computing unit 303 in calculating the area of the face region has been described in detail in step 104, it is not repeated here.
The judging unit 304 judges whether the area of the face region is consistent with the preset area.
Alternatively, the preset area is the area of the preset region; for example, a region may be chosen within the shooting range of the camera head and taken as the preset region.
Alternatively, the judging unit 304 may judge whether the area of the face region is consistent with the preset area by comparing their sizes. When the difference between the area of the face region and the preset area is greater than a set range (that is, when the area of the face region is too large or too small compared with the preset area), the area of the face region is inconsistent with the preset area; when the difference is less than or equal to the set range, the area of the face region is consistent with the preset area.
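The comparison above can be sketched as follows; the 5% band is a hypothetical value for the "set range", and the zoom-direction hint is an illustrative extension based on the sign of the difference.

```python
def compare_area(face_area, preset_area, band=0.05):
    """Areas count as consistent when their difference falls within the
    band; otherwise the sign of the difference says which way to zoom."""
    diff = face_area - preset_area
    if abs(diff) <= band * preset_area:
        return "consistent"
    return "zoom in" if diff < 0 else "zoom out"

print(compare_area(9800.0, 10000.0))   # within the band
print(compare_area(4000.0, 10000.0))   # face region too small
```

On "zoom in" or "zoom out" the zoom unit changes the focal length and the area is recomputed, repeating until the result is "consistent".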
Since the function of the judging unit 304 in judging whether the area of the face region is consistent with the preset area has been described in detail in step 105, it is not repeated here.
When the area of the face region is inconsistent with the preset area, the zoom unit 305 changes the focal length of the camera head, and the first computing unit 303 then calculates the area of the face region after the focal-length change.
Alternatively, when the area of the face region is inconsistent with the preset area, the zoom unit 305 may change the focal length of the camera head automatically (for example, by digital zoom or optical zoom). Alternatively, the device for controlling a camera head to shoot of the present invention may also comprise a prompt unit; when the area of the face region is inconsistent with the preset area, the prompt unit may prompt the user so that the photographer and/or the subject move, achieving the effect of changing the focal length of the camera head.
The device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a second detecting unit; when the area of the face region is consistent with the preset area, the second detecting unit detects whether the face region overlaps the preset region.
Alternatively, when the first detecting unit 302 detects a single face in the shooting range, the second detecting unit may judge whether the face region overlaps the preset region by detecting whether the coordinates of a first reference point of the face region are consistent with the coordinates of a corresponding second reference point of the preset region.
Since the function of the second detecting unit in detecting whether the face region overlaps the preset region when there is a single face in the shooting range has been described in detail in step 107, it is not repeated here.
Alternatively, when the first detecting unit 302 detects multiple faces in the shooting range, the second detecting unit takes the union of the regions of the multiple faces as the face region, identifies the preset face in the face region, divides the face region into multiple subregions, determines the subregion in which the preset face is located, and then detects whether the coordinates of the first reference point of that subregion are consistent with the coordinates of the corresponding second reference point of the preset region, so as to judge whether the face region overlaps the preset region.
Alternatively, the second detecting unit may obtain the preset face by shooting a photo or by choosing a locally stored picture, and store the preset face in the local storage unit.
Alternatively, the device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a second rotation unit; when the coordinates of the first reference point of the subregion in which the preset face is located are consistent with the coordinates of the corresponding second reference point of the preset region, the camera head 306 shoots; when they are inconsistent, the second rotation unit changes the shooting orientation of the camera head, and the second detecting unit then detects whether, after the orientation change, the coordinates of the first reference point of the subregion in which the preset face is located are consistent with the coordinates of the corresponding second reference point of the preset region.
Since the function of the second detecting unit in detecting whether the coordinates of the first reference point of the subregion in which the preset face is located are consistent with the coordinates of the corresponding second reference point of the preset region has been described in detail in steps 201 to 204 of Fig. 2, it is not repeated here.
Alternatively, when the face region does not overlap the preset region, the second rotation unit may change the shooting orientation of the camera head; the second detecting unit then detects whether the face region in the shooting range after the orientation change overlaps the preset region, and when the face region overlaps the preset region, the camera head 306 shoots.
Alternatively, in one example of changing the shooting orientation of the camera head, when the face region does not overlap the preset region, the second rotation unit may change the shooting orientation of the camera head by controlling the pan-tilt head connected with the camera head to rotate.
For example, when the face region does not overlap the preset region, the second rotation unit calculates the first distance between the coordinates of the first reference point of the face region and the coordinates of the corresponding second reference point of the preset region, obtains the rotation angle of the pan-tilt head from the first distance, and controls the pan-tilt head to rotate by the corresponding angle, changing the shooting orientation of the camera head. Here, the relative position of the first reference point in the face region is the same as the relative position of the second reference point in the preset region.
Alternatively, the second rotation unit controls the pan-tilt head to rotate by the third preset angle, then calculates the second distance between the coordinates of the first reference point of the face region after the rotation by the third preset angle and the coordinates of the corresponding second reference point of the preset region, then calculates the difference between the first distance and the second distance, and finally multiplies the quotient of the first distance and said difference by the third preset angle and subtracts the third preset angle, obtaining the rotation angle of the pan-tilt head.
Since the function of the second rotation unit in changing the shooting orientation of the camera head has been described in detail in step 108, it is not repeated here.
Alternatively, the device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a fourth prompt unit; when the pan-tilt head connected with the camera head has been controlled by the second rotation unit to rotate to a limit angle value (for example, the maximum of its rotatable angular range) and the face region still does not overlap the preset region, the fourth prompt unit may prompt the user to change the shooting orientation of the camera head manually.
Since the function of the fourth prompt unit in prompting the user to change the shooting orientation of the camera head manually has been described in detail in step 108, it is not repeated here.
Alternatively, in another example of changing the shooting orientation of the camera head, the device for controlling a camera head to shoot according to an embodiment of the present invention may further comprise a third prompt unit; when the face region does not overlap the preset region, the third prompt unit may prompt the user to change the shooting orientation of the camera head manually.
Since the function of the third prompt unit in prompting the user to change the shooting orientation of the camera head manually has been described in detail in step 108, it is not repeated here.
When the face region overlaps the preset region, the camera head 306 shoots.
For example, when the face region overlaps the preset region, the subject's position within the shooting range is suitable and the subject occupies a suitable proportion of the shooting range of the camera head, so the camera head 306 shoots.
The present invention proposes a method and device for controlling a camera head to shoot, in which the shooting orientation of the camera head is changed either by controlling a pan-tilt head connected with the camera head to rotate or by prompting the user to change the shooting orientation manually. By judging whether the subject is within the shooting range of the camera head and whether the subject's position within the shooting range is suitable, the shooting orientation and focal length of the camera head are changed, and when the subject's position within the shooting range is suitable, the camera head shoots automatically. The method and device of the present invention make shooting convenient for the user and improve the user experience.
It should be understood that, in the method and device for controlling a camera head to shoot of the present invention, the case where the coordinates of two points are judged consistent does not necessarily mean that the coordinate values of the two points are exactly equal; it may also mean that the distance between the two points is within a set range. Likewise, the case where the coordinates of two points are judged inconsistent does not necessarily mean that the coordinate values of the two points are entirely unequal; it may also mean that the distance between the two points is outside the set range.
In addition, it should be understood that the units in the device for controlling a camera head to shoot of the present invention can be implemented as hardware components. According to the processing performed by the defined units, those skilled in the art may implement the units using, for example, a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).
In addition, the method for controlling a camera head to shoot of the present invention may be implemented as computer code in a computer-readable recording medium. Those skilled in the art can realize the computer code according to the description of the above method. When the computer code is executed in a computer, the above method of the present invention is realized.
Although the present invention has been particularly shown and described with reference to its exemplary embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the claims.
Claims (32)
1. A method for controlling a camera head to shoot, the method comprising:
(a) identifying the faces in the shooting range of the camera head;
(b) detecting whether the identified faces are all within the shooting range;
(c) when the identified faces are all within the shooting range, performing step (d): calculating the area of the face region;
(e) judging whether the area of the face region is consistent with a preset area;
(f) when the area of the face region is inconsistent with the preset area, changing the focal length of the camera head and returning to step (d);
(g) when the area of the face region is consistent with the preset area, performing step (h): controlling the camera head to shoot.
2. The method according to claim 1, further comprising step (i): when the identified faces are not all within the shooting range, changing the shooting orientation of the camera head and returning to step (a).
3. The method according to claim 2, wherein step (i) comprises:
when the identified faces are not all within the shooting range, detecting the relative position within the shooting range of the major organs of a face in the shooting range;
determining, from the relative position of the major organs of the face within the shooting range, the offset direction of the shooting orientation of the camera head relative to the face;
changing the shooting orientation of the camera head according to the offset direction.
4. The method according to claim 2, wherein the step of changing the shooting orientation of the camera head comprises: changing the shooting orientation of the camera head by controlling a pan-tilt head connected with the camera head to rotate.
5. The method according to claim 2, wherein the step of changing the shooting orientation of the camera head comprises: prompting the user to change the shooting orientation of the camera head manually.
6. The method according to claim 4, wherein the step of changing the shooting orientation of the camera head comprises: controlling the pan-tilt head to rotate by a preset angle, changing the shooting orientation of the camera head.
7. The method according to claim 4, wherein the step of changing the shooting orientation of the camera head comprises: when the pan-tilt head has been controlled to rotate to a limit angle value and the identified faces are still not all within the shooting range, prompting the user to change the shooting orientation of the camera head manually, wherein the limit value is the maximum of the rotation angle range of the pan-tilt head.
8. method according to claim 1, also comprises: in the time that the area of human face region is consistent with preset area, and execution step:
(j) whether detect human face region overlaps with predeterminable area;
(k), in the time that human face region does not overlap with predeterminable area, change the shooting orientation of camera head, and return to execution step (j);
(l) in the time that human face region overlaps with predeterminable area, execution step (h).
9. method according to claim 8, wherein, the step that changes the shooting orientation of camera head comprises: by controlling the The Cloud Terrace rotation being connected with camera head, change the shooting orientation of camera head.
10. method according to claim 8, wherein, the step that changes the shooting orientation of camera head comprises: prompting user manually changes the shooting orientation of camera head.
11. methods according to claim 9, wherein, the step that changes the shooting orientation of camera head comprises: the first distance on the coordinate of the first reference point on calculating human face region and predeterminable area between the coordinate of the second corresponding reference point, obtain the angle of The Cloud Terrace rotation according to the first distance, control The Cloud Terrace and rotate corresponding angle, change the shooting orientation of camera head, wherein, the relative position of the first reference point in human face region is identical with the relative position of the second reference point in predeterminable area.
12. methods according to claim 11, wherein, the step that obtains the angle of The Cloud Terrace rotation according to the first distance is:
Control The Cloud Terrace rotation the 3rd preset angles;
Second distance on the coordinate of calculating first reference point on human face region after control The Cloud Terrace rotation the 3rd preset angles and predeterminable area between the coordinate of the second corresponding reference point;
Calculating the first distance is poor with second distance;
The first distance is multiplied by the 3rd preset angles with the business of described difference and deducts again the 3rd preset angles, obtain the angle of The Cloud Terrace rotation.
13. methods according to claim 9, wherein, the step that changes the shooting orientation of camera head comprises: when controlling angle value of reaching capacity of described The Cloud Terrace rotation, and when human face region does not still overlap with predeterminable area, prompting user manually changes the shooting orientation of camera head, wherein, described limiting value is the maximum of the angular range of described The Cloud Terrace rotation.
14. methods according to claim 11, wherein, detecting human face region comprises with the step whether predeterminable area overlaps: when a face detected in coverage time, whether the coordinate by detecting the first reference point on human face region is consistent with the coordinate of the second reference point corresponding on predeterminable area, judges whether human face region overlaps with predeterminable area.
15. The method according to claim 11, wherein the step of detecting whether the face region coincides with the preset region comprises: when multiple faces are detected within the photographing range, taking the union of the face regions of the multiple faces as the face region; and
(m) identifying a preset face from the face region;
(n) dividing the face region into multiple subregions;
(o) determining the subregion in which the preset face is located; and
(p) detecting whether the coordinates of the first reference point of the subregion in which the preset face is located are consistent with the coordinates of the corresponding second reference point on the preset region, so as to judge whether the face region coincides with the preset region.
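Steps (m)–(p) can be illustrated with axis-aligned bounding boxes. The sketch below (assuming each region's reference point is its center, a choice the claims leave open) forms the union region for multiple faces and judges coincidence by comparing reference-point coordinates:

```python
def union_region(face_boxes):
    """Bounding box covering all detected face boxes, each (left, top, right, bottom)."""
    lefts, tops, rights, bottoms = zip(*face_boxes)
    return (min(lefts), min(tops), max(rights), max(bottoms))

def reference_point(box):
    """Illustrative reference point: the region's center."""
    left, top, right, bottom = box
    return ((left + right) / 2, (top + bottom) / 2)

def regions_coincide(face_region, preset_region, tolerance=0):
    """Judge coincidence by comparing reference-point coordinates (claim step (p))."""
    (fx, fy) = reference_point(face_region)
    (px, py) = reference_point(preset_region)
    return abs(fx - px) <= tolerance and abs(fy - py) <= tolerance
```

A nonzero `tolerance` stands in for "consistent coordinates" when exact pixel equality is too strict.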
16. The method according to claim 15, wherein the preset face is obtained by photographing or by selecting a locally stored picture, and is stored in a local storage unit.
17. A device for controlling a photographing device to take a picture, the device comprising:
a first face identification unit, which identifies faces within the photographing range of the photographing device;
a first detection unit, which detects whether the identified faces are all within the photographing range;
a first calculation unit, which calculates the area of the face region when the identified faces are all within the photographing range;
a judging unit, which judges whether the area of the face region is consistent with a preset area;
a zoom unit, which changes the focal length of the photographing device when the area of the face region is inconsistent with the preset area, whereupon the first calculation unit recalculates the area of the face region after the focal length has been changed; and
the photographing device, which takes the picture when the area of the face region is consistent with the preset area.
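The cooperation of the units in claim 17 can be pictured as a simple feedback loop. The sketch below assumes a hypothetical `camera` object whose methods (`detect_faces`, `face_region_area`, `zoom`, `rotate_toward_faces`, `shoot`) stand in for the claimed units; none of these names come from the patent:

```python
def capture_with_auto_zoom(camera, preset_area, area_tolerance=0.05, max_steps=20):
    """Loop until the face-region area is consistent with preset_area, then shoot."""
    for _ in range(max_steps):
        faces = camera.detect_faces()  # faces fully inside the photographing range
        if not faces:
            # Not all faces in range: change the shooting orientation first.
            camera.rotate_toward_faces()
            continue
        area = camera.face_region_area()
        if abs(area - preset_area) <= area_tolerance * preset_area:
            # Area consistent with the preset area: take the picture.
            return camera.shoot()
        # Otherwise change the focal length and re-measure.
        camera.zoom("in" if area < preset_area else "out")
    raise RuntimeError("could not match the preset area within max_steps")
```

The zoom direction follows the claimed feedback: zoom in while the face region is smaller than the preset area, zoom out while it is larger, and shoot once the two are consistent within a tolerance.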
18. The device according to claim 17, further comprising a first rotation unit, which changes the shooting orientation of the photographing device when the identified faces are not all within the photographing range, whereupon the first face identification unit identifies the faces within the photographing range after the shooting orientation has been changed.
19. The device according to claim 18, wherein, when the identified faces are not all within the photographing range, the first rotation unit detects the relative positions within the photographing range of the major organs of the faces that can be detected within the photographing range, determines from those relative positions the offset direction of the shooting orientation of the photographing device relative to the faces, and changes the shooting orientation of the photographing device according to the offset direction.
20. The device according to claim 18, wherein the first rotation unit changes the shooting orientation of the photographing device by controlling the rotation of a pan-tilt head connected to the photographing device.
21. The device according to claim 17, further comprising a first prompting unit, which prompts the user to change the shooting orientation of the photographing device manually when the identified faces are not all within the photographing range.
22. The device according to claim 20, wherein the first rotation unit controls the pan-tilt head to rotate by a preset angle, thereby changing the shooting orientation of the photographing device.
23. The device according to claim 20, further comprising a second prompting unit, which prompts the user to change the shooting orientation of the photographing device manually when the first rotation unit has controlled the pan-tilt head to rotate to a limit angle value and the faces are still not all within the photographing range, wherein the limit value is the maximum of the angular range through which the pan-tilt head can rotate.
24. The device according to claim 17, further comprising:
a second detection unit, which detects whether the face region coincides with a preset region when the area of the face region is consistent with the preset area; and
a second rotation unit, which changes the shooting orientation of the photographing device when the face region does not coincide with the preset region, whereupon the second detection unit detects whether the face region within the photographing range after the change coincides with the preset region,
wherein the photographing device takes the picture when the face region coincides with the preset region.
25. The device according to claim 24, wherein the second rotation unit changes the shooting orientation of the photographing device by controlling the rotation of a pan-tilt head connected to the photographing device.
26. The device according to claim 24, further comprising a third prompting unit, which prompts the user to change the shooting orientation of the photographing device manually when the face region does not coincide with the preset region.
27. The device according to claim 25, wherein the second rotation unit calculates a first distance between the coordinates of a first reference point on the face region and the coordinates of a corresponding second reference point on the preset region, obtains a rotation angle of the pan-tilt head according to the first distance, and controls the pan-tilt head to rotate by the corresponding angle so as to change the shooting orientation of the photographing device, wherein the relative position of the first reference point within the face region is the same as the relative position of the second reference point within the preset region.
28. The device according to claim 27, wherein the second rotation unit controls the pan-tilt head to rotate by a third preset angle, then calculates a second distance between the coordinates of the first reference point on the face region after the rotation and the coordinates of the corresponding second reference point on the preset region, then calculates the difference between the first distance and the second distance, and finally multiplies the quotient of the first distance and said difference by the third preset angle and subtracts the third preset angle, thereby obtaining the rotation angle of the pan-tilt head.
29. The device according to claim 24, further comprising a fourth prompting unit, which prompts the user to change the shooting orientation of the photographing device manually when the second rotation unit has controlled the pan-tilt head to rotate to a limit angle value and the face region still does not coincide with the preset region, wherein the limit value is the maximum of the angular range through which the pan-tilt head can rotate.
30. The device according to claim 27, wherein, when the first detection unit detects one face within the photographing range, the second detection unit judges whether the face region coincides with the preset region by detecting whether the coordinates of the first reference point on the face region are consistent with the coordinates of the corresponding second reference point on the preset region.
31. The device according to claim 27, wherein, when the first detection unit detects multiple faces within the photographing range, the second detection unit takes the union of the face regions of the multiple faces as the face region, identifies a preset face from the face region, divides the face region into multiple subregions, determines the subregion in which the preset face is located, and then detects whether the coordinates of the first reference point of that subregion are consistent with the coordinates of the corresponding second reference point on the preset region, so as to judge whether the face region coincides with the preset region.
32. The device according to claim 31, wherein the second detection unit obtains the preset face by photographing or by selecting a locally stored picture, and stores the preset face in a local storage unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410176036.5A CN104104867B (en) | 2014-04-28 | 2014-04-28 | The method and apparatus that control camera device is shot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104104867A (en) | 2014-10-15 |
CN104104867B CN104104867B (en) | 2017-12-29 |
Family
ID=51672636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410176036.5A Active CN104104867B (en) | 2014-04-28 | 2014-04-28 | The method and apparatus that control camera device is shot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104104867B (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104883506A (en) * | 2015-06-26 | 2015-09-02 | 重庆智韬信息技术中心 | Self-service shooting method based on face identification technology |
CN105049730A (en) * | 2015-08-20 | 2015-11-11 | 天脉聚源(北京)传媒科技有限公司 | Image pick-up method and device |
CN105187719A (en) * | 2015-08-21 | 2015-12-23 | 深圳市金立通信设备有限公司 | Shooting method and terminal |
CN105227832A (en) * | 2015-09-09 | 2016-01-06 | 厦门美图之家科技有限公司 | A kind of self-timer method based on critical point detection, self-heterodyne system and camera terminal |
CN105611142A (en) * | 2015-09-11 | 2016-05-25 | 广东欧珀移动通信有限公司 | Shooting method and apparatus thereof |
CN105812671A (en) * | 2016-05-13 | 2016-07-27 | 广州富勤信息科技有限公司 | Method and equipment for automatically shooting scenic spot photo |
CN106231200A (en) * | 2016-08-29 | 2016-12-14 | 广东欧珀移动通信有限公司 | A kind of photographic method and device |
CN106295610A (en) * | 2016-08-22 | 2017-01-04 | 歌尔股份有限公司 | The photographic method of the smart machine being equipped on unmanned plane The Cloud Terrace and system |
CN107566734A (en) * | 2017-09-29 | 2018-01-09 | 努比亚技术有限公司 | Portrait is taken pictures intelligent control method, terminal and computer-readable recording medium |
CN107911616A (en) * | 2017-12-26 | 2018-04-13 | Tcl移动通信科技(宁波)有限公司 | A kind of camera automatic focusing method, storage device and mobile terminal |
CN108133166A (en) * | 2016-11-30 | 2018-06-08 | 中兴通讯股份有限公司 | A kind of method and device of show staff's state |
CN108629260A (en) * | 2017-03-17 | 2018-10-09 | 北京旷视科技有限公司 | Live body verification method and device and storage medium |
CN108875473A (en) * | 2017-06-29 | 2018-11-23 | 北京旷视科技有限公司 | Living body verification method, device and system and storage medium |
CN109091118A (en) * | 2017-06-21 | 2018-12-28 | 深圳大森智能科技有限公司 | A kind of sign data monitoring system, monitoring method and terminal |
CN109189885A (en) * | 2018-08-31 | 2019-01-11 | 广东小天才科技有限公司 | A kind of real-time control method and smart machine based on smart machine camera |
CN109361865A (en) * | 2018-11-21 | 2019-02-19 | 维沃移动通信(杭州)有限公司 | A kind of image pickup method and terminal |
WO2019033411A1 (en) * | 2017-08-18 | 2019-02-21 | 华为技术有限公司 | Panoramic shooting method and device |
CN110933293A (en) * | 2019-10-31 | 2020-03-27 | 努比亚技术有限公司 | Shooting method, terminal and computer readable storage medium |
CN111147749A (en) * | 2019-12-31 | 2020-05-12 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method, photographing device, terminal and storage medium |
CN111277752A (en) * | 2020-01-22 | 2020-06-12 | Oppo广东移动通信有限公司 | Prompting method and device, storage medium and electronic equipment |
CN111970454A (en) * | 2020-09-10 | 2020-11-20 | 青岛鳍源创新科技有限公司 | Shot picture display method, device, equipment and storage medium |
WO2021056442A1 (en) * | 2019-09-27 | 2021-04-01 | 深圳市大疆创新科技有限公司 | Composition method and system for photographing device, and storage medium |
CN113079320A (en) * | 2021-04-13 | 2021-07-06 | 浙江科技学院 | Sectional type multifunctional camera shooting method based on whole body mirror |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100091140A1 (en) * | 2008-10-10 | 2010-04-15 | Chi Mei Communication Systems, Inc. | Electronic device and method for capturing self portrait images |
CN101731004A (en) * | 2007-04-23 | 2010-06-09 | 夏普株式会社 | Image picking-up device, computer readable recording medium including recorded program for control of the device, and control method |
CN102111541A (en) * | 2009-12-28 | 2011-06-29 | 索尼公司 | Image pickup control apparatus, image pickup control method and program |
CN102413282A (en) * | 2011-10-26 | 2012-04-11 | 惠州Tcl移动通信有限公司 | Self-shooting guidance method and equipment |
CN103475849A (en) * | 2013-09-22 | 2013-12-25 | 广东欧珀移动通信有限公司 | Method and device for adjusting shooting angle of camera during video call |
2014-04-28: Application CN201410176036.5A filed; granted as CN104104867B (en), status Active.
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101731004A (en) * | 2007-04-23 | 2010-06-09 | 夏普株式会社 | Image picking-up device, computer readable recording medium including recorded program for control of the device, and control method |
US20100091140A1 (en) * | 2008-10-10 | 2010-04-15 | Chi Mei Communication Systems, Inc. | Electronic device and method for capturing self portrait images |
CN102111541A (en) * | 2009-12-28 | 2011-06-29 | 索尼公司 | Image pickup control apparatus, image pickup control method and program |
CN102413282A (en) * | 2011-10-26 | 2012-04-11 | 惠州Tcl移动通信有限公司 | Self-shooting guidance method and equipment |
CN103475849A (en) * | 2013-09-22 | 2013-12-25 | 广东欧珀移动通信有限公司 | Method and device for adjusting shooting angle of camera during video call |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104883506B (en) * | 2015-06-26 | 2021-07-02 | 重庆智韬信息技术中心 | Self-service shooting method based on face recognition technology |
CN104883506B9 (en) * | 2015-06-26 | 2021-09-03 | 重庆智韬信息技术中心 | Self-service shooting method based on face recognition technology |
CN104883506A (en) * | 2015-06-26 | 2015-09-02 | 重庆智韬信息技术中心 | Self-service shooting method based on face identification technology |
CN105049730A (en) * | 2015-08-20 | 2015-11-11 | 天脉聚源(北京)传媒科技有限公司 | Image pick-up method and device |
CN105187719A (en) * | 2015-08-21 | 2015-12-23 | 深圳市金立通信设备有限公司 | Shooting method and terminal |
CN105227832A (en) * | 2015-09-09 | 2016-01-06 | 厦门美图之家科技有限公司 | A kind of self-timer method based on critical point detection, self-heterodyne system and camera terminal |
CN105227832B (en) * | 2015-09-09 | 2018-08-10 | 厦门美图之家科技有限公司 | A kind of self-timer method, self-heterodyne system and camera terminal based on critical point detection |
CN105611142B (en) * | 2015-09-11 | 2018-03-27 | 广东欧珀移动通信有限公司 | A kind of photographic method and device |
CN105611142A (en) * | 2015-09-11 | 2016-05-25 | 广东欧珀移动通信有限公司 | Shooting method and apparatus thereof |
CN105812671A (en) * | 2016-05-13 | 2016-07-27 | 广州富勤信息科技有限公司 | Method and equipment for automatically shooting scenic spot photo |
CN105812671B (en) * | 2016-05-13 | 2018-11-20 | 广州富勤信息科技有限公司 | A kind of method and apparatus automatically snapping sight spot photograph |
WO2018036040A1 (en) * | 2016-08-22 | 2018-03-01 | 歌尔股份有限公司 | Photographing method and system of smart device mounted on cradle head of unmanned aerial vehicle |
CN106295610A (en) * | 2016-08-22 | 2017-01-04 | 歌尔股份有限公司 | The photographic method of the smart machine being equipped on unmanned plane The Cloud Terrace and system |
CN106231200A (en) * | 2016-08-29 | 2016-12-14 | 广东欧珀移动通信有限公司 | A kind of photographic method and device |
CN108133166A (en) * | 2016-11-30 | 2018-06-08 | 中兴通讯股份有限公司 | A kind of method and device of show staff's state |
CN108629260B (en) * | 2017-03-17 | 2022-02-08 | 北京旷视科技有限公司 | Living body verification method and apparatus, and storage medium |
CN108629260A (en) * | 2017-03-17 | 2018-10-09 | 北京旷视科技有限公司 | Live body verification method and device and storage medium |
CN109091118A (en) * | 2017-06-21 | 2018-12-28 | 深圳大森智能科技有限公司 | A kind of sign data monitoring system, monitoring method and terminal |
CN108875473A (en) * | 2017-06-29 | 2018-11-23 | 北京旷视科技有限公司 | Living body verification method, device and system and storage medium |
US11108953B2 (en) | 2017-08-18 | 2021-08-31 | Huawei Technologies Co., Ltd. | Panoramic photo shooting method and apparatus |
WO2019033411A1 (en) * | 2017-08-18 | 2019-02-21 | 华为技术有限公司 | Panoramic shooting method and device |
CN107566734A (en) * | 2017-09-29 | 2018-01-09 | 努比亚技术有限公司 | Portrait is taken pictures intelligent control method, terminal and computer-readable recording medium |
CN107566734B (en) * | 2017-09-29 | 2020-03-17 | 努比亚技术有限公司 | Intelligent control method, terminal and computer readable storage medium for portrait photographing |
WO2019129020A1 (en) * | 2017-12-26 | 2019-07-04 | 捷开通讯(深圳)有限公司 | Automatic focusing method of camera, storage device and mobile terminal |
CN107911616A (en) * | 2017-12-26 | 2018-04-13 | Tcl移动通信科技(宁波)有限公司 | A kind of camera automatic focusing method, storage device and mobile terminal |
CN109189885A (en) * | 2018-08-31 | 2019-01-11 | 广东小天才科技有限公司 | A kind of real-time control method and smart machine based on smart machine camera |
CN109361865A (en) * | 2018-11-21 | 2019-02-19 | 维沃移动通信(杭州)有限公司 | A kind of image pickup method and terminal |
WO2021056442A1 (en) * | 2019-09-27 | 2021-04-01 | 深圳市大疆创新科技有限公司 | Composition method and system for photographing device, and storage medium |
CN110933293A (en) * | 2019-10-31 | 2020-03-27 | 努比亚技术有限公司 | Shooting method, terminal and computer readable storage medium |
CN111147749A (en) * | 2019-12-31 | 2020-05-12 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method, photographing device, terminal and storage medium |
CN111277752B (en) * | 2020-01-22 | 2021-08-31 | Oppo广东移动通信有限公司 | Prompting method and device, storage medium and electronic equipment |
CN111277752A (en) * | 2020-01-22 | 2020-06-12 | Oppo广东移动通信有限公司 | Prompting method and device, storage medium and electronic equipment |
CN111970454A (en) * | 2020-09-10 | 2020-11-20 | 青岛鳍源创新科技有限公司 | Shot picture display method, device, equipment and storage medium |
CN113079320A (en) * | 2021-04-13 | 2021-07-06 | 浙江科技学院 | Sectional type multifunctional camera shooting method based on whole body mirror |
CN113079320B (en) * | 2021-04-13 | 2022-03-11 | 浙江科技学院 | Sectional type multifunctional camera shooting method based on whole body mirror |
Also Published As
Publication number | Publication date |
---|---|
CN104104867B (en) | 2017-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104104867A (en) | Method for controlling image photographing device for photographing and device thereof | |
US10841551B2 (en) | User feedback for real-time checking and improving quality of scanned image | |
EP3092603B1 (en) | Dynamic updating of composite images | |
US10222877B2 (en) | Method and apparatus for presenting panoramic photo in mobile terminal, and mobile terminal | |
CN107770452B (en) | Photographing method, terminal and related medium product | |
US9986155B2 (en) | Image capturing method, panorama image generating method and electronic apparatus | |
US9900500B2 (en) | Method and apparatus for auto-focusing of an photographing device | |
US11210796B2 (en) | Imaging method and imaging control apparatus | |
RU2624569C2 (en) | Image displaying method and device | |
CN104754216A (en) | Photographing method and device | |
WO2017008353A1 (en) | Capturing method and user terminal | |
WO2015104236A1 (en) | Adaptive camera control for reducing motion blur during real-time image capture | |
CN105915803B (en) | Photographing method and system based on sensor | |
WO2018228466A1 (en) | Focus region display method and apparatus, and terminal device | |
WO2017144899A1 (en) | Depth of field processing | |
WO2015103745A1 (en) | An apparatus and associated methods for image capture | |
KR20220054157A (en) | Method of providing photographing guide and system therefor | |
US20090202180A1 (en) | Rotation independent face detection | |
TWI737588B (en) | System and method of capturing image | |
CN107925724B (en) | Technique for supporting photographing in device having camera and device thereof | |
TWI530747B (en) | Portable electronic devices and methods for image extraction | |
WO2018072179A1 (en) | Iris recognition-based image preview method and device | |
CN105472232B (en) | Image acquisition method and electronic device | |
TWI569641B (en) | Image capturing method and electronic apparatus | |
EP2421272A2 (en) | Apparatus and method for displaying three-dimensional (3D) object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||