WO2017096861A1 - Method and apparatus for taking a photo - Google Patents
Method and apparatus for taking a photo
- Publication number
- WO2017096861A1 (PCT application No. PCT/CN2016/088969)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature value
- determining
- lip
- degree
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Definitions
- Embodiments of the present disclosure relate to smart terminal technologies, for example, to a method and apparatus for taking a photo.
- At present, when a user takes a photo, a person is usually the main subject of the picture, and the person's expression directly affects the result.
- In the course of making the present application, the inventor found that the user usually cannot capture the exact moment when the person's expression is at its best, so the result fails to meet the user's expectations. For example, when photographing a smiling person, the user often cannot tell at which moment the smile is best, which may force the user to shoot repeatedly. This is especially true when photographing a baby's smile: because the best moment of the baby's smile is hard to predict, that moment is frequently missed. Sometimes, to obtain one best photo, the user has to shoot many times and then select the most satisfactory photo from the many taken, which increases the time cost of shooting and degrades the user experience.
- The present disclosure provides a method and apparatus for taking a photo that can automatically recognize a user's expression for shooting and obtain the photo with the best expression.
- an embodiment of the present disclosure provides a method for taking a photo, including:
- generating a preview image by acquiring a shooting scene in real time through a camera; determining a facial feature value of the face image in each frame of the preview image; and, when the facial feature value satisfies a set shooting condition, photographing the shooting scene to obtain a final image.
- an embodiment of the present disclosure further provides an apparatus for taking a photo, the apparatus comprising:
- a preview image generating unit, configured to generate a preview image by acquiring a shooting scene in real time through a camera;
- a facial feature value determining unit, configured to determine a facial feature value of the face image in each frame of the preview image; and
- a final image acquiring unit, configured to photograph the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition.
- an embodiment of the present disclosure further provides a terminal with a photographing function, the terminal including at least one processor and a memory storing a program executable by the at least one processor, the program including:
- a preview image generating unit, configured to generate a preview image by acquiring a shooting scene in real time through a camera;
- a facial feature value determining unit, configured to determine a facial feature value of the face image in each frame of the preview image; and
- a final image acquiring unit, configured to photograph the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition.
- an embodiment of the present disclosure provides a non-volatile storage medium storing computer-executable instructions configured to perform a method of taking a photo in any of the embodiments of the present disclosure.
- The present disclosure generates a preview image by acquiring a shooting scene in real time through a camera, determines a facial feature value of the face image in each frame of the preview image, and photographs the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition.
- The present disclosure solves the problem that the shooting opportunity is missed because the best moment of a person's expression cannot be predicted, achieves the purpose of automatically recognizing the user's expression for shooting and obtaining the photo with the best expression, and thereby improves photographing efficiency and the user experience.
- FIG. 1 is a flowchart of a method for taking a photo according to Embodiment 1 of the present disclosure
- FIG. 2 is a flowchart of a method for taking a photo in Embodiment 2 of the present disclosure;
- FIG. 3 is a flowchart of a method for taking a photo in Embodiment 3 of the present disclosure;
- FIG. 4 is a schematic structural diagram of an apparatus for taking a photo in Embodiment 4 of the present disclosure; and
- FIG. 5 is a schematic diagram of the hardware structure of a terminal according to an embodiment of the present disclosure.
- FIG. 1 is a flowchart of a method for taking a photo provided in Embodiment 1 of the present disclosure.
- This embodiment is applicable to capturing a photo at the best moment of a person's smile.
- The method may be performed by a smart terminal equipped with an apparatus for capturing a photo at the best moment of a person's smile.
- The method includes step 110, step 120 and step 130.
- In step 110, a preview image is generated by acquiring the shooting scene in real time through the camera.
- The user selects a shooting scene and aims the camera of the smart terminal at it.
- According to the principle of optical imaging, the shooting scene is imaged on the photosensitive element of the camera through the camera lens.
- The photosensitive element converts the optical signal into an electrical signal and sends it to a controller inside the smart terminal.
- A preview image of the shooting scene is generated by the controller, and the display screen of the smart terminal is controlled to display the preview image. Since the people and/or objects in the shooting scene selected by the user are not static, the smart terminal does not acquire only one frame of the shooting scene; instead, it acquires the shooting scene in real time at a set frequency and generates preview images for display.
- In step 120, a facial feature value of the face image in each frame of the preview image is determined.
- The facial feature values may include a motion feature value of the eyebrows, a motion feature value of the eyes, or a lip motion feature value. Since a person's emotion can be characterized by facial features, the person's emotional state can be expressed by set values of the facial feature values. For example, a smile can be indicated by raised mouth corners, parted lips and/or narrowed eyes, and anger by drooping mouth corners, closed lips and/or widened eyes.
- The smart terminal performs face recognition on each frame of the preview image to determine whether the preview image contains a face. If it does, the position of the face is determined.
- The smart terminal then uses an image processing algorithm to determine the lip contour in each frame of the preview image that contains a face, and determines a lip motion feature value from the lip contour. An eye contour may likewise be determined by an image processing algorithm, and a motion feature value of the eyes determined from the eye contour.
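As an illustrative sketch (not from the patent), lip motion feature values such as the stretch and opening of the lips can be derived from a detected lip contour; the function name and the face-width normalisation below are assumptions:

```python
import numpy as np

def lip_feature_values(lip_contour, face_width):
    """Compute simple lip motion feature values from an (N, 2) array of
    lip-contour points (x, y), normalised by the face width so the values
    are comparable across frames and subjects (a hypothetical helper)."""
    xs, ys = lip_contour[:, 0], lip_contour[:, 1]
    stretch = (xs.max() - xs.min()) / face_width   # mouth-corner spread
    opening = (ys.max() - ys.min()) / face_width   # vertical lip opening
    return stretch, opening

# A toy contour: mouth corners at x = 40 and 100, lips spanning y = 58..72.
contour = np.array([[40, 65], [100, 65], [70, 58], [70, 72]], dtype=float)
stretch, opening = lip_feature_values(contour, face_width=120.0)
```

Normalising by face width keeps the values stable when the subject moves toward or away from the camera.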
- In step 130, when the facial feature value satisfies the set shooting condition, the shooting scene is photographed to obtain a final image.
- The shooting condition may be a statistical value of the facial feature values corresponding to the expression the user wants to capture. Such statistics can be obtained by analysing a large amount of facial expression data, and the shooting condition may be set to the facial feature value at the moment an emotion first appears on the face, for example the lip motion feature value at the moment the user just starts to smile.
- The smart terminal matches the facial feature value against the set shooting condition. When the facial feature value satisfies the set shooting condition, the smart terminal continuously captures a preset number of frames of the shooting scene within a set time length, and determines the image with the largest expression feature value among the continuously shot images to be the final image. For example, if the user wants to capture the moment when a person's smile is at its best, the camera is aimed at the person using the method of this embodiment, the camera automatically focuses on the person's face, and the person's lip motion feature value is acquired and matched against the set shooting condition. If the preset shooting condition is the lip motion feature value at the moment the user just starts to smile, the terminal determines whether the lip motion feature value exceeds the set shooting condition; if so, it starts the continuous shooting function and shoots the person continuously for one minute to obtain multiple images of the person.
- The smart terminal determines the expression feature value of the person in each of the captured images according to the lip motion feature value, and takes the image with the largest expression feature value as the final image.
- The smart terminal saves the final image and deletes the remaining continuously shot images. Since the final image has the largest expression feature value, that frame captures the expression best, and deleting the remaining images avoids wasting storage space on photos with poorer results.
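The trigger-then-select flow described above can be sketched as follows; `capture_final_image` and its callback parameters are hypothetical names, not part of the patent:

```python
def capture_final_image(preview_stream, satisfies_condition, burst_shoot, expression_value):
    """Watch preview frames until one satisfies the set shooting condition,
    then burst-shoot and keep only the image with the largest expression
    feature value (the remaining burst frames would be discarded)."""
    for frame in preview_stream:
        if satisfies_condition(frame):
            shots = burst_shoot()           # e.g. N frames over a set duration
            return max(shots, key=expression_value)
    return None                             # condition never met

# Toy driver: frames are plain numbers; the condition is "value > 0.3";
# the burst returns three candidate shots scored by identity.
final = capture_final_image(iter([0.1, 0.2, 0.35]),
                            lambda v: v > 0.3,
                            lambda: [1, 5, 3],
                            lambda s: s)
```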
- In the technical solution of this embodiment, a preview image is generated by acquiring a shooting scene in real time through the camera; a facial feature value of the face image in each frame of the preview image is determined; and, when the facial feature value satisfies the set shooting condition, the shooting scene is photographed to obtain a final image.
- The technical solution of this embodiment solves the problem that the shooting opportunity is missed because the best moment of a person's expression cannot be predicted, achieves the purpose of automatically recognizing the user's expression and capturing the photo with the best expression, and thereby improves photographing efficiency and the user experience.
- FIG. 2 is a flowchart of a method for taking a photo in Embodiment 2 of the present disclosure.
- the method for taking a photo in Embodiment 2 includes steps 210-290.
- In step 210, a preview image is generated by acquiring the shooting scene in real time through the camera.
- the user aligns the camera with the shooting scene and activates the shooting function of the embodiment, and the smart terminal acquires the shooting scene in real time through the camera to generate a preview image and displays it through the display screen.
- For example, if the user wants to photograph a baby's smile, the camera of the smart terminal can be aimed at the baby and the photographing function of this embodiment activated.
- The smart terminal starts the corresponding function according to the user's instruction, acquires images of the baby in real time at a preset frequency through the camera, and generates preview images to display on the screen.
- In step 220, face recognition is performed on each frame of the preview image, and when a face is recognized, the face is focused on.
- The smart terminal performs face recognition on each frame of the preview image. If a preview image does not contain a face, the data related to that preview image is deleted. If it does, the position of the face is located and the face is focused on. For example, after acquiring preview images that include the baby, the smart terminal performs face recognition on them and selects the preview images that contain the baby's facial information.
- Because the lip motion feature value must be acquired, a preview image containing the baby's facial information is, for example, one in which the lip contour is clear and complete.
- When the baby's face is recognized, the smart terminal controls the camera to focus on the face to obtain clear facial information.
- In step 230, a rough position of the lips in the current preview image containing a face is determined, the precise position of the lips is determined from the rough position, and the lip contour is extracted.
- The smart terminal sequentially takes one frame from the preview images containing a face as the current preview image and determines the face region in it. After detecting the face region, the smart terminal can roughly locate the lips from the geometric features of the face: data analysis of a large number of faces shows that the lip region can be delimited as the lower third of the face, with its left and right edges a quarter of the face width in from the left and right face boundaries. Since there are many methods for extracting lip information from facial information, this embodiment merely illustrates one optional method and is not limited to it.
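The geometric heuristic above (lower third of the face, a quarter of the face width in from each side) can be written directly; `rough_lip_region` is a hypothetical helper:

```python
def rough_lip_region(face_x, face_y, face_w, face_h):
    """Rough lip localisation from face geometry, per the heuristic in the
    text: the lips lie in the lower third of the face, inset a quarter of
    the face width from the left and right face boundaries."""
    x0 = face_x + face_w // 4            # left edge of lip region
    x1 = face_x + face_w - face_w // 4   # right edge
    y0 = face_y + 2 * face_h // 3        # top of the lower third
    y1 = face_y + face_h                 # bottom of the face box
    return x0, y0, x1, y1

roi = rough_lip_region(0, 0, 120, 180)
```

A precise localisation step would then run only inside this region, which is far cheaper than searching the whole frame.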
- The smart terminal further processes the lip region with an image processing algorithm to determine the precise location of the lips.
- For example, a Fisher transform can be applied to the preview image to separate the skin-colour region from the lip-colour region, yielding the precise position of the lips. The lip information is then distinguished from the mouth-interior information according to the brightness of the preview image, and the mouth-interior information is filtered out so that it does not affect the determination of the lip contour. Finally, the processed image is binarised, and a grey-level projection of the binarised result yields the lip contour.
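Reading the "Fisher transform" here as a Fisher linear discriminant on colour samples (an assumption on our part, not stated in the patent), a minimal sketch of finding a direction that separates skin colour from lip colour might look like this:

```python
import numpy as np

def fisher_direction(skin_samples, lip_samples):
    """1-D Fisher discriminant direction separating two colour-sample
    classes (rows are e.g. (R, G, B) pixels): w = Sw^-1 (mu1 - mu2),
    where Sw is the within-class scatter (sum of class covariances)."""
    mu1, mu2 = skin_samples.mean(axis=0), lip_samples.mean(axis=0)
    sw = np.cov(skin_samples, rowvar=False) + np.cov(lip_samples, rowvar=False)
    # Small ridge term keeps Sw invertible for tiny sample sets.
    return np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), mu1 - mu2)

# Toy colour samples: skin pixels are greener, lip pixels are redder.
skin = np.array([[180, 140, 120], [175, 138, 118], [182, 142, 121]], float)
lips = np.array([[190, 90, 95], [185, 88, 92], [192, 93, 97]], float)
w = fisher_direction(skin, lips)
```

Projecting every pixel onto `w` and thresholding the projection is one way to obtain the binary skin/lip mask the text goes on to describe.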
- In step 240, the stretching degree and/or opening degree of the lips is determined from the lip contour.
- The smart terminal can obtain the left and right mouth corners by vertical projection, and use horizontal projection to obtain the uppermost and lowermost points of the upper lip centre and the uppermost and lowermost points of the lower lip centre.
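A simplified sketch of projection-based keypoint extraction on a binarised lip mask (it finds only the outer extremes, not the four separate upper/lower lip-centre points the text describes; the helper name is hypothetical):

```python
import numpy as np

def lip_keypoints(mask):
    """From a binarised lip mask (1 = lip pixel), use a vertical projection
    to find the left/right mouth corners and a horizontal projection to
    find the top and bottom extremes of the lips."""
    cols = mask.sum(axis=0)                    # vertical projection
    rows = mask.sum(axis=1)                    # horizontal projection
    xs = np.nonzero(cols)[0]
    ys = np.nonzero(rows)[0]
    left, right = int(xs[0]), int(xs[-1])      # mouth corners (x)
    top, bottom = int(ys[0]), int(ys[-1])      # lip extremes (y)
    return left, right, top, bottom

mask = np.zeros((8, 10), dtype=int)
mask[3:6, 2:9] = 1                             # a crude "lip" blob: rows 3-5, cols 2-8
corners = lip_keypoints(mask)
```

The distance `right - left` feeds the stretching degree and `bottom - top` feeds the opening degree in the steps that follow.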
- In step 250, it is determined whether the stretching degree exceeds a preset first threshold. If yes, step 280 is performed; if no, step 260 is performed.
- The stretching degree of the lips at the moment a smile begins is used as the preset first threshold. It can be obtained by statistically analysing the facial expressions of a large number of smiling people. The stretching degree of the lips of the person in the preview image is compared with the preset first threshold: if it exceeds the first threshold, the person is considered to have started smiling and step 280 is performed; if it is less than the first threshold, step 260 is performed.
- In step 260, it is determined whether the opening degree exceeds a preset second threshold. If yes, step 280 is performed; if no, step 270 is performed.
- The opening degree of the lips at the moment a smile begins is used as the preset second threshold.
- It can likewise be obtained by statistically analysing the facial expressions of a large number of smiling people. When the stretching degree of the person's lips in the preview image is less than the preset first threshold, the opening degree of the person's lips in the preview image is compared with the preset second threshold. If the opening degree exceeds the second threshold, the person can also be considered to have started smiling, and step 280 is performed; if it is less than the second threshold, step 270 is performed.
- In step 270, it is determined that the lip feature value does not satisfy the set shooting condition.
- The smart terminal has determined that in the current preview image the stretching degree of the lips is less than the preset first threshold and the opening degree of the lips is less than the preset second threshold, indicating that the baby has not started smiling. Since the current lip feature value does not satisfy the set shooting condition for a smiling person, the information of the current preview image is cleared and the process returns to step 230, taking the next frame of the preview image as the current preview image.
- In step 280, it is determined that the lip feature value satisfies the set shooting condition.
- If the smart terminal determines that the stretching degree in the current preview image exceeds the preset first threshold, this indicates that the baby has started to smile, that is, the lip motion feature value satisfies the set shooting condition, and shooting can start. If the smart terminal determines that the stretching degree of the lips in the current preview image is less than the preset first threshold, it determines whether the opening degree of the lips in the preview image exceeds the preset second threshold; if the opening degree exceeds the preset second threshold, this can also indicate that the baby has started to smile, that is, the lip motion feature value satisfies the set shooting condition, and shooting can start.
- In step 290, a preset number of frames are continuously captured of the shooting scene for a set time length, and the image with the largest expression feature value among the continuously shot images is determined to be the final image.
- The smart terminal takes a preview image that satisfies the set shooting condition as the shooting start point and shoots continuously for the set time length (the continuous shooting duration may be, for example, one minute) to obtain the preset number of frames. For example, the smart terminal may be set to take nine photos of the shooting scene within one minute when the continuous shooting mode is activated. A weighted sum of the stretching degree and the opening degree of each frame of the continuously shot images is computed to obtain its expression feature value; the expression feature values of the continuously shot images are compared, and the frame corresponding to the largest expression feature value is the final image.
- The smart terminal may determine the expression feature value as the sum of the stretching degree multiplied by its set weighting factor and the opening degree multiplied by its set weighting factor. For example, the smile feature value can be computed as the stretching degree of the lips multiplied by 80% (its weighting factor) plus the opening degree multiplied by 20% (its weighting factor). The smart terminal calculates the smile feature value of each frame containing a face, compares the calculated values, and selects the image corresponding to the largest smile feature value as the final image.
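The weighted-sum smile score with the example 80%/20% weighting factors from the text, and the selection of the best burst frame, can be sketched as (the burst data here is hypothetical):

```python
def smile_feature(stretch, opening, w_stretch=0.8, w_open=0.2):
    """Smile feature value as the weighted sum described above:
    80% stretching degree + 20% opening degree."""
    return w_stretch * stretch + w_open * opening

# Hypothetical burst: image name -> (stretching degree, opening degree).
burst = {"img1": (0.40, 0.10), "img2": (0.55, 0.08), "img3": (0.45, 0.12)}
final_image = max(burst, key=lambda k: smile_feature(*burst[k]))
```

Weighting the stretch far more heavily than the opening reflects the idea that a widening mouth is the stronger cue for a smile.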
- FIG. 3 is a flowchart of a method for taking a photo in Embodiment 3 of the present disclosure.
- The method for taking a photo in Embodiment 3 includes steps 310 to 380.
- In step 310, the shooting mode is turned on.
- The user aims the camera at the shooting scene and activates the shooting mode of this embodiment. For example, if the user wants to photograph a baby's smile, the camera of the smart terminal can be aimed at the baby and the photographing function of this embodiment activated.
- The smart terminal starts the corresponding function according to the user's instruction, acquires images of the baby in real time at the preset frequency through the camera, and displays the generated preview images on the screen in real time.
- In step 320, face recognition and face focusing are performed.
- The smart terminal performs face recognition on the acquired preview images and, for each preview image in which a face is recognized, controls the camera to focus on the face.
- Once the smart terminal recognizes the face region, it can determine the contours of the facial features from that region.
- The smart terminal can then determine the facial feature values in real time from the contour information. For example, the smart terminal may determine the lip area from the recognized face region, extract the lip contour with an image processing algorithm, and determine the lip feature value from the lip contour.
- The lip motion feature value includes the stretching degree and/or opening degree of the lips.
- In step 330, a smile feature value is calculated for each frame of the preview image.
- The smart terminal can determine the user's smile feature value from the stretching degree and/or opening degree of the lips, for example as: smile feature value = stretching degree × stretching weighting factor + opening degree × opening weighting factor.
- In step 340, it is determined whether the smile feature value is greater than a preset threshold. If yes, step 350 is performed; if no, step 330 is performed again.
- The preset threshold may be an empirical value of the smile feature value at the moment a smile begins, obtained by statistically analysing the facial expressions of a large number of smiling people.
- The smile feature value calculated with the smile feature value formula is compared with the preset threshold. If it exceeds the preset threshold, step 350 is performed; if it is less than the preset threshold, the process returns to step 330.
- In step 350, continuous shooting is started for one minute.
- When the smart terminal determines that the smile feature value of the current preview image exceeds the preset threshold, the current preview image is used as the starting image and the shooting scene is shot continuously for one minute. For example, the smart terminal determines that the smile feature value of the current preview image exceeds the preset threshold and, taking the preview image of the baby's current smiling expression as the starting image, shoots the baby continuously for one minute.
- In step 360, the smile feature value is calculated for the photos obtained by continuous shooting.
- The smart terminal performs face recognition on each continuously shot image, acquires the lip contour, and determines the stretching degree and opening degree of the lips of each frame from the lip contour. Using the smile feature value formula, the smile feature value of each continuously shot frame is determined from the stretching degree and the opening degree.
- In step 370, the photo with the largest smile feature value is selected and saved.
- The smart terminal compares the smile feature values of the continuously shot images, determines the frame with the largest smile feature value to be the final image, saves the final image, and deletes the remaining continuously shot images.
- In step 380, the shooting is completed.
- The smart terminal saves the final image, completing one capture of the person's smiling moment.
- The user can select a thumbnail of the final image to view the saved final image.
- The user can continue shooting with the shooting mode of this embodiment.
- The smart terminal saves the final image, the one with the largest smile feature value, completing one capture of the baby's smiling moment.
- When the smart terminal detects a viewing instruction input by the user, it exits the shooting mode of this embodiment according to the instruction and displays the final image obtained.
- the apparatus for photographing includes a preview image generating unit 410, a facial feature value determining unit 420, and a final image acquiring unit 430.
- the preview image generating unit 410 is configured to generate a preview image by acquiring a shooting scene in real time through the camera;
- the facial feature value determining unit 420 is configured to determine a facial feature value of the face image in each frame of the preview image; and
- the final image acquiring unit 430 is configured to photograph the shooting scene to obtain a final image when the facial feature value satisfies the set shooting condition.
- The preview image generating unit 410 generates a preview image by acquiring a shooting scene in real time through the camera; the facial feature value determining unit 420 determines the facial feature value of the face image in each frame of the preview image; and, when the facial feature value satisfies the set shooting condition, the final image acquiring unit 430 photographs the shooting scene to acquire the final image.
- The technical solution of this embodiment solves the problem that the shooting opportunity is missed because the best moment of a person's expression cannot be predicted, achieves the purpose of automatically recognizing the user's expression and capturing the photo with the best expression, and thereby improves photographing efficiency and the user experience.
- the final image obtaining unit 430 is configured to:
- continuously capture a preset number of frames of the shooting scene for a set time length, and determine the image with the largest expression feature value among the continuously shot images to be the final image.
- the facial feature value determining unit 420 includes:
- a lip motion feature value determining subunit, configured to determine the lip contour of the face contained in each frame of the preview image and to determine a lip motion feature value from the lip contour.
- The lip motion feature value determining subunit is configured to:
- determine the stretching degree and/or opening degree of the lips from the lip contour.
- the device further includes:
- a photographing condition determining unit, configured to, after the stretching degree and/or opening degree of the lips is determined from the lip contour, compare the stretching degree with a preset first threshold;
- the final image acquiring unit 430 is further configured to photograph the shooting scene when the stretching degree exceeds the preset first threshold or the opening degree exceeds a preset second threshold.
- the device further includes:
- an image saving unit, configured to save the final image and delete the remaining continuously shot images after the image with the largest expression feature value among the continuously shot images is determined to be the final image.
- The above apparatus for taking a photo can perform the method for taking a photo provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to that method.
- FIG. 5 is a schematic diagram of the hardware structure of a terminal (for example, a mobile phone) according to an embodiment of the present disclosure. As shown in FIG. 5, the terminal includes:
- one or more processors 501 and a memory 502; one processor 501 is taken as an example in FIG. 5.
- The terminal may further include an input device 503 and an output device 504.
- The processor 501, the memory 502, the input device 503 and the output device 504 in the terminal may be connected by a bus or by other means; connection by a bus is taken as an example in FIG. 5.
- The memory 502 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for taking a photo in the embodiments of the present application (for example, the preview image generating unit 410, the facial feature value determining unit 420 and the final image acquiring unit 430 shown in FIG. 4).
- By running the non-volatile software programs, instructions and modules stored in the memory 502, the processor 501 executes the various functional applications and data processing of the terminal, that is, implements the method for taking a photo.
- The memory 502 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the method for taking a photo, and the like.
- memory 502 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
- memory 502 can optionally include a memory that is remotely located relative to processor 501.
- the input device 503 can be used to receive input numeric or character information, as well as user settings and key signal inputs related to function control.
- Output device 504 can include a display device such as a display screen.
- the one or more modules are stored in the memory 502, and when executed by the one or more processors 501, perform a method of taking a photo in any of the above method embodiments.
- Embodiments of the present disclosure provide a non-volatile storage medium storing computer-executable instructions configured to perform a method of taking a photo in any of the embodiments of the present disclosure.
- The present disclosure generates a preview image by acquiring a shooting scene in real time through a camera, determines a facial feature value of the face image in each frame of the preview image, and photographs the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition.
- The purpose of automatically recognizing the user's expression for shooting and obtaining the photo with the best expression is achieved, improving photographing efficiency and the user experience.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The present application discloses a method and apparatus for taking a photo. The method includes: generating a preview image by acquiring a shooting scene in real time through a camera; determining a facial feature value of the face image in each frame of the preview image; and, when the facial feature value satisfies a set shooting condition, photographing the shooting scene to obtain a final image.
Description
This application claims priority to Chinese Patent Application No. 201510898441.2, filed with the Chinese Patent Office on December 8, 2015 and entitled "Method and Apparatus for Taking a Photo", the entire contents of which are incorporated herein by reference.
Embodiments of the present disclosure relate to smart terminal technologies, and for example to a method and apparatus for taking a photo.
In daily life, it is increasingly common for people to take photos with smart terminals (for example, smartphones or tablet computers), and the photographing functions of smart terminals are becoming more and more sophisticated.
At present, when a user takes a photo, a person is usually the main subject of the picture, and the person's expression directly affects the result. In the course of making the present application, the inventor found that the user usually cannot capture the exact moment when the person's expression is at its best, so the result fails to meet the user's expectations. For example, when photographing a smiling person, the user often cannot tell at which moment the smile is best, which may force the user to shoot repeatedly. This is especially true when photographing a baby's smile: because the best moment of the baby's smile is hard to predict, that moment is frequently missed. Sometimes, to obtain one best photo, the user has to shoot many times and then select the most satisfactory photo from the many taken, which increases the time cost of shooting and degrades the user experience.
Summary
The present disclosure provides a method and apparatus for taking a photo that can automatically recognize a user's expression for shooting and obtain the photo with the best expression.
In a first aspect, an embodiment of the present disclosure provides a method for taking a photo, including:
generating a preview image by acquiring a shooting scene in real time through a camera;
determining a facial feature value of the face image in each frame of the preview image; and
when the facial feature value satisfies a set shooting condition, photographing the shooting scene to obtain a final image.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for taking a photo, the apparatus including:
a preview image generating unit, configured to generate a preview image by acquiring a shooting scene in real time through a camera;
a facial feature value determining unit, configured to determine a facial feature value of the face image in each frame of the preview image; and
a final image acquiring unit, configured to photograph the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition.
In a third aspect, an embodiment of the present disclosure further provides a terminal with a photographing function, the terminal including at least one processor and a memory, where the memory stores a program executable by the at least one processor, the program including:
a preview image generating unit, configured to generate a preview image by acquiring a shooting scene in real time through a camera;
a facial feature value determining unit, configured to determine a facial feature value of the face image in each frame of the preview image; and
a final image acquiring unit, configured to photograph the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition.
In a fourth aspect, an embodiment of the present disclosure provides a non-volatile storage medium storing computer-executable instructions, where the computer-executable instructions are configured to perform the method for taking a photo in any embodiment of the present disclosure.
The present disclosure generates a preview image by acquiring a shooting scene in real time through a camera, determines a facial feature value of the face image in each frame of the preview image, and photographs the shooting scene to obtain a final image when the facial feature value satisfies a set shooting condition. The present disclosure solves the problem that the shooting opportunity is missed because the best moment of a person's expression cannot be predicted, achieves the purpose of automatically recognizing the user's expression for shooting and obtaining the photo with the best expression, and thereby improves photographing efficiency and the user experience.
FIG. 1 is a flowchart of a method for taking a photo in Embodiment 1 of the present disclosure;
FIG. 2 is a flowchart of a method for taking a photo in Embodiment 2 of the present disclosure;
FIG. 3 is a flowchart of a method for taking a photo in Embodiment 3 of the present disclosure;
FIG. 4 is a schematic structural diagram of an apparatus for taking a photo in Embodiment 4 of the present disclosure; and
FIG. 5 is a schematic diagram of the hardware structure of a terminal according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely intended to explain the present disclosure rather than to limit it. It should also be noted that, for ease of description, the accompanying drawings show only the parts related to the present disclosure rather than the entire structure.
Embodiment 1
FIG. 1 is a flowchart of a method for taking a photo provided in Embodiment 1 of the present disclosure. This embodiment is applicable to capturing a photo at the best moment of a person's smile. The method may be performed by a smart terminal equipped with an apparatus for capturing a photo at the best moment of a person's smile. The method includes step 110, step 120 and step 130.
In step 110, a preview image is generated by acquiring the shooting scene in real time through the camera.
The user selects a shooting scene and aims the camera of the smart terminal at it. According to the principle of optical imaging, the shooting scene is imaged on the photosensitive element of the camera through the camera lens. The photosensitive element converts the optical signal into an electrical signal and sends it to a controller inside the smart terminal. The controller generates a preview image of the shooting scene and controls the display screen of the smart terminal to display it. Since the people and/or objects in the shooting scene selected by the user are not static, the smart terminal does not acquire only one frame of the shooting scene; instead, it acquires the shooting scene in real time at a set frequency and generates preview images for display.
In step 120, facial-feature values of a face image in each frame of the preview images are determined.
The facial-feature values may include an eyebrow-motion feature value, an eye-motion feature value, a lip-motion feature value, and the like. Since a person's mood is reflected in facial features, set values of the facial-feature values can represent the person's emotional state. For example, raised mouth corners, parted lips and/or narrowed eyes may indicate a smile, while drooping mouth corners, closed lips and/or widened eyes may indicate anger.
The smart terminal performs face recognition on each frame of the preview images to determine whether the frame contains a face. If it does, the position of the face is determined. Using an image processing algorithm, the smart terminal determines the lip contour in each face-containing frame and derives the lip-motion feature value from that contour. It may likewise determine the eye contour in each face-containing frame and derive the eye-motion feature value from it.
In step 130, when the facial-feature values satisfy the set shooting condition, the scene is shot to obtain a final image.
The shooting condition may be a statistical value of the facial-feature values corresponding to the emotion the user wants to capture. Such statistical values for different emotions can be obtained by statistically analyzing a large amount of facial expression data. In practice, the shooting condition may be set to the facial-feature values at the moment an emotion first appears on the face; for example, the lip-motion feature value at the moment a user starts to smile may be set as the condition for starting to shoot.
The smart terminal matches the facial-feature values against the set shooting condition. When the condition is satisfied, it continuously captures a preset number of burst images of the scene within a set time length and selects, as the final image, the burst image with the largest expression feature value. For example, suppose the user wants to photograph a person's smile at its best moment. Using the method of this embodiment, the user points the camera at the person; the camera automatically focuses on the face, and the terminal matches the obtained lip-motion feature value against the set shooting condition. If the set shooting condition is the lip-motion feature value at the moment a person starts to smile, the terminal checks whether the current lip-motion feature value exceeds that condition and, if so, starts the burst function, shooting the person continuously for one minute to obtain multiple images. From the lip-motion feature values, the terminal determines the expression feature value of each captured image and takes the image with the largest expression feature value as the final image. The terminal saves the final image and deletes the remaining burst images. Because the final image has the largest expression feature value, its captured expression is the best; deleting the remaining images avoids wasting storage space on poorly captured photos.
In the technical solution of this embodiment, a shooting scene is acquired in real time through a camera to generate preview images; facial-feature values of a face image in each frame of the preview images are determined; and when the facial-feature values satisfy a set shooting condition, the scene is shot to obtain a final image. This solution solves the problem of missing the shooting opportunity because the best moment of a person's expression cannot be predicted, achieves automatic recognition of the user's expression to obtain the photo with the best expression, and improves both shooting efficiency and the user experience.
Embodiment 2
FIG. 2 is a flowchart of a method for taking a photo according to Embodiment 2 of the present disclosure. The method includes steps 210 to 290.
In step 210, a shooting scene is acquired in real time through a camera to generate preview images.
The user points the camera at the shooting scene and enables the shooting function of this embodiment; the smart terminal then acquires the scene in real time through the camera, generates preview images, and displays them on the screen. For example, a user who wants to photograph a baby's smile points the camera of the smart terminal at the baby and enables the shooting function of this embodiment.
In response to the user's instruction to enable this function, the smart terminal starts it, acquires images of the baby in real time through the camera at a preset frequency, and displays the generated preview images on the screen.
In step 220, face recognition is performed on each frame of the preview images, and when a face is recognized, the camera focuses on it.
The smart terminal performs face recognition on each frame of the preview images. If a frame contains no face, the data related to that frame is discarded. If it does contain a face, the terminal locates the face and focuses on it. For example, after acquiring preview images that include the baby, the terminal performs face recognition on them and selects the frames containing the baby's facial information. Since the lip-motion feature value is needed, these frames are, for example, frames in which the lip contour is reasonably clear and complete. When the baby's face is recognized, the terminal directs the camera to focus on it so as to obtain sharp facial information.
In step 230, the coarse position of the lips in the current face-containing preview image is determined, the precise position of the lips is determined from the coarse position, and the lip contour is extracted.
The smart terminal takes, in order, one frame of the face-containing preview images as the current preview image and determines the face region in it. Once the face region has been detected, the lips can be coarsely localized from the geometric features of the face. Data analysis of a large amount of facial information allows the lip region to be delimited as the lower third of the face, inset from the left and right face borders by a quarter of the face width. Since there are many ways to extract lip information from facial information, the method given here is merely one optional example, and this embodiment is not limited to it. The terminal then applies an image processing algorithm to the lip region to determine the precise position of the lips. For example, a Fisher transform may be applied to the preview image to separate the skin-colored region from the lip-colored region and thereby obtain the precise lip position. Next, the lip information is separated from the mouth-cavity information using the brightness of the preview image, and the mouth-cavity information is filtered out so that it does not interfere with determining the lip contour. Finally, the processed image is binarized, and a gray-level projection of the binarized result yields the lip contour.
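The coarse localization rule above can be sketched as a small helper. This is a minimal illustration, assuming the face region is given as an axis-aligned bounding box; the function name and box format are illustrative, not taken from the patent:

```python
def coarse_lip_region(face_x, face_y, face_w, face_h):
    """Coarse lip ROI from a face bounding box, per the heuristic in the
    text: the lower third of the face, inset from the left/right face
    borders by a quarter of the face width."""
    x0 = face_x + face_w // 4            # a quarter of the face width in from the left
    x1 = face_x + face_w - face_w // 4   # ...and from the right
    y0 = face_y + (2 * face_h) // 3      # start of the lower third of the face
    y1 = face_y + face_h                 # bottom of the face box
    return x0, y0, x1, y1
```

For a 300x300 face box at the origin this yields the region (75, 200, 225, 300), i.e. the middle half of the bottom third of the face.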
In step 240, the split degree and/or opening degree of the lips are determined from the lip contour.
Building on the lip segmentation and localization, the smart terminal can obtain the left and right mouth corners by vertical projection, and obtain the uppermost and lowermost points of the upper-lip center and the uppermost and lowermost points of the lower-lip center by horizontal projection. By computing with the coordinates of these points (the two mouth corners and the four lip-center extremes), the split degree and opening degree of the lips can be determined.
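The projection step can be illustrated with a short sketch, assuming the lips have already been segmented into a binary mask (True = lip pixel). This is a deliberate simplification: it measures only the outer extent of the mask, whereas the patent additionally distinguishes the four lip-center points so that the upper and lower lips can be told apart.

```python
import numpy as np

def lip_degrees(lip_mask):
    """Estimate the mouth's split degree (corner-to-corner width) and
    opening degree (vertical extent) from a binary lip mask using the
    projection idea described above."""
    cols = lip_mask.any(axis=0)           # vertical projection -> occupied columns
    rows = lip_mask.any(axis=1)           # horizontal projection -> occupied rows
    xs = np.flatnonzero(cols)
    ys = np.flatnonzero(rows)
    if xs.size == 0:                      # no lip pixels found
        return 0, 0
    split_degree = int(xs[-1] - xs[0])    # left mouth corner to right mouth corner
    opening_degree = int(ys[-1] - ys[0])  # topmost to bottommost lip point
    return split_degree, opening_degree
```

A full implementation would additionally locate the four lip-center extremes on the horizontal projection to measure the inner-lip gap rather than the total mask height.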
In step 250, it is determined whether the split degree exceeds a preset first threshold; if so, step 280 is performed, otherwise step 260 is performed.
The split degree of the lips at the moment a person starts to smile is taken as the preset first threshold; it can be obtained, using statistical principles, from the facial expressions of a large number of smiling people. The split degree of the lips in the preview image is compared with the first threshold: if it exceeds the threshold, the person is considered to have started smiling, and step 280 is performed; if it is below the threshold, step 260 is performed.
In step 260, it is determined whether the opening degree exceeds a preset second threshold; if so, step 280 is performed, otherwise step 270 is performed.
The opening degree of the lips at the moment a person starts to smile is taken as the preset second threshold; it, too, can be obtained statistically from the facial expressions of a large number of smiling people. When the split degree in the preview image is below the first threshold, the opening degree in the preview image is compared with the second threshold. If the opening degree exceeds the second threshold, the person can likewise be considered to have started smiling, and step 280 is performed. If the opening degree is below the second threshold, step 270 is performed.
In step 270, it is determined that the lip-motion feature value does not satisfy the set shooting condition.
For example, when the user is photographing a baby's smile and the smart terminal determines that, in the current preview image, the split degree is below the first threshold and the opening degree is below the second threshold, the baby has not yet started smiling and the current lip-motion feature value does not satisfy the set shooting condition for capturing a smile. The terminal then discards the information of the current preview image and returns to step 230, taking the next preview frame as the current preview image.
In step 280, it is determined that the lip-motion feature value satisfies the set shooting condition.
For example, when the user is photographing a baby's smile and the smart terminal determines that the split degree in the current preview image exceeds the first threshold, the baby has started smiling; the lip-motion feature value satisfies the set shooting condition and shooting can begin. If the terminal determines that the split degree is below the first threshold, it then checks whether the opening degree exceeds the second threshold; if it does, this likewise indicates that the baby has started smiling, the lip-motion feature value satisfies the set shooting condition, and shooting can begin.
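The decision of steps 250 through 280 reduces to a small predicate. A sketch, with the two thresholds passed in as parameters, since the patent obtains them empirically from statistics rather than fixing them:

```python
def lip_motion_satisfies(split_degree, opening_degree,
                         split_threshold, opening_threshold):
    """Steps 250-280: the shooting condition is met when the split
    degree exceeds the first threshold, or, failing that, when the
    opening degree exceeds the second threshold."""
    if split_degree > split_threshold:    # step 250 -> step 280
        return True
    return opening_degree > opening_threshold  # step 260 -> 280 or 270
```

With illustrative thresholds of 40 and 15, a frame with split degree 30 and opening degree 20 still triggers shooting via the opening-degree branch.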
In step 290, a preset number of burst images of the scene are captured continuously within a set time length, and the burst image with the largest expression feature value is determined to be the final image.
Taking the preview image that satisfies the set shooting condition as the starting point, the smart terminal shoots continuously for a set time length (the burst duration may be, for example, one minute) to obtain a preset number of burst images. For example, the terminal may be configured so that, in burst mode, a one-minute burst yields nine photos of the scene. For each burst frame, the split degree and the opening degree are combined by weighted summation to obtain the expression feature value; the expression feature values of the burst images are then compared, and the frame with the largest value is determined to be the final image. The terminal may compute the expression feature value as the split degree multiplied by its set weight factor plus the opening degree multiplied by its set weight factor. For example, a smile feature value may be computed as the split degree times 80% (its weight factor) plus the opening degree times 20% (its weight factor). The terminal computes the smile feature value of each face-containing frame, compares the results, and selects the frame with the largest smile feature value as the final image.
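The burst-selection step can be sketched as follows, using the 80%/20% example weights from the text; the tuple format used for burst frames is an illustrative assumption:

```python
def expression_value(split_degree, opening_degree, m=0.8, n=0.2):
    """Weighted expression feature value; defaults are the text's
    example weights of 80% and 20%."""
    return m * split_degree + n * opening_degree

def pick_final_image(burst):
    """Given burst frames as (frame_id, split_degree, opening_degree)
    tuples, return the id of the frame with the largest expression
    feature value; the remaining frames would then be discarded."""
    best = max(burst, key=lambda f: expression_value(f[1], f[2]))
    return best[0]
```

For a burst `[("f1", 30, 10), ("f2", 45, 20), ("f3", 40, 30)]` the expression values are 26, 40 and 38, so frame `"f2"` is kept as the final image.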
Embodiment 3
FIG. 3 is a flowchart of a method for taking a photo according to Embodiment 3 of the present disclosure. The method includes steps 310 to 380.
In step 310, the shooting mode is enabled.
The user points the camera at the shooting scene and enables the shooting mode of this embodiment. For example, a user who wants to photograph a baby's smile points the camera of the smart terminal at the baby and enables the shooting function of this embodiment. In response to the user's instruction, the terminal starts the function, acquires images of the baby in real time through the camera at a preset frequency, and displays the generated preview images on the screen in real time.
In step 320, face recognition and face focusing are performed.
The smart terminal performs face recognition on the acquired preview images and, for frames in which a face is recognized, directs the camera to focus on the face. Having recognized the face region, the terminal can determine the contours of the facial features from it, and can determine the facial-feature values in real time from those contours. For example, the terminal can determine the lip region from the recognized face region, extract the lip contour with an image processing algorithm, and determine the lip-motion feature value from the contour, where the lip-motion feature value includes the split degree and/or opening degree of the lips.
In step 330, the smile feature value is computed for each preview frame.
The smart terminal can determine the user's smile feature value from the split degree and/or opening degree of the lips. Typically, weight factors are set in advance for the two quantities: a split-degree weight m and an opening-degree weight n (0 ≤ m ≤ 1, 0 ≤ n ≤ 1, and m + n = 1). The smile feature value of each preview frame is then determined as: smile feature value = split degree × m + opening degree × n. For example, with a split-degree weight of 80% and an opening-degree weight of 20%, the formula for a smiling person becomes: smile feature value = split degree × 80% + opening degree × 20%.
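The formula, together with the stated weight constraints, can be written directly as a function; the defaults are the 80%/20% example weights from the text:

```python
def smile_feature_value(split_degree, opening_degree, m=0.8, n=0.2):
    """smile value = split_degree * m + opening_degree * n, enforcing
    the constraints 0 <= m <= 1, 0 <= n <= 1 and m + n = 1 stated in
    the text. Default weights follow the 80%/20% example."""
    if not (0.0 <= m <= 1.0 and 0.0 <= n <= 1.0 and abs(m + n - 1.0) < 1e-9):
        raise ValueError("weights must satisfy 0 <= m, n <= 1 and m + n = 1")
    return split_degree * m + opening_degree * n
```

For example, a split degree of 50 and an opening degree of 10 give a smile feature value of 50 × 0.8 + 10 × 0.2 = 42.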
In step 340, it is determined whether the smile feature value exceeds a preset threshold; if so, step 350 is performed, otherwise the flow returns to step 330.
The preset threshold may be an empirical smile feature value at the moment a smile begins, which can be obtained, using statistical principles, from the facial expressions of a large number of smiling people. The smile feature value computed with the formula above is compared with the preset threshold: if it exceeds the threshold, step 350 is performed; if it is below the threshold, the flow returns to step 330.
In step 350, burst shooting is started and continues for one minute.
When the smart terminal determines that the smile feature value of the current preview image exceeds the preset threshold, it takes the current preview image as the starting image and bursts the scene for one minute. For example, on determining that the smile feature value of the current preview image exceeds the threshold, the terminal takes the current preview image of the baby's smiling expression as the starting image and bursts the baby for one minute.
In step 360, the smile feature value is computed for each burst photo.
The smart terminal performs face recognition on each burst image, obtains the lip contour, and determines the split degree and opening degree of each burst frame from the contour. Using the smile-feature-value formula, it determines the smile feature value of each burst frame from the split and opening degrees.
In step 370, the photo with the largest smile feature value is selected and saved.
The smart terminal compares the smile feature values of all burst images, determines the frame with the largest value to be the final image, saves the final image, and deletes the remaining burst images.
In step 380, the shot is completed.
By saving the final image, the smart terminal completes one capture of the person's smiling moment. The user can select the thumbnail of the final image to view the saved image, and can also continue shooting in the shooting mode of this embodiment. For example, by saving the final image with the baby's best smile feature value, the terminal completes one capture of the baby's smiling moment. The terminal then detects a viewing instruction from the user, exits the shooting mode of this embodiment in response, and displays the captured final image.
Embodiment 4
FIG. 4 is a schematic structural diagram of an apparatus for taking a photo according to Embodiment 4 of the present disclosure. The apparatus includes a preview image generating unit 410, a facial-feature value determining unit 420 and a final image acquiring unit 430.
The preview image generating unit 410 is configured to acquire a shooting scene in real time through a camera to generate preview images;
the facial-feature value determining unit 420 is configured to determine facial-feature values of a face image in each frame of the preview images; and
the final image acquiring unit 430 is configured to shoot the scene to obtain a final image when the facial-feature values satisfy the set shooting condition.
In the technical solution of this embodiment, the preview image generating unit 410 acquires a shooting scene in real time through a camera to generate preview images; the facial-feature value determining unit 420 determines facial-feature values of a face image in each frame of the preview images; and when the facial-feature values satisfy the set shooting condition, the final image acquiring unit 430 shoots the scene to obtain a final image. This solution solves the problem of missing the shooting opportunity because the best moment of a person's expression cannot be predicted, achieves automatic recognition of the user's expression to obtain the photo with the best expression, and improves both shooting efficiency and the user experience.
Optionally, the final image acquiring unit 430 is configured to:
continuously capture a preset number of burst images of the scene within a set time length, and determine the burst image with the largest expression feature value to be the final image.
Optionally, the facial-feature value determining unit 420 includes:
a lip-motion feature value determining subunit, configured to determine the lip contour of the face in each frame of the preview images and determine a lip-motion feature value from the lip contour.
Optionally, the lip-motion feature value determining subunit is configured to:
perform face recognition on each frame of the preview images and, when a face is recognized, focus on the face;
determine the coarse position of the lips from the geometric features of the face, determine the precise position of the lips from the coarse position, and then extract the lip contour; and
determine the split degree and/or opening degree of the lips from the lip contour.
Optionally, the apparatus further includes:
a shooting condition determining unit, configured to compare the split degree with a preset first threshold after the split degree and/or opening degree of the lips have been determined from the lip contour;
determine that the lip-motion feature value satisfies the set shooting condition if the split degree exceeds the preset first threshold;
determine whether the opening degree exceeds a preset second threshold if the split degree is below the preset first threshold;
determine that the lip-motion feature value satisfies the set shooting condition when the opening degree exceeds the preset second threshold; and
determine that the lip-motion feature value does not satisfy the set shooting condition when the opening degree is below the preset second threshold.
Optionally, the final image acquiring unit 430 is configured to:
compute a weighted sum of the split degree and the opening degree of each burst frame to obtain the expression feature value; and
compare the expression feature values of the burst images and determine the frame with the largest expression feature value to be the final image.
Optionally, the apparatus further includes:
an image saving unit, configured to save the final image and delete the remaining burst images after the burst image with the largest expression feature value has been determined to be the final image.
The above apparatus for taking a photo can perform the method for taking a photo provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the performed method.
FIG. 5 is a schematic diagram of the hardware structure of a terminal (for example, a feature phone) according to an embodiment of the present application. As shown in FIG. 5, the terminal includes:
one or more processors 501 and a memory 502; one processor 501 is taken as an example in FIG. 5.
The terminal may further include an input device 503 and an output device 504.
The processor 501, memory 502, input device 503 and output device 504 in the terminal may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 5.
The memory 502, as a non-volatile computer-readable storage medium, can store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method for taking a photo in the embodiments of the present application (for example, the preview image generating unit 410, facial-feature value determining unit 420 and final image acquiring unit 430 shown in FIG. 4). By running the non-volatile software programs, instructions and modules stored in the memory 502, the processor 501 performs the various functional applications and data processing of the terminal, that is, implements the method for taking a photo.
The memory 502 may include a program storage area and a data storage area, where the program storage area may store the operating system and the applications required by at least one function, and the data storage area may store data created in the use of the method for taking a photo. In addition, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 502 optionally includes memory set up remotely from the processor 501.
The input device 503 can receive input numeric or character information, as well as key-signal input related to user settings and function control. The output device 504 may include a display device such as a display screen.
The one or more modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the method for taking a photo in any of the method embodiments above.
An embodiment of the present disclosure provides a non-volatile storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the method for taking a photo in any embodiment of the present disclosure.
Note that the above are merely preferred embodiments of the present disclosure and the technical principles applied. Those skilled in the art will understand that the present disclosure is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the disclosure. Therefore, although the disclosure has been described in some detail through the above embodiments, it is not limited to them and may include more other equivalent embodiments without departing from its concept, the scope of the disclosure being determined by the appended claims.
In the present disclosure, a shooting scene is acquired in real time through a camera to generate preview images; facial-feature values of a face image in each frame of the preview images are determined; and when the facial-feature values satisfy a set shooting condition, the scene is shot to obtain a final image. This achieves automatic recognition of the user's expression to obtain the photo with the best expression, improving both shooting efficiency and the user experience.
Claims (14)
- A method for taking a photo, comprising: acquiring a shooting scene in real time through a camera to generate preview images; determining facial-feature values of a face image in each frame of the preview images; and, when the facial-feature values satisfy a set shooting condition, shooting the scene to obtain a final image.
- The method according to claim 1, wherein shooting the scene to obtain a final image comprises: continuously capturing a preset number of burst images of the scene within a set time length, and determining the burst image with the largest expression feature value to be the final image.
- The method according to claim 2, wherein determining the facial-feature values of the face image in each frame of the preview images comprises: determining the lip contour of the face in each frame of the preview images, and determining a lip-motion feature value from the lip contour.
- The method according to claim 3, wherein determining the lip contour of the face in each frame of the preview images and determining the lip-motion feature value from the lip contour comprises: performing face recognition on each frame of the preview images and, when a face is recognized, focusing on the face; determining the coarse position of the lips from the geometric features of the face, determining the precise position of the lips from the coarse position, and then extracting the lip contour; and determining the split degree and/or opening degree of the lips from the lip contour.
- The method according to claim 4, further comprising, after determining the split degree and/or opening degree of the lips from the lip contour: comparing the split degree with a preset first threshold; if the split degree exceeds the preset first threshold, determining that the lip-motion feature value satisfies the set shooting condition; if the split degree is below the preset first threshold, determining whether the opening degree exceeds a preset second threshold; when the opening degree exceeds the preset second threshold, determining that the lip-motion feature value satisfies the set shooting condition; and when the opening degree is below the preset second threshold, determining that the lip-motion feature value does not satisfy the set shooting condition.
- The method according to claim 4, wherein determining the burst image with the largest expression feature value to be the final image comprises: computing a weighted sum of the split degree and the opening degree of each burst frame to obtain the expression feature value; and comparing the expression feature values of the burst images and determining the frame with the largest expression feature value to be the final image.
- The method according to claim 2, further comprising, after determining the burst image with the largest expression feature value to be the final image: saving the final image and deleting the remaining burst images.
- An apparatus for taking a photo, comprising: a preview image generating unit, configured to acquire a shooting scene in real time through a camera to generate preview images; a facial-feature value determining unit, configured to determine facial-feature values of a face image in each frame of the preview images; and a final image acquiring unit, configured to shoot the scene to obtain a final image when the facial-feature values satisfy a set shooting condition.
- The apparatus according to claim 8, wherein the final image acquiring unit is configured to: continuously capture a preset number of burst images of the scene within a set time length, and determine the burst image with the largest expression feature value to be the final image.
- The apparatus according to claim 9, wherein the facial-feature value determining unit comprises: a lip-motion feature value determining subunit, configured to determine the lip contour of the face in each frame of the preview images and determine a lip-motion feature value from the lip contour.
- The apparatus according to claim 10, wherein the lip-motion feature value determining subunit is configured to: perform face recognition on each frame of the preview images and, when a face is recognized, focus on the face; determine the coarse position of the lips from the geometric features of the face, determine the precise position of the lips from the coarse position, and then extract the lip contour; and determine the split degree and/or opening degree of the lips from the lip contour.
- The apparatus according to claim 11, further comprising: a shooting condition determining unit, configured to compare the split degree with a preset first threshold after the split degree and/or opening degree of the lips have been determined from the lip contour; determine that the lip-motion feature value satisfies the set shooting condition if the split degree exceeds the preset first threshold; determine whether the opening degree exceeds a preset second threshold if the split degree is below the preset first threshold; determine that the lip-motion feature value satisfies the set shooting condition when the opening degree exceeds the preset second threshold; and determine that the lip-motion feature value does not satisfy the set shooting condition when the opening degree is below the preset second threshold.
- The apparatus according to claim 11, wherein the final image acquiring unit is configured to: compute a weighted sum of the split degree and the opening degree of each burst frame to obtain the expression feature value; and compare the expression feature values of the burst images and determine the frame with the largest expression feature value to be the final image.
- The apparatus according to claim 9, further comprising: an image saving unit, configured to save the final image and delete the remaining burst images after the burst image with the largest expression feature value has been determined to be the final image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/244,509 US20170161553A1 (en) | 2015-12-08 | 2016-08-23 | Method and electronic device for capturing photo |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510898441.2A CN105872352A (zh) | 2015-12-08 | 2015-12-08 | 一种拍摄照片的方法及装置 |
CN201510898441.2 | 2015-12-08 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/244,509 Continuation US20170161553A1 (en) | 2015-12-08 | 2016-08-23 | Method and electronic device for capturing photo |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017096861A1 true WO2017096861A1 (zh) | 2017-06-15 |
Family
ID=56624400
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/088969 WO2017096861A1 (zh) | 2015-12-08 | 2016-07-06 | 拍摄照片的方法及装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105872352A (zh) |
WO (1) | WO2017096861A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229369A (zh) * | 2017-12-28 | 2018-06-29 | 广东欧珀移动通信有限公司 | 图像拍摄方法、装置、存储介质及电子设备 |
CN113313009A (zh) * | 2021-05-26 | 2021-08-27 | Oppo广东移动通信有限公司 | 连拍的输出图像方法、装置、终端及可读存储介质 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106372627A (zh) * | 2016-11-07 | 2017-02-01 | 捷开通讯(深圳)有限公司 | 基于脸部图像识别的自动拍照方法和装置、电子设备 |
CN107659722B (zh) * | 2017-09-25 | 2020-02-18 | 维沃移动通信有限公司 | 一种图像选择方法及移动终端 |
CN108737729A (zh) * | 2018-05-04 | 2018-11-02 | Oppo广东移动通信有限公司 | 自动拍照方法和装置 |
CN109040420B (zh) * | 2018-06-15 | 2020-11-24 | 青岛海信移动通信技术股份有限公司 | 终端设备解锁方法、装置、电子设备及存储介质 |
CN109276217B (zh) * | 2018-12-05 | 2021-12-24 | 济宁市龙浩钢管制造有限公司 | 过滤器实时驱动平台 |
CN110166696B (zh) * | 2019-06-28 | 2021-03-26 | Oppo广东移动通信有限公司 | 拍摄方法、装置、终端设备以及计算机可读存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000259833A (ja) * | 1999-03-08 | 2000-09-22 | Toshiba Corp | 顔画像処理装置及びその処理方法 |
US20040218916A1 (en) * | 2003-03-25 | 2004-11-04 | Hiroshi Yamaguchi | Automatic photography system |
CN1642233A (zh) * | 2005-01-05 | 2005-07-20 | 张健 | 可选择最佳拍照时机的数码相机 |
JP2006237803A (ja) * | 2005-02-23 | 2006-09-07 | Konica Minolta Photo Imaging Inc | 撮像システム、写真撮影スタジオ、撮像システムの制御方法 |
CN101472061A (zh) * | 2007-12-27 | 2009-07-01 | 深圳富泰宏精密工业有限公司 | 笑脸追踪系统及方法 |
CN101472063A (zh) * | 2007-12-28 | 2009-07-01 | 希姆通信息技术(上海)有限公司 | 一种手机相机捕捉笑脸的方法 |
CN102377905A (zh) * | 2010-08-18 | 2012-03-14 | 佳能株式会社 | 图像摄取装置及其控制方法 |
CN103369214A (zh) * | 2012-03-30 | 2013-10-23 | 华晶科技股份有限公司 | 图像获取方法与图像获取装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7671861B1 (en) * | 2001-11-02 | 2010-03-02 | At&T Intellectual Property Ii, L.P. | Apparatus and method of customizing animated entities for use in a multi-media communication application |
US9503645B2 (en) * | 2012-05-24 | 2016-11-22 | Mediatek Inc. | Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof |
CN103685940A (zh) * | 2013-11-25 | 2014-03-26 | 上海斐讯数据通信技术有限公司 | 一种通过表情识别拍摄照片的方法 |
- 2015-12-08: CN application CN201510898441.2A filed (published as CN105872352A, status pending)
- 2016-07-06: PCT application PCT/CN2016/088969 filed (published as WO2017096861A1, active application filing)
Also Published As
Publication number | Publication date |
---|---|
CN105872352A (zh) | 2016-08-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16872062; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16872062; Country of ref document: EP; Kind code of ref document: A1