CN108093170B - User photographing method, device and equipment - Google Patents

User photographing method, device and equipment Download PDF

Info

Publication number
CN108093170B
CN108093170B (application CN201711242049.8A)
Authority
CN
China
Prior art keywords
target user
user
users
photos
eyes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711242049.8A
Other languages
Chinese (zh)
Other versions
CN108093170A (en)
Inventor
欧阳丹
谭国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711242049.8A priority Critical patent/CN108093170B/en
Publication of CN108093170A publication Critical patent/CN108093170A/en
Application granted granted Critical
Publication of CN108093170B publication Critical patent/CN108093170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a user photographing method, device and equipment. The method comprises: when a photographing instruction is obtained, detecting whether the current preview picture contains the facial images of a plurality of users; if the preview picture is detected to contain the facial images of a plurality of users, detecting whether the facial images of the plurality of users contain a pre-stored facial image of a target user; if the facial image of the target user is detected, shooting a plurality of frames of photos of the preview picture; and performing open-eye/closed-eye detection on the face of the target user in each frame of photo, and outputting a photo in which the target user's eyes are open. A photo in which the target user's eyes are open is thus selected and output automatically, so that the shot photo better satisfies the target user, image quality and photographing efficiency are improved, and the technical problem in the prior art that photographing efficiency is low because an unsatisfactory shot has to be retaken is solved.

Description

User photographing method, device and equipment
Technical Field
The present application relates to the field of photographing technologies, and in particular, to a method, an apparatus, and a device for photographing a user.
Background
At present, in order to meet users' needs in production and daily life, the functions of terminal devices are increasingly diversified. For example, a terminal device usually provides a photographing function to meet users' photographing needs. However, when a user takes a photo with the terminal device, the user's eyes may happen to be closed at the moment of capture, so that the user is dissatisfied with the shot photo.
In the related art, when a user is not satisfied with a shot photo, the user has to shoot again with the terminal device until a satisfactory photo is obtained, which is a cumbersome operation.
Summary of the Application
The application provides a user photographing method, device and equipment, aiming to solve the technical problem in the prior art that photographing efficiency is low because an unsatisfactory shot has to be retaken.
An embodiment of the application provides a user photographing method, which comprises the following steps: when a photographing instruction is acquired, detecting whether the current preview picture contains facial images of a plurality of users; if it is detected that the preview picture contains the facial images of a plurality of users, detecting whether the facial images of the plurality of users contain a pre-stored facial image of a target user; if the facial image of the target user is detected, shooting a plurality of frames of photos of the preview picture; and performing open-eye/closed-eye detection on the face of the target user in each frame of photo, and outputting a photo in which the target user's eyes are open.
Another embodiment of the application provides a user photographing apparatus, comprising: a first detection module, configured to detect whether the current preview picture contains facial images of a plurality of users when a photographing instruction is obtained, and further configured to detect whether the facial images of the plurality of users contain a pre-stored facial image of a target user when it is detected that the preview picture contains the facial images of the plurality of users; a photographing module, configured to shoot a plurality of frames of photos of the preview picture when the facial image of the target user is detected; and an output module, configured to perform open-eye/closed-eye detection on the face of the target user in each frame of photo and output a photo in which the target user's eyes are open.
Yet another embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the user photographing method according to the above embodiment of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the user photographing method according to the above embodiment of the present application.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
when a photographing instruction is acquired, whether the current preview picture contains the facial images of a plurality of users is detected; if so, whether the facial images of the plurality of users contain a pre-stored facial image of a target user is detected; if so, a plurality of frames of photos of the preview picture are shot, open-eye/closed-eye detection is performed on the face of the target user in each frame of photo, and a photo in which the target user's eyes are open is output. A photo in which the target user's eyes are open is thus selected and output automatically, so that the shot photo better satisfies the target user, image quality and photographing efficiency are improved, and the technical problem in the prior art that photographing efficiency is low because an unsatisfactory shot has to be retaken is solved.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a user photographing method according to one embodiment of the present application;
FIG. 2(a) is a scene diagram of a user photographing method according to the prior art;
FIG. 2(b) is a scene diagram of a user photographing method according to an embodiment of the present application;
FIG. 3 is a flow chart of a user photographing method according to another embodiment of the present application;
FIG. 4 is a flowchart of a user photographing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a user photographing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a user photographing apparatus according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a user photographing apparatus according to another embodiment of the present application; and
FIG. 8 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Based on the above analysis, when a user is not satisfied with a shot photo, the user shoots again through the terminal device, which prolongs the photographing time and degrades the user experience.
To solve this technical problem, the application provides a user photographing method: when a photographing instruction is received, a plurality of frames of photos can be shot and a high-quality image can be automatically screened out for output, so that the user's satisfaction with the shot photo is greatly improved and photographing efficiency is increased.
The following describes a user photographing method, device and equipment in the embodiments of the present application in detail with reference to the accompanying drawings.
The execution subject of the user photographing method and apparatus in the embodiments of the application may be a terminal device, where the terminal device may be a hardware device with a camera, such as a mobile phone, a tablet computer, a personal digital assistant or a wearable device. The wearable device may be a smart bracelet, a smart watch, smart glasses or the like.
Fig. 1 is a flowchart of a user photographing method according to an embodiment of the present application, as shown in fig. 1, the method includes:
step 101, when a photographing instruction is acquired, detecting whether a plurality of user face images are contained in a current preview picture.
The photographing instruction may be obtained in different ways in different application scenarios. For example, the photographing instruction is obtained when the user triggers a photographing control, when a photographing voice instruction issued by the user is received, or when the user's action is detected to match a preset photographing gesture.
It should be understood that in a portrait shooting scene, especially a scene in which multiple people are shot together, a user may be dissatisfied with the shot photo because of closed eyes and the like. Therefore, in the embodiment of the application, whether the current preview picture contains a plurality of user face images is detected, so as to further determine whether to shoot a plurality of frames of images.
It should be noted that, depending on the application scenario, different implementation manners may be adopted to detect whether the preview picture contains a plurality of user face images, as illustrated by the following examples:
As one possible implementation manner, whether the preview picture contains a plurality of user face images is detected based on geometric features of facial organs, where the geometric features include the shapes of the eyes, nose, mouth and the like of the user's face and the geometric relationships among them (such as the distances between them).
As another possible implementation manner, whether the preview picture contains a plurality of user face images is detected by an elastic graph matching method. A distance that is invariant to normal face deformation is defined in a two-dimensional space, and the user's face is represented by an attributed topological graph in which every vertex contains a feature vector recording information about the face near that vertex position. This method combines grayscale characteristics with geometric factors: the elastic graph is compared with the preview image to identify whether a user face image is contained, the image is allowed to deform elastically during comparison, and the influence of expression changes on recognition is thereby overcome well.
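As an illustrative sketch only (not part of the patent's disclosure), the multi-face check on a preview frame could look like the following, here using OpenCV's bundled Haar cascade face detector; the cascade file and the detection parameters are assumptions rather than the detector actually claimed:

```python
import cv2

# Sketch: does a preview frame contain face images of multiple users?
# The Haar cascade and its parameters are assumptions, not the patent's detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_multiple_faces(preview_bgr):
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) >= 2, faces   # (is a multi-user scene, detected face boxes)
```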
Step 102, if it is detected that the preview picture contains a plurality of user face images, detecting whether the plurality of user face images contain a pre-stored face image of a target user.
Step 103, if the face image of the target user is detected, shooting a plurality of frames of photos of the preview picture.
The pre-stored face image of the target user includes a face image of the owner user, face images of relatives and friends of the owner user, and the like. There may be one or more face images of target users, and the pre-stored face image may be a face image of the owner user set automatically by the system, or a face image of a related user set by the user according to personal needs, which is not limited herein.
Specifically, whether the user face images contain the pre-stored face image of the target user is detected; if so, in order to further obtain a photo with a good shooting effect for the target user, a plurality of frames of photos of the preview picture are shot.
It should be understood that, when the target user is the owner user, the photos in a database may be queried and the face image of the target user may be generated from the face image of the user that appears most frequently in those photos. Alternatively, self-portrait photos may first be screened out of the photos. For example, because the face occupies a large proportion of a self-portrait, photos with a large face proportion may be treated as self-portraits; or, because self-portraits are taken with the front camera, photos carrying a front-camera mark may be treated as self-portraits in some scenes. The face image of the target user is then generated from the face image of the user that appears most frequently in those self-portrait photos.
Of course, because face information can be collected by a front face recognition module, such as the front camera, while the owner user uses the terminal device in daily life, the face image collected most frequently by the front face recognition module during use can be obtained, and the face image of the target user can be generated through autonomous learning.
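As an illustrative sketch only (not part of the patent's disclosure), screening self-portraits by face-area proportion and picking the most frequently appearing face could be organized as below; detect_faces and embed_face are hypothetical placeholder callables, and the area ratio and similarity threshold are assumed values:

```python
import numpy as np

AREA_RATIO_SELFIE = 0.15   # assumed: a face covering >=15% of the frame marks a self-portrait
SAME_PERSON_SIM = 0.6      # assumed cosine-similarity threshold for "same person"

def most_frequent_face(photos, detect_faces, embed_face):
    """photos: list of BGR images; detect_faces/embed_face are placeholder callables."""
    clusters = []   # each: {"center": unit embedding, "count": occurrences, "sample": face crop}
    for img in photos:
        h, w = img.shape[:2]
        for (x, y, fw, fh) in detect_faces(img):
            if (fw * fh) / float(w * h) < AREA_RATIO_SELFIE:
                continue                      # face proportion too small: not treated as a self-portrait
            emb = embed_face(img[y:y + fh, x:x + fw])
            emb = emb / np.linalg.norm(emb)
            for c in clusters:                # greedy match against identities seen so far
                if float(np.dot(emb, c["center"])) > SAME_PERSON_SIM:
                    c["count"] += 1
                    break
            else:
                clusters.append({"center": emb, "count": 1,
                                 "sample": img[y:y + fh, x:x + fw]})
    return max(clusters, key=lambda c: c["count"])["sample"] if clusters else None
```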
It should be noted that, depending on the application scenario, different implementation manners may be adopted to detect whether the user face images contain the pre-stored face image of the target user. As one possible implementation manner, the face image of the target user is stored in advance, the user face images in the preview picture are matched against the pre-stored face image of the target user, and the preview picture is considered to contain the face image of the target user when the matching degree is higher than a certain value.
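Again purely as an illustrative sketch under the same assumptions (embed_face is a hypothetical placeholder, and the matching-degree threshold is an assumed value), the matching step could be:

```python
import numpy as np

MATCH_THRESHOLD = 0.6   # assumed "matching degree higher than a certain value"

def contains_target_user(face_crops, target_embedding, embed_face):
    """face_crops: face regions detected in the preview picture."""
    t = target_embedding / np.linalg.norm(target_embedding)
    for crop in face_crops:
        e = embed_face(crop)
        e = e / np.linalg.norm(e)
        if float(np.dot(e, t)) > MATCH_THRESHOLD:
            return True    # preview picture is considered to contain the target user
    return False
```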
In some application scenarios, the user may not care whether every user in the shot photo has open eyes. For example, if the shot photo happens to include a passer-by, the user does not care whether the passer-by's eyes are closed; that is, only when the preview picture being shot contains the target user does the user care whether the target user's eyes are closed.
For example, when the target user is the owner user and the user takes a group photo with several people through the terminal device, the owner user's eyes may be closed in some of the shot photos, and when selecting photos the owner user would preferentially choose those in which the owner user's eyes are not closed. In the embodiment of the application, a photo in which the owner user's eyes are open can be recognized and output directly.
In practical applications, the period during which the eyes are closed while a user blinks is short, so the time taken to shoot a plurality of frames of photos of the preview picture is usually longer than the user's blink duration, and the shot photos therefore include photos in which the user's eyes are open.
Step 104, performing open-eye/closed-eye detection on the face of the target user in each frame of photo, and outputting a photo in which the target user's eyes are open.
Specifically, open-eye/closed-eye detection is performed on the target user's face in the shot plurality of frames of photos, and a photo in which the target user's eyes are open is output directly, so that the user does not need to shoot again manually when a photo with the target user's eyes closed is captured.
The open-eye/closed-eye detection of the target user's face in the shot photos may be implemented in different ways in different application scenarios. As one possible implementation manner, the pupil of the target user may be located, and when the pupil of the target user can be located, the corresponding photo is considered to be a photo in which the target user's eyes are open.
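As an illustrative sketch only (not the detection algorithm claimed by the patent), an open-eye check can approximate pupil localization with OpenCV's eye cascade, which in practice fires only on open eyes; the cascade file and parameters are assumptions:

```python
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_open(photo_bgr, face_box):
    """Treat an eye detection in the upper half of the face region as 'eyes open'."""
    x, y, w, h = face_box
    roi = cv2.cvtColor(photo_bgr[y:y + h // 2, x:x + w], cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    return len(eyes) > 0
```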
In order to describe the beneficial effects of the user photographing method of the application more clearly, the implementation process of the photographing method is illustrated below with a specific application scenario as an example:
as an example:
in this example, the photographing scene is a multi-person group photograph.
In a daily-life group-photo scene, for example a group photo while travelling, as shown in fig. 2(a), if the target user's eyes are closed in the shot photo, the photo has to be retaken, which greatly increases the time spent on the group photo. If the user photographing method of the application is adopted, as shown in fig. 2(b), when a photographing instruction is obtained and the current preview picture is detected to contain the facial image of the target user, a plurality of frames of photos of the preview picture are shot, open-eye/closed-eye detection is performed on the target user's face in each frame of photo, and a photo in which the target user's eyes are open is output directly. Continuing to refer to fig. 2(b), every shot outputs a photo in which the target user's eyes are open, which greatly improves practicality.
Based on the above description, in order to further improve photographing quality, it may also be detected whether there are other factors affecting the visual effect of the eye region, such as whether a foreign object blocks the user's eyes or whether the glasses being worn reflect light.
In an embodiment of the application, whether the target user wears glasses is detected; if so, whether the glasses region of the target user in the photo to be output reflects light is detected, for example by detecting whether an over-bright light spot appears in the glasses region; and if reflection is detected, an image processing technique is applied to perform reflection correction on the glasses region of the target user, for example by reducing the brightness of the reflective area of the glasses.
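As an illustrative sketch only (not the correction actually disclosed), the over-bright light spots in the glasses region could be found by thresholding and damped in brightness; the glasses_box argument, the brightness threshold and the attenuation factor are assumptions:

```python
import cv2
import numpy as np

BRIGHT_SPOT_THRESH = 240   # assumed: pixels this bright are treated as reflections
ATTENUATION = 0.6          # assumed scale factor applied to reflective pixels

def correct_glasses_reflection(photo_bgr, glasses_box):
    x, y, w, h = glasses_box
    roi = photo_bgr[y:y + h, x:x + w]            # view into the photo
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, spot_mask = cv2.threshold(gray, BRIGHT_SPOT_THRESH, 255, cv2.THRESH_BINARY)
    spot_mask = cv2.dilate(spot_mask, np.ones((3, 3), np.uint8))   # cover spot edges
    darkened = (roi.astype(np.float32) * ATTENUATION).astype(np.uint8)
    roi[spot_mask > 0] = darkened[spot_mask > 0]  # reduce brightness only on the spots
    return photo_bgr
```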
In summary, with the user photographing method of the embodiment of the application, when a photographing instruction is obtained, whether the current preview picture contains the facial images of a plurality of users is detected; if so, whether the facial images of the plurality of users contain a pre-stored facial image of a target user is detected; if so, a plurality of frames of photos of the preview picture are shot, open-eye/closed-eye detection is performed on the face of the target user in each frame of photo, and a photo in which the target user's eyes are open is output. A photo in which the target user's eyes are open is thus selected and output automatically, so that the shot photo better satisfies the target user, image quality and photographing efficiency are improved, and the technical problem in the prior art that photographing efficiency is low because an unsatisfactory shot has to be retaken is solved.
Based on the above embodiment, on the premise of ensuring that a photo in which the target user's eyes are open is output, photos in which more of the other users also have their eyes open may be selected in order to further improve photo quality.
Specifically, fig. 3 is a flowchart of a user photographing method according to another embodiment of the present application, and as shown in fig. 3, the step 104 includes:
Step 201, performing open-eye/closed-eye detection on the face of the target user in each frame of photo, and acquiring candidate photo frames in which the target user's eyes are open.
Step 202, performing open-eye/closed-eye detection on the faces of the other users in each frame of the candidate photos, and calculating the proportion of users with open eyes to the total number of users.
Step 203, judging whether the proportion satisfies a preset threshold, and if so, outputting the corresponding photo.
Step 204, if it is judged that no proportion satisfies the preset threshold, sending eye-opening prompt information to the plurality of users, and re-shooting a plurality of frames of photos of the preview picture.
Specifically, after it is detected that the plurality of user face images contain the pre-stored face image of the target user, a plurality of frames of photos of the preview picture are shot.
Then, open-eye/closed-eye detection is performed on the target user's face in each frame of photo to obtain candidate photo frames in which the target user's eyes are open; open-eye/closed-eye detection is performed on the other users' faces in each frame of the candidate photos; the proportion of users with open eyes to the total number of users is calculated; whether the proportion satisfies a preset threshold is judged; and if so, the corresponding photo is output.
Of course, when no proportion satisfies the preset threshold, the photo corresponding to the maximum of the calculated proportions may be output, so as to guarantee that the output photo is of the highest quality among all the shot photos. Alternatively, eye-opening prompt information may be sent to the plurality of users, for example a voice prompt such as "please open your eyes" may be output or an indicator lamp of the terminal device may blink, and a plurality of frames of photos of the preview picture may be re-shot, so as to ensure that as many users as possible have open eyes in the finally output photo.
For example, assume the preset threshold is 50% and the target user is the owner user. After it is detected that the current preview picture contains the facial image of the owner user, whether the current preview picture contains a plurality of user face images is detected to identify whether the current photographing scene is a multi-person group-photo scene. When it is a group-photo scene, 10 frames of photos of the current preview picture are shot, open-eye/closed-eye detection is performed on the owner user's face in each of the 10 photos, and 4 candidate photo frames in which the owner user's eyes are open are obtained. Open-eye/closed-eye detection is then performed on the faces of the other 4 users in each of the candidate photos, the proportion of users with open eyes to the total number of users is calculated, giving proportions of 20%, 40% and 80%, and the photo corresponding to the 80% proportion is output.
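As an illustrative sketch only (not part of the patent's disclosure), selecting among the candidate photos by the open-eye proportion could be expressed as follows, with the 50% threshold taken from the example above:

```python
PRESET_THRESHOLD = 0.5   # assumed, matching the 50% example above

def select_photo(candidate_photos, open_eye_counts, total_users):
    """candidate_photos: frames where the target user's eyes are open;
    open_eye_counts[i]: users with open eyes in candidate_photos[i].
    Returns (photo, satisfied); when satisfied is False the caller may instead
    prompt the users to open their eyes and reshoot, as described above."""
    ratios = [c / float(total_users) for c in open_eye_counts]
    best = max(range(len(ratios)), key=ratios.__getitem__)
    return candidate_photos[best], ratios[best] >= PRESET_THRESHOLD
```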
In order to make the user photographing method of the embodiments of the application more clearly understood by those skilled in the art, its application to a multi-person group-photo scene is described below as an example:
in the scene, the target user is the owner user, and the face information of the owner user is stored in advance through the front face recognition module.
Specifically, as shown in fig. 4, when it is detected that the photographing control is clicked, user face image detection is performed to detect whether the current preview picture contains a plurality of user face images, that is, whether the current scene is a multi-person scene; if not, the normal photographing process can be carried out directly and the corresponding photo is output.
If it is a multi-person scene, the detected face images of the plurality of users are matched against the owner face information pre-stored through the front face recognition module to judge whether the owner user is in the current scene. If not, the normal photographing process is carried out directly; if so, in order to output a photo that satisfies the owner user, a plurality of frames of photos of the preview picture are shot, open-eye/closed-eye detection is performed on the owner user's face in each frame of photo, and a photo in which the owner user's eyes are open is selected and output.
In summary, with the user photographing method of the embodiments of the application, while it is ensured that the target user's eyes are open in the output photo, the proportion of other users with open eyes is also made as high as possible, thereby ensuring the visual effect of the whole photo.
In order to implement the foregoing embodiments, the application further provides a user photographing apparatus. Fig. 5 is a schematic structural diagram of the user photographing apparatus according to an embodiment of the application; as shown in fig. 5, the user photographing apparatus includes a first detection module 100, a photographing module 200 and an output module 300.
The first detection module 100 is configured to detect whether a current preview screen includes a plurality of user face images when a photographing instruction is obtained.
The first detecting module 100 is further configured to detect whether the face images of the multiple users include a pre-stored face image of the target user when it is detected that the preview screen includes the face images of the multiple users.
In different application scenarios, whether the facial images of the plurality of users contain the pre-stored facial image of the target user may be detected in different ways. In an embodiment of the application, as shown in fig. 6, on the basis of fig. 5, the user photographing apparatus further includes a query module 400 and a generation module 500.
The query module 400 is configured to query the photos in the database.
A generating module 500, configured to generate a face image of the target user according to the face image of the user with the highest frequency of appearance in the photograph.
And the shooting module 200 is used for shooting a plurality of frames of photos of the preview picture when the target user face image is detected to be contained.
And an output module 300, configured to perform eye opening and closing detection on the face of the target user in each frame of photo, and output the photo with the target user's eyes open.
It should be noted that the foregoing description of the method embodiments is also applicable to the apparatus in the embodiments of the present application, and the implementation principles thereof are similar and will not be described herein again.
The division of each module in the user photographing apparatus is only for illustration, and in other embodiments, the user photographing apparatus may be divided into different modules as needed to complete all or part of the functions of the user photographing apparatus.
In summary, with the user photographing apparatus of the embodiment of the application, when a photographing instruction is obtained, whether the current preview picture contains the facial images of a plurality of users is detected; if so, whether the facial images of the plurality of users contain a pre-stored facial image of a target user is detected; if so, a plurality of frames of photos of the preview picture are shot, open-eye/closed-eye detection is performed on the face of the target user in each frame of photo, and a photo in which the target user's eyes are open is output. A photo in which the target user's eyes are open is thus selected and output automatically, so that the shot photo better satisfies the target user, image quality and photographing efficiency are improved, and the technical problem in the prior art that photographing efficiency is low because an unsatisfactory shot has to be retaken is solved.
Fig. 7 is a schematic structural diagram of a user photographing apparatus according to another embodiment of the present application, and as shown in fig. 7, on the basis of fig. 5, an output module 300 includes an obtaining unit 310, a calculating unit 320, and an output unit 330.
The acquiring unit 310 is configured to perform open-eye/closed-eye detection on the face of the target user in each frame of photo and acquire candidate photo frames in which the target user's eyes are open.
The calculating unit 320 is configured to perform open-eye/closed-eye detection on the faces of the other users in each frame of the candidate photos and calculate the proportion of users with open eyes to the total number of users.
The output unit 330 is configured to determine whether the ratio satisfies a preset threshold, and output a corresponding photo when it is determined that the ratio satisfies the preset threshold.
It should be noted that the foregoing description of the method embodiments is also applicable to the apparatus in the embodiments of the present application, and the implementation principles thereof are similar and will not be described herein again.
The division of each module in the user photographing apparatus is only for illustration, and in other embodiments, the user photographing apparatus may be divided into different modules as needed to complete all or part of the functions of the user photographing apparatus.
In summary, the user photographing apparatus of the embodiment of the application ensures that, when the target user's eyes are open in the output photo, the proportion of other users with open eyes is also high, thereby ensuring the visual effect of the whole photo.
To implement the above embodiments, the present application also proposes a computer device including therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes an ISP processor 840 and control logic 850. Image data captured by imaging device 810 is first processed by ISP processor 840, and ISP processor 840 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of imaging device 810. Imaging device 810 may include a camera having one or more lenses 812 and an image sensor 814. Image sensor 814 may include an array of color filters (e.g., Bayer filters), and image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 814 and provide a set of raw image data that may be processed by ISP processor 840. The sensor 820 may provide raw image data to the ISP processor 840 based on the sensor 820 interface type. The sensor 820 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 840 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 840 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 840 may also receive pixel data from image memory 830. For example, raw pixel data is sent from the sensor 820 interface to the image memory 830, and the raw pixel data in the image memory 830 is then provided to the ISP processor 840 for processing. The image Memory 830 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 820 interface or from the image memory 830, the ISP processor 840 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 830 for additional processing before being displayed. ISP processor 840 receives processed data from image memory 830 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 870 for viewing by a user and/or for further processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 840 may also be sent to image memory 830, and display 870 may read image data from image memory 830. In one embodiment, image memory 830 may be configured to implement one or more frame buffers. In addition, the output of ISP processor 840 may be transmitted to encoder/decoder 860 for encoding/decoding of the image data. The encoded image data may be saved and decompressed before being displayed on the display 870. The encoder/decoder 860 may be implemented by a CPU, a GPU or a coprocessor.
The statistics determined by ISP processor 840 may be sent to the control logic 850 unit. For example, the statistical data may include image sensor 814 statistics such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, lens 812 shading correction, and the like. Control logic 850 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware), and the routines may determine control parameters of the imaging device 810 and control parameters of the ISP processor 840 based on the received statistical data. For example, the control parameters may include sensor 820 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 812 shading correction parameters.
When the user photographing method is implemented with the image processing technique of fig. 8, the following steps are performed:
when a photographing instruction is acquired, detecting whether the current preview picture contains facial images of a plurality of users;
if it is detected that the preview picture contains the facial images of a plurality of users, detecting whether the facial images of the plurality of users contain a pre-stored facial image of a target user;
if the facial image of the target user is detected, shooting a plurality of frames of photos of the preview picture;
and performing open-eye/closed-eye detection on the face of the target user in each frame of photo, and outputting a photo in which the target user's eyes are open.
To achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium, in which instructions, when executed by a processor, enable execution of the user photographing method as described in the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and alternative implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (8)

1. A user photographing method is characterized by comprising the following steps:
when a photographing instruction is acquired, detecting whether the current preview picture contains facial images of a plurality of users;
inquiring photos in a database;
screening self-portrait photos from the photos according to the face proportion in the photos, and identifying the face image of the user that appears most frequently in the self-portrait photos as the face image of the target user;
if it is detected that the preview picture contains the face images of a plurality of users, detecting whether the face images of the plurality of users contain the face image of the target user;
if the face image containing the target user is detected, shooting a plurality of pictures of the preview picture;
and carrying out eye opening and closing detection on the face of the target user in each frame of photo, and outputting the photo with the eyes opened by the target user.
2. The method of claim 1, wherein the performing open-eye and closed-eye detection on the face of the target user in each frame of photos and outputting the photos of the target user with their eyes open comprises:
carrying out eye opening and closing detection on the face of the target user in each frame of photo, and acquiring a candidate photo frame of the target user for opening eyes;
carrying out eye opening and closing detection on the faces of other users in each frame of candidate photos, and calculating the proportion of the eye opening users in the total number of the users;
and judging whether the proportion meets a preset threshold value, and if the proportion meets the preset threshold value, outputting a corresponding photo.
3. The method of claim 2, wherein after said determining whether said ratio satisfies a predetermined threshold, further comprising:
and if the proportion is judged to be not satisfied with a preset threshold value, sending eye-opening prompting information to the users, and re-shooting the multi-frame photos on the preview picture.
4. The method of any of claims 1-3, further comprising, prior to said outputting the photograph of the target user's open eyes:
detecting whether the target user wears glasses;
if the target user is detected to wear the glasses, detecting whether the glasses part of the target user in the photo to be output reflects light;
and if the reflection is detected, performing reflection correction processing on the glasses part of the target user by applying an image processing technology.
5. A user photographing apparatus, comprising:
the first detection module is used for detecting whether the current preview picture contains facial images of a plurality of users or not when a photographing instruction is obtained;
the query module is used for querying the photos in the database;
the recognition module is used for screening self-portrait photos from the photos according to the face proportion in the photos, and identifying the face image of the user that appears most frequently in the self-portrait photos as the face image of the target user;
the first detection module is further used for detecting whether the facial images of the plurality of users contain the facial image of the target user when the preview picture is detected to contain the facial images of the plurality of users;
the shooting module is used for shooting a plurality of pictures of the preview picture when the face image containing the target user is detected;
and the output module is used for detecting the eyes of the target user in each frame of photo and outputting the photo of the target user with the eyes open.
6. The apparatus of claim 5, wherein the output module comprises:
the acquisition unit is used for detecting the eyes of the target user in each frame of photo and acquiring a candidate photo frame of the target user with the eyes open;
the calculating unit is used for detecting the eyes of other users in each frame of candidate photos and calculating the proportion of the eyes-opening users in the total number of the users;
and the output unit is used for judging whether the proportion meets a preset threshold value or not and outputting a corresponding photo when the proportion meets the preset threshold value.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of photographing a user as claimed in any one of claims 1 to 4 when executing the program.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for photographing a user according to any one of claims 1 to 4.
CN201711242049.8A 2017-11-30 2017-11-30 User photographing method, device and equipment Active CN108093170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711242049.8A CN108093170B (en) 2017-11-30 2017-11-30 User photographing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711242049.8A CN108093170B (en) 2017-11-30 2017-11-30 User photographing method, device and equipment

Publications (2)

Publication Number Publication Date
CN108093170A CN108093170A (en) 2018-05-29
CN108093170B (en) 2021-03-16

Family

ID=62172402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711242049.8A Active CN108093170B (en) 2017-11-30 2017-11-30 User photographing method, device and equipment

Country Status (1)

Country Link
CN (1) CN108093170B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740472A (en) * 2018-12-25 2019-05-10 武汉纺织大学 A kind of photographic method of anti-eye closing
CN109922257A (en) * 2019-02-22 2019-06-21 珠海格力电器股份有限公司 Control method, device, storage medium, processor and the electronic equipment of shooting
JP7418104B2 (en) 2019-08-30 2024-01-19 キヤノン株式会社 Image processing device and method of controlling the image processing device
CN113542580B (en) * 2020-04-22 2022-10-28 华为技术有限公司 Method and device for removing light spots of glasses and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005039365A (en) * 2003-07-16 2005-02-10 Fuji Photo Film Co Ltd Digital camera and control method thereof
CN101420527A (en) * 2007-10-25 2009-04-29 株式会社尼康 Camera and image recording method
CN102622740A (en) * 2011-01-28 2012-08-01 鸿富锦精密工业(深圳)有限公司 Anti-eye-closure portrait shooting system and method
CN105516588A (en) * 2015-12-07 2016-04-20 小米科技有限责任公司 Photographic processing method and device
CN105635567A (en) * 2015-12-24 2016-06-01 小米科技有限责任公司 Shooting method and device
CN106657759A (en) * 2016-09-27 2017-05-10 奇酷互联网络科技(深圳)有限公司 Anti-eye closing photographing method and anti-eye closing photographing device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5388774B2 (en) * 2009-09-24 2014-01-15 キヤノン株式会社 Image processing apparatus and image processing apparatus control method
US8503722B2 (en) * 2010-04-01 2013-08-06 Broadcom Corporation Method and system for determining how to handle processing of an image based on motion

Also Published As

Publication number Publication date
CN108093170A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN107730445B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN107948519B (en) Image processing method, device and equipment
CN107886484B (en) Beautifying method, beautifying device, computer-readable storage medium and electronic equipment
CN107734253B (en) Image processing method, image processing device, mobile terminal and computer-readable storage medium
CN108810413B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2019237992A1 (en) Photographing method and device, terminal and computer readable storage medium
CN107766831B (en) Image processing method, image processing device, mobile terminal and computer-readable storage medium
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN110290289B (en) Image noise reduction method and device, electronic equipment and storage medium
JP4196714B2 (en) Digital camera
JP4254873B2 (en) Image processing apparatus, image processing method, imaging apparatus, and computer program
CN108111749B (en) Image processing method and device
CN108093170B (en) User photographing method, device and equipment
CN107862653B (en) Image display method, image display device, storage medium and electronic equipment
US20050200722A1 (en) Image capturing apparatus, image capturing method, and machine readable medium storing thereon image capturing program
US20090002514A1 (en) Digital Image Processing Using Face Detection Information
CN110166707B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108805198B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107820017B (en) Image shooting method and device, computer readable storage medium and electronic equipment
CN108156369B (en) Image processing method and device
CN107800965B (en) Image processing method, device, computer readable storage medium and computer equipment
CN107862658B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108052883B (en) User photographing method, device and equipment
CN107844764B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant