WO2022027432A1 - Photographing method, photographing apparatus and terminal device - Google Patents

Photographing method, photographing apparatus and terminal device

Info

Publication number
WO2022027432A1
WO2022027432A1 (PCT/CN2020/107396)
Authority
WO
WIPO (PCT)
Prior art keywords
weight
human eye
area
region
face
Prior art date
Application number
PCT/CN2020/107396
Other languages
English (en)
Chinese (zh)
Inventor
邓宝华
袁田
Original Assignee
深圳市锐明技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市锐明技术股份有限公司 filed Critical 深圳市锐明技术股份有限公司
Priority to CN202080001474.1A priority Critical patent/CN112055961B/zh
Priority to PCT/CN2020/107396 priority patent/WO2022027432A1/fr
Publication of WO2022027432A1 publication Critical patent/WO2022027432A1/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals

Definitions

  • the present application belongs to the field of photographing technologies, and in particular, relates to a photographing method, a photographing apparatus, a terminal device, and a computer-readable storage medium.
  • In application scenarios such as in-vehicle security, cameras are often deployed to capture human behavior. For example, a camera can be installed in the cab of a vehicle to capture the driver's driving state and determine whether the driver is driving while fatigued. It is therefore necessary to capture detailed information about the human eyes when shooting.
  • the embodiments of the present application provide a photographing method, a photographing apparatus, a terminal device, and a computer-readable storage medium, which can improve the photographing quality of human eye details when photographing a human face.
  • an embodiment of the present application provides a photographing method, including:
  • after acquiring a face image, determining a human eye weight area and a first face weight area in the face image according to feature point information in the face image;
  • determining a first weight of the human eye weight area and a second weight of the first face weight area according to the size of the human eye weight area and the size of the first face weight area;
  • determining an exposure parameter according to the first weight, the second weight, first brightness information of the human eye weight area, and second brightness information of the first face weight area; and
  • photographing the target user according to the exposure parameter.
  • an embodiment of the present application provides a photographing device, including:
  • a first determination module configured to determine a human eye weight area and a first face weight area in the face image according to the feature point information in the face image after acquiring the face image;
  • a second determination module configured to determine a first weight of the human eye weight region and a second weight of the first face weight region according to the size of the human eye weight region and the size of the first face weight region;
  • a third determining module configured to determine an exposure parameter according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area;
  • the photographing module is used for photographing the target user according to the exposure parameter.
  • an embodiment of the present application provides a terminal device, including a memory, a processor, a display, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the photographing method described in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the shooting method described in the first aspect is implemented.
  • an embodiment of the present application provides a computer program product that, when the computer program product runs on a terminal device, enables the terminal device to execute the above-mentioned shooting method in the above-mentioned first aspect.
  • In the embodiments of the present application, after the face image is acquired, the human eye weight area and the first face weight area are determined in the face image according to the feature point information in the face image; that is, the face image is partitioned according to the feature point information. The first weight of the human eye weight area and the second weight of the first face weight area are then determined according to the size of the human eye weight area and the size of the first face weight area, and the exposure parameter is determined according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area.
  • Because the first weight and the second weight reflect the importance of the corresponding weight areas in the face image, both the importance and the brightness of the different weight areas of the face are taken into account when the exposure parameter is determined. This avoids the situation in which, because the human eyes occupy only a small proportion of the face, the exposure parameter is determined mainly by other features in the face image and the human eye details are captured poorly. Photographing the target user according to this exposure parameter therefore improves the capture quality of human eye details.
  • FIG. 1 is a schematic flowchart of a photographing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an exemplary division manner of each weight region in a face image provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of step S102 provided by an embodiment of the present application.
  • FIG. 4 is an exemplary schematic diagram of determining the edge of the human eye weight region provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a photographing device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 1 shows a flowchart of a photographing method provided by an embodiment of the present application, and the photographing method may be applied to a terminal device.
  • The photographing method provided in the embodiments of the present application can be applied to terminal devices such as in-vehicle devices, servers, desktop computers, mobile phones, tablet computers, wearable devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of the present application do not limit the specific type of the terminal device.
  • the driver's face image can be captured by a camera installed on the car, and whether the driver is in a fatigued driving state can be determined through the details of the human eye and other facial information in the face image.
  • However, the exposure brightness is usually adjusted continuously according to factors such as skin reflectivity, so the quality of the captured human eye details cannot be guaranteed; in addition, wearing glasses, masks, and the like also interferes with capturing human eye details; and in scenes such as backlighting, the human face is often dark, so the human eye details cannot be effectively identified.
  • In the embodiments of the present application, the first weight of the human eye weight region and the second weight of the first face weight region may be determined according to the size of the human eye weight region and the size of the first face weight region in the face image. The exposure parameter is then determined according to the first weight, the second weight, the first brightness information of the human eye weight region, and the second brightness information of the first face weight region. Because the first weight and the second weight reflect the importance of the corresponding weight regions in the face image, both the importance and the brightness of the different weight regions are taken into account when determining the exposure parameter, so that the capture quality of human eye details is improved when the target user is photographed according to the exposure parameter.
  • the shooting method may include:
  • Step S101: after acquiring the face image, determine the human eye weight area and the first face weight area in the face image according to the feature point information in the face image.
  • the face image is an image including a human face.
  • the face image may be captured by a camera in the terminal device implementing the embodiments of the present application; in addition, the face image may also be captured by a camera communicatively connected to the terminal device, and transmitted to in the terminal device; alternatively, the face image may also be a local image pre-stored in the terminal device or the like.
  • the specific acquisition method of the face image is not limited herein.
  • the feature point information may include at least one of: information on feature points of the facial features of the face, information on feature points on the edge of the face, and information on feature points of objects associated with the face.
  • the objects associated with the face may include items such as glasses, masks, etc., which are worn by the face in the face image, as well as other shielding objects for the face, etc.
  • For example, the feature point information may include information such as the coordinates of feature points used to identify the outline of the glasses and the coordinates of feature points used to identify the outline of the mask.
  • the human eye weight area may include at least part of the human eye area
  • the first face weight area may include at least part of the face area
  • the first face weight area may include at least one of at least part of the cheek area, at least part of the forehead area, at least part of the hair area, and the like.
  • the first face weight region may not overlap with the human eye weight region.
  • In some embodiments, the region where the human eye is located may be determined as the human eye weight region according to the edge feature points of the human eye; alternatively, the region where the human eye is located may be expanded according to its size and the relative positional relationship between the target camera and the face (for example, whether the camera shoots the front or the side of the face, or is inclined at a certain angle relative to the front of the face), and the expanded region may be used as the human eye weight region.
  • the manner of determining the first face weight region may be similar to that of the human eye weight region, or may be determined in other manners.
  • the first face weight region may be determined according to information such as edges of the human eye weight region and corresponding facial feature point information.
  • the first face weight area may be a cheek weight area.
  • In this case, the lower edge feature point, the left edge feature point, and the right edge feature point of the cheek may be obtained, and the edges of the cheek weight region may be determined according to these feature points and the lower edge of the human eye weight region.
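  • To make the partitioning step more concrete, the following Python sketch shows one way the two regions could be derived from facial landmarks. It is only an illustrative assumption: the landmark dictionary keys, the expansion ratio, and the box representation are not taken from the patent.

```python
# A minimal sketch of step S101, assuming facial landmarks are already available
# as (x, y) pixel coordinates; landmark names and the expansion margin are
# illustrative, not from the patent.
def determine_weight_regions(landmarks, expand_ratio=0.2):
    """Return (eye_region, cheek_region) as (x1, y1, x2, y2) boxes."""
    eye_pts = landmarks["eye_edge_points"]          # edge feature points of the eyes
    xs, ys = zip(*eye_pts)
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    # Expand the raw eye box a little so the weight region keeps full eye detail.
    eye_region = (min(xs) - int(w * expand_ratio), min(ys) - int(h * expand_ratio),
                  max(xs) + int(w * expand_ratio), max(ys) + int(h * expand_ratio))

    cheek_pts = landmarks["cheek_edge_points"]      # lower/left/right cheek edge points
    cxs, cys = zip(*cheek_pts)
    # The cheek (first face) weight region starts at the lower edge of the
    # eye weight region, so the two regions do not overlap.
    cheek_region = (min(cxs), eye_region[3], max(cxs), max(cys))
    return eye_region, cheek_region
```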
  • Step S102 Determine a first weight of the human eye weight area and a second weight of the first face weight area according to the size of the eye weight area and the size of the first face weight area.
  • the first weight may be used to reflect the importance of the human eye weight area
  • the second weight may be used to reflect the importance of the first face weight area
  • the first weight and the second weight may be dynamically determined according to the size of the eye weight region and the size of the first face weight region, rather than being preset.
  • For example, the ratio between the first weight and the second weight may be determined according to the ratio between the area of the human eye weight area and the area of the first face weight area, so that when the human eye weight area is smaller than the first face weight area, the first weight can be appropriately increased. This avoids the situation in which, because the human eye weight area occupies only a small proportion of the image, the subsequently determined exposure parameters and other shooting parameters fail to capture the details of the human eye.
  • In some embodiments, the determining of the first weight of the human eye weight region and the second weight of the first face weight region according to the size of the human eye weight region and the size of the first face weight region includes:
  • the ratio is greater than a preset ratio, wherein the first area is the area of the human eye weight region, and the second area is the area of the first face weight region.
  • the specific value of the preset ratio may be determined according to a specific application scenario, a preset experiment, or the like.
  • the preset ratio may be 3.
  • In this way, the first weight and the second weight can be adjusted according to the areas of the different weight regions in the face image, which avoids the problem that, because the human eyes occupy only a small proportion of the face, the exposure parameters during shooting are determined mainly by other features in the face image and the human eye details are captured poorly.
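  • The exact weighting formula is not spelled out in this text, so the Python sketch below only illustrates the stated idea that the eye weight is raised when the eye area is much smaller than the face area; the proportional boost rule and the base weights are assumptions, and only the default ratio of 3 comes from the text above.

```python
# A hedged sketch of step S102: the boost rule is an assumption built from the
# stated goal of raising the eye weight when its area is comparatively small.
def determine_weights(eye_area, face_area, preset_ratio=3.0,
                      base_eye_weight=1.0, base_face_weight=1.0):
    ratio = face_area / max(eye_area, 1)
    if ratio > preset_ratio:
        # Eye region is comparatively small: raise its weight so that the
        # exposure decision is not dominated by the larger face region.
        first_weight = base_eye_weight * (ratio / preset_ratio)
    else:
        first_weight = base_eye_weight
    second_weight = base_face_weight
    return first_weight, second_weight
```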
  • Step S103 Determine an exposure parameter according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area.
  • the first brightness information of the human eye weight region may include an average brightness value, an overall brightness value, and/or the brightness value of each pixel of the human eye weight region, and the like.
  • the human eye weight region may also include multiple sub-regions, and the first brightness information may include brightness information of each of the sub-regions.
  • a corresponding weighting operation may be performed on the first weight, the second weight, the average brightness value of the human eye weight region, and the average brightness value of the first face weight region to obtain An overall brightness value from which to determine exposure parameters.
  • The exposure parameters may include exposure time, exposure gain, and the like. If the overall brightness value is small, the image brightness can be increased by increasing the exposure time and/or increasing the exposure gain; if the overall brightness value is large, the image brightness can be reduced by decreasing the exposure time and/or decreasing the exposure gain.
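  • The following sketch illustrates the adjustment direction described above; the target brightness value and the 10% step factor are illustrative assumptions rather than values from the patent.

```python
# A minimal sketch of nudging the exposure parameters from the weighted overall
# brightness; target_brightness and step are assumed, illustrative values.
def adjust_exposure(overall_brightness, exposure_time, exposure_gain,
                    target_brightness=128.0, step=0.1):
    if overall_brightness < target_brightness:
        # Scene too dark for eye detail: lengthen exposure and/or raise gain.
        exposure_time *= (1.0 + step)
        exposure_gain *= (1.0 + step)
    elif overall_brightness > target_brightness:
        # Scene too bright: shorten exposure and/or lower gain to avoid clipping.
        exposure_time *= (1.0 - step)
        exposure_gain *= (1.0 - step)
    return exposure_time, exposure_gain
```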
  • In some embodiments, the determining of the exposure parameter according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area includes:
  • determining comprehensive brightness information according to the first weight, the second weight, the first brightness information, and the second brightness information; and
  • determining the exposure parameter according to the comprehensive brightness information.
  • The comprehensive brightness information gives full consideration to the human eye weight area while also taking into account areas other than the human eye weight area, such as the first face weight area. The brightness evaluated in the comprehensive brightness information therefore better reflects an appropriate range for the human eye brightness in the face image, and when it is used to determine the exposure parameter, the exposure parameter can ensure that parts such as the human eyes are neither overexposed nor underexposed when the face is photographed, so that human eye details are not lost.
  • In some embodiments, the first face weight region is located below the human eye weight region, and the determining of the comprehensive brightness information according to the first weight, the second weight, the first brightness information, and the second brightness information includes:
  • determining the height of the human eye according to the human eye feature point information in the feature point information;
  • if the height of the human eye is greater than a second preset height, using the area directly above the human eye weight area whose height is the first height threshold as the second face weight area, and using the area directly above the second face weight area whose height is the second height threshold as the third face weight area;
  • determining, according to the first weight, the third weight of the second face weight area and the fourth weight corresponding to the third face weight area; and
  • determining the comprehensive brightness information according to the first weight, the second weight, the first brightness information, the second brightness information, the third weight, the fourth weight, the third brightness information of the second face weight area, and the fourth brightness information of the third face weight area.
  • the information on the feature points of the human eye may include the coordinates of the upper edge feature point of the human eye and the coordinates of the lower edge feature point of the human eye.
  • the height of the human eye can be calculated according to the coordinates of the upper edge feature point of the human eye and the coordinates of the lower edge feature point of the human eye.
  • In general, the width of the human eye weight region and the width of the first face weight region may be relatively close. The height of the human eye therefore reflects the size of the region where the human eye is located in the face image and has a larger impact on the size of the human eye weight region.
  • If the height of the human eye is greater than the second preset height, it can be considered that the region where the human eye is located in the face image is relatively large, contains more human eye details, and corresponds to a larger human eye weight region. In this case, the second face weight area and the third face weight area in the face image can be further determined, so that, on the basis of obtaining richer human eye information, the influence on image brightness of facial factors in areas such as the forehead, hair, cheeks, or a mask can also be taken into account.
  • the first height threshold and the second height threshold may be determined according to actual scenarios.
  • In some embodiments, the face image is divided into a plurality of first regions; in this case, the first height threshold and the second height threshold may be determined according to the height of a first region. For example, the first height threshold may be the height of one first region, and the second height threshold may also be the height of one first region, which facilitates the subsequent determination of the first regions belonging to the second face weight area and the first regions belonging to the third face weight area.
  • The third weight of the second face weight area and the fourth weight corresponding to the third face weight area may be determined according to the first weight; therefore, the third weight and the fourth weight may also be adjusted dynamically with the first weight, so that the importance of the human eye weight area is preserved while the second face weight area and the third face weight area are still taken into account.
  • the first face weight area is located below the human eye weight area and can be considered as the cheek weight area
  • the second face weight area is located directly above the human eye weight area and can be regarded as a forehead weight area;
  • the third face weight area is located directly above the second face weight area and can be regarded as a hair weight area.
  • The human eye weight area, the first face weight area, the second face weight area, and the third face weight area roughly cover the main feature areas of the human face, and the weight of each area can be determined dynamically so that the importance of each weight area is reflected separately.
  • the third weight may be 0.5 times the first weight
  • the fourth weight may be 0.25 times the first weight, so that the importance of the human eye weight region is ensured while the second face weight area and the third face weight area are still taken into consideration.
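  • As an illustration of how the four region brightnesses and weights could be combined, the sketch below uses a normalized weighted mean together with the 0.5x and 0.25x multipliers mentioned above; using a weighted mean (rather than some other aggregation) is an assumption.

```python
# A sketch of the comprehensive brightness when the eye height exceeds the
# second preset height; the normalized weighted mean is an assumed choice.
def comprehensive_brightness(eye_lum, cheek_lum, forehead_lum, hair_lum, w1, w2):
    w3 = 0.5 * w1    # third weight, for the second face (forehead) weight area
    w4 = 0.25 * w1   # fourth weight, for the third face (hair) weight area
    weights = (w1, w2, w3, w4)
    lums = (eye_lum, cheek_lum, forehead_lum, hair_lum)
    return sum(w * l for w, l in zip(weights, lums)) / sum(weights)
```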
  • the determining comprehensive luminance information according to the first weight, the second weight, the first luminance information and the second luminance information includes:
  • the comprehensive brightness information is determined according to the first weight, the second weight, the first brightness information, the second brightness information, the fifth weight and the third brightness information.
  • In this way, the second face weight region in the face image and the fifth weight of the second face weight region can be further determined, reducing the influence on the image brightness of facial factors other than the human eyes.
  • the fifth weight may be 0.5 times the first weight.
  • Step S104: the target user is photographed according to the exposure parameter.
  • the shooting manner of shooting the target user may be associated with the shooting manner of the face image.
  • the target user may be a driver in a cab.
  • In this embodiment, after the face image is acquired, the human eye weight area and the first face weight area are determined in the face image according to the feature point information in the face image; that is, the face image is partitioned according to the feature point information. The first weight of the human eye weight area and the second weight of the first face weight area are then determined according to the size of the human eye weight area and the size of the first face weight area, and the exposure parameter is determined according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area.
  • Because the first weight and the second weight reflect the importance of the corresponding weight areas, the importance and the brightness of the different weight areas of the face are both taken into account when determining the exposure parameter. This avoids the exposure parameters being determined mainly by other features of the face image because the human eyes occupy only a small proportion of the face, which would make the captured human eye details poor; photographing the target user according to the exposure parameter therefore improves the capture quality of human eye details.
  • the feature point information includes human eye feature point information and first facial feature point information
  • Step S301 after acquiring the face image, divide the face image into a plurality of first regions;
  • Step S302 according to the human eye feature point information, determine the first area belonging to the human eye weight area
  • Step S303 Determine a first area belonging to the first face weight area according to the edge of the human eye weight area and the first facial feature point information.
  • the first facial feature point information is associated with the first facial weight region.
  • For example, if the first face weight area is the cheek weight area, the first facial feature point information includes cheek feature point information, such as the coordinates of cheek edge feature points; if the first face weight area includes areas of the face other than the human eye weight area, the first facial feature point information may include face feature point information, such as the coordinates of face edge feature points.
  • The number and size of the first regions may be determined according to the size of the face image, the size of the face region, and the like. For example, if the resolution of the face image is 1920*1080, the number of first regions may be K*P, and the resolution of each first region may be (1920/K)*(1080/P).
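  • A minimal sketch of this grid division, assuming the image is available as a NumPy array and that K divides the width and P divides the height evenly:

```python
import numpy as np

# Divide the face image into K*P first regions; for a 1920x1080 image each
# region is (1920/K) x (1080/P) pixels. K and P are free parameters here.
def divide_into_first_regions(image, K, P):
    """image: HxW (grayscale) numpy array; returns a list of K*P sub-arrays."""
    h, w = image.shape[:2]
    rh, rw = h // P, w // K           # height/width of one first region
    regions = []
    for row in range(P):
        for col in range(K):
            regions.append(image[row * rh:(row + 1) * rh,
                                 col * rw:(col + 1) * rw])
    return regions
```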
  • The human eye feature point information may include human eye edge feature point information, human eye center feature point information, and the like. The first regions belonging to the human eye weight region can be determined according to the human eye feature point information; of course, in some applications, they may also be determined based on both the human eye feature point information and information such as face edge feature point information.
  • the human eye feature point information includes information of human eye center feature points and human eye edge feature points
  • Determining the first region belonging to the human eye weight region according to the human eye feature point information includes:
  • the target camera is a camera that captures the face image
  • if the central feature point of the human eye is located on the edge line of any first region, determining the edge of the human eye weight region according to the distance threshold and the first distance between each preset edge feature point and the central feature point of the human eye, wherein the preset edge feature points include at least one human eye edge feature point.
  • the relative positional relationship between the target camera and the human face may affect the presentation effect of the human face part in the corresponding human face image.
  • the target camera is installed on the left side of the subject such as the driver
  • the face image obtained by the target camera shooting the subject has not been mirrored
  • the face in the face image will often be close to the right side of the face image, and the facial features on the right side of the face image are also clearer;
  • conversely, if the face image obtained when the target camera captures the subject has been mirrored,
  • the face in the face image tends to be close to the left side of the face image.
  • If the target camera is installed on the right side of the person to be photographed, such as the driver, the situation is reversed.
  • Therefore, when determining the first regions belonging to the human eye weight region, the human eye weight region can be determined by judging the locations of the human eye edge feature points, so that most or even all of the left-eye features of the photographed subject are contained in the human eye weight region.
  • The height of the human eye may also roughly reflect the size of the area occupied by the human eye in the face image and the amount of human eye detail it contains. For example, if the height of the human eye is small, it can be considered that the area where the human eye is located in the face image is small, contains less human eye detail, and corresponds to a small human eye weight area; in that case, the distance threshold can be set so that the human eye weight region includes as many human eye features as possible. Therefore, the distance threshold can be determined according to the relative positional relationship between the target camera and the face and the height of the human eye.
  • the distance threshold may include an upper distance threshold, a lower distance threshold, a left distance threshold and a right distance threshold.
  • the human eye edge feature points may include at least one of a human eye upper edge feature point, a human eye lower edge feature point, a human eye left edge feature point, and a human eye right edge feature point.
  • the preset edge feature points are used to determine the edge of the human eye weight region. Wherein, for example, if the upper edge and the lower edge of the human eye weight region are determined according to the upper edge and the lower edge of the human eye, the preset edge feature points may include the upper edge feature point of the human eye and the lower edge of the human eye. edge feature points; in addition, if the left edge and the right edge of the eye weight region are determined according to the cheek edge of the human face, the preset edge feature points may include the left edge feature point of the cheek and the right edge feature point of the cheek.
  • each of the first distances is a distance between the corresponding preset edge feature point and the center feature point of the human eye.
  • For example, the remainder of dividing the first distance by the height of the first region may be obtained and compared with the corresponding distance threshold to determine the edge of the human eye weight region. The human eye weight region is composed of first regions, so its edges coincide with the edges of specific first regions.
  • In some cases, the distance threshold may be determined to be 0. That is, as shown in Figure 4(a), if the center feature point of the human eye is located on the edge line of a first region, and the preset edge feature points include the right edge feature point of the cheek and the left edge feature point of the cheek, each of which also lies on the edge line of some first region, then those two edge lines can be used as the left edge and the right edge of the human eye weight area. The upper edge and the lower edge of the human eye weight region may be determined in a similar manner, which is not described in detail here.
  • the photographing method further includes:
  • if the central feature point of the human eye is not located on the edge line of any first region, determining the first region where the central feature point of the human eye is located as the core region, and determining whether there is an external edge feature point, wherein the external edge feature points are preset edge feature points located outside the core region;
  • if there is at least one external edge feature point, obtaining, for each external edge feature point, a second distance between the external edge feature point and a target edge, where the target edge is the first-region edge closest to the external edge feature point in the direction from the external edge feature point toward the core region; and
  • if the second distance is greater than the corresponding distance threshold, determining the edge of the human eye weight region according to the core region and the first region where the external edge feature point is located; otherwise, determining the edge of the human eye weight region according to the target edge and the core region.
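  • The sketch below gives one plausible, one-dimensional reading of this rule for the left edge only; the way the target edge is located on the grid and the variable names are assumptions, and the other three edges would be handled symmetrically.

```python
# A hedged 1-D sketch of the edge rule above, for the left edge of the human
# eye weight region only; grid_w is the width of a first region in pixels.
def snap_left_edge(center_x, left_point_x, grid_w, left_threshold):
    core_left = (center_x // grid_w) * grid_w          # left edge of the core region
    if left_point_x >= core_left:
        # The preset left edge point already lies inside the core region.
        return core_left
    # Target edge: the first-region edge nearest the point, going toward the core.
    target_edge = (left_point_x // grid_w + 1) * grid_w
    second_distance = target_edge - left_point_x
    if second_distance > left_threshold:
        # The point sits deep inside the outer region: include that whole region.
        return (left_point_x // grid_w) * grid_w
    # Otherwise the target edge itself bounds the human eye weight region.
    return target_edge
```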
  • the rendering effect of the human face part in the corresponding human face image may be affected.
  • the target camera is installed on the left side of the subject such as the driver
  • the face image obtained by the target camera shooting the subject has not been mirrored
  • the face in the face image will often be close to the right side of the face image, and the facial features on the right side of the face image are also richer and often clearer;
  • conversely, if the face image obtained when the target camera captures the subject has been mirrored, the face in the face image tends to be close to the left side of the face image.
  • The distance threshold is determined according to the relative positional relationship between the target camera and the face and the height of the human eye, where the target camera is the camera that captures the face image. The distance thresholds in different directions can be adjusted according to this relative positional relationship and the human eye height, so that richer human eye information in a specific direction is preserved in the human eye weight area.
  • For example, if the height of the human eye obtained is smaller than a certain height threshold (for example, less than the height of a first region), and the target camera shoots and mirrors the face from the right side to obtain the face image, it can be determined that, among the distance thresholds, the right distance threshold is smaller and the left distance threshold is larger, so that the information of the right side of the face in the face image can be better obtained.
  • In that case, the target edge located on the right side of the left cheek edge feature point is used as the left edge of the human eye weight area;
  • and if the second distance corresponding to the right cheek edge feature point in the face image is greater than the right distance threshold,
  • the edge of the first region located on the right side of the right cheek edge feature point is used as the right edge of the human eye weight area.
  • In this way, the distance threshold is determined according to the relative positional relationship between the target camera and the face and the height of the human eye, where the target camera is the camera that captures the face image. The way the face is presented in the face image can be pre-judged from this relative positional relationship, and the distance threshold can then be adjusted according to that presentation, so that the human eye weight area is determined more accurately and the brightness information obtained for the human eye part better matches the actual situation. In addition, the approximate region where the human eye is located in the face image can be preliminarily estimated from the human eye height, which also helps determine the human eye weight region.
  • In some embodiments, the determining of the exposure parameter according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area includes:
  • determining a first brightness value of the human eye weight region according to the regional brightness values of the first regions included in the human eye weight region;
  • determining a second brightness value of the first face weight region according to the regional brightness values of the first regions included in the first face weight region; and
  • determining the exposure parameter according to the first weight, the second weight, the first brightness value, and the second brightness value.
  • The regional brightness value of a first region may include at least one of the mean brightness of the pixels in the first region, the median brightness of the pixels, the weighted brightness of the pixels, and the like.
  • The specific calculation methods of the regional brightness values of the first regions in the human eye weight region may differ. For example, for a first region located at the edge of the human eye weight region, the weight of each pixel in that region can be determined according to the positions of the human eye edge feature points it contains, and the weighted brightness of the pixels can then be calculated, so that the regional brightness value better reflects the brightness of the part of the human eye included in that first region.
  • Alternatively, the regional brightness values of the first regions in the human eye weight region can be calculated in the same way; for example, the regional brightness value can be the average brightness of the pixels in the first region.
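  • A simple sketch of these brightness computations, assuming the per-region pixel mean is used for every first region and that a weight region's brightness is the mean over its member first regions (both are assumptions consistent with the options listed above):

```python
import numpy as np

# Regional brightness of one first region: mean of its pixels.
def region_brightness(first_region):
    return float(np.mean(first_region))

# Brightness value of a weight region: mean of the regional brightness values
# of the first regions it contains.
def weight_region_brightness(first_regions, member_indices):
    """member_indices: indices of the first regions belonging to the weight region."""
    return float(np.mean([region_brightness(first_regions[i]) for i in member_indices]))
```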
  • By dividing the face image into a plurality of first regions, the human eye weight region and the first face weight region can conveniently be delimited along the first regions and their edges, and the corresponding first brightness value of the human eye weight area and second brightness value of the first face weight area can be determined.
  • The exposure parameters are then determined in combination with the dynamically determined first weight and second weight, so that the importance and the brightness of the different weight areas of the face are fully considered. This avoids the situation in which, because the human eyes occupy only a small proportion of the face, the exposure parameters during shooting are determined mainly by other features in the face image and the human eye details are captured poorly; photographing the target user according to these exposure parameters therefore improves the capture quality of human eye details.
  • An embodiment of the present application provides a photographing apparatus, and the above photographing apparatus may be integrated into a terminal device.
  • FIG. 5 shows a structural block diagram of a photographing apparatus provided by an embodiment of the present application. For convenience of description, only parts related to the embodiment of the present application are shown.
  • the photographing device 5 includes:
  • the first determination module 501 is configured to determine the eye weight area and the first face weight area in the face image according to the feature point information in the face image after acquiring the face image;
  • a second determination module 502, configured to determine a first weight of the human eye weight region and a second weight of the first face weight region according to the size of the human eye weight region and the size of the first face weight region;
  • the third determination module 503 is configured to determine exposure parameters according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area ;
  • the photographing module 504 is configured to photograph the target user according to the exposure parameter.
  • the feature point information includes human eye feature point information and first facial feature point information
  • the first determining module 501 includes:
  • a dividing unit configured to divide the facial image into a plurality of first regions after acquiring the facial image
  • a first determining unit configured to determine a first region belonging to the human eye weight region according to the human eye feature point information
  • a second determining unit configured to determine a first area belonging to the first face weight area according to the edge of the human eye weight area and the first facial feature point information.
  • the human eye feature point information includes the information of the human eye center feature point and the human eye edge feature point;
  • the first determining unit includes:
  • a first determination subunit configured to determine the height of the human eye according to the information of the human eye feature point
  • a second determination subunit configured to determine a distance threshold according to the relative positional relationship between the target camera and the face and the height of the human eye, wherein the target camera is a camera that captures the face image;
  • a third determination subunit, configured to, if the central feature point of the human eye is located on the edge line of any first region, determine the first distance between each preset edge feature point and the central feature point of the human eye, and determine the edge of the human eye weight region according to the distance threshold and each first distance, wherein the preset edge feature points include at least one human eye edge feature point.
  • the photographing device 5 further includes:
  • a fourth determination module, configured to, if the central feature point of the human eye is not located on the edge line of any first region, determine the first region where the central feature point of the human eye is located as the core region, and determine whether there is an external edge feature point, wherein the external edge feature points are preset edge feature points located outside the core region;
  • an obtaining module, configured to, if there is at least one external edge feature point, obtain, for each external edge feature point, a second distance between the external edge feature point and a target edge, wherein the target edge is the first-region edge closest to the external edge feature point in the direction from the external edge feature point toward the core region;
  • a fifth determination module configured to determine the edge of the human eye weight region according to the first region and the core region where the external edge feature points are located if the second distance is greater than the corresponding distance threshold;
  • a sixth determination module configured to determine the edge of the human eye weight region according to the target edge and the core region if the second distance is not greater than the corresponding distance threshold.
  • the third determining module 503 includes:
  • a seventh determination module configured to determine the first brightness value of the human eye weight region according to the regional brightness value of the first region included in the human eye weight region
  • an eighth determination module configured to determine a second brightness value of the first face weight region according to the regional brightness value of the first region included in the first face weight region;
  • a ninth determination module configured to determine the exposure parameter according to the first weight, the second weight, the first brightness value and the second brightness value.
  • the third determining module 503 includes:
  • a third determining unit configured to determine comprehensive brightness information according to the first weight, the second weight, the first brightness information and the second brightness information
  • the fourth determining unit is configured to determine the exposure parameter according to the comprehensive brightness information.
  • the first face weight area is located below the human eye weight area
  • the third determining unit includes:
  • a fourth determination subunit configured to determine the height of the human eye according to the human eye feature point information in the feature point information
  • a fifth determination subunit configured to use the area directly above the eye weight area and the height of which is the first height threshold as the second face weight area if the height of the human eye is greater than the second preset height, and Taking the area directly above the second face weight area and having a height of the second height threshold as the third face weight area;
  • a sixth determination subunit configured to determine the third weight of the second face weight area and the fourth weight corresponding to the third face weight area according to the first weight
  • a seventh determination subunit, configured to determine the comprehensive brightness information according to the first weight, the second weight, the first brightness information, the second brightness information, the third weight, the fourth weight, the third brightness information of the second face weight area, and the fourth brightness information of the third face weight area.
  • the seventh determination subunit is used for:
  • the comprehensive brightness information is determined according to the first weight, the second weight, the first brightness information, the second brightness information, the fifth weight and the third brightness information.
  • the second determining module 502 is specifically configured to:
  • the ratio is greater than a preset ratio, wherein the first area is the area of the human eye weight region, and the second area is the area of the first face weight region.
  • FIG. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • The terminal device 6 in this embodiment includes at least one processor 60 (only one is shown in FIG. 6), a memory 61, and a computer program 62 stored in the memory 61 and executable on the at least one processor 60;
  • when the processor 60 executes the computer program 62, the steps in any of the foregoing photographing method embodiments are implemented.
  • The terminal device 6 may be a computing device such as a vehicle-mounted device, a server, a mobile phone, a wearable device, an augmented reality (AR)/virtual reality (VR) device, a desktop computer, a notebook computer, or a handheld computer.
  • the terminal device may include, but is not limited to, a processor 60 and a memory 61 .
  • Those skilled in the art can understand that FIG. 6 is only an example of the terminal device 6 and does not constitute a limitation on the terminal device 6, which may include more or fewer components than shown, combine some components, or use different components; for example, it may also include input devices, output devices, network access devices, and the like.
  • the above-mentioned input devices may include keyboards, touchpads, fingerprint collection sensors (for collecting user's fingerprint information and fingerprint direction information), microphones, cameras, etc.
  • output devices may include displays, speakers, and the like.
  • The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the above-mentioned memory 61 may be an internal storage unit of the above-mentioned terminal device 6 in some embodiments, such as a hard disk or a memory of the terminal device 6 .
  • In other embodiments, the memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 6.
  • the above-mentioned memory 61 may also include both the internal storage unit of the above-mentioned terminal device 6 and an external storage device.
  • the above-mentioned memory 61 is used to store an operating system, an application program, a boot loader (Boot Loader), data, and other programs, for example, program codes of the above-mentioned computer programs.
  • the above-mentioned memory 61 can also be used to temporarily store data that has been output or is to be output.
  • the above-mentioned terminal device 6 may also include a network connection module, such as a Bluetooth module, a Wi-Fi module, a cellular network module, etc., which will not be repeated here.
  • When the processor 60 executes the computer program 62 to implement the steps in any of the above photographing method embodiments, after the face image is acquired, the human eye weight region and the first face weight region are determined in the face image according to the feature point information in the face image; that is, the face image is partitioned according to the feature point information.
  • The first weight of the human eye weight area and the second weight of the first face weight area are then determined according to the size of the human eye weight area and the size of the first face weight area, and the exposure parameter is determined according to the first weight, the second weight, the first brightness information of the human eye weight area, and the second brightness information of the first face weight area. Because the first weight and the second weight reflect the importance of the corresponding weight areas in the face image, both the importance and the brightness of the different weight areas are taken into account when determining the exposure parameter.
  • This avoids the situation in which, because the human eyes occupy only a small proportion of the face, the exposure parameters during shooting are determined mainly by other features in the face image and the human eye details are captured poorly; photographing the target user according to the exposure parameters therefore improves the capture quality of human eye details.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • The embodiments of the present application further provide a computer program product which, when run on a terminal device, causes the terminal device to implement the steps in the foregoing method embodiments.
  • If the above-mentioned integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application can be implemented by instructing relevant hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the above-mentioned computer program includes computer program code, and the above-mentioned computer program code may be in the form of source code, object code form, executable file or some intermediate form.
  • The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • computer readable media may not be electrical carrier signals and telecommunications signals.
  • the disclosed apparatus/device and method may be implemented in other manners.
  • the apparatus/equipment embodiments described above are only illustrative.
  • the division of the above modules or units is only a logical function division.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, or as indirect coupling or communication connection between devices or units, and may be in electrical, mechanical, or other forms.
  • the above-mentioned units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Abstract

The present invention relates to a photographing method, comprising: after acquiring a face image, determining a human eye weight region and a first face weight region in the face image according to feature point information in the face image; determining a first weight of the human eye weight region and a second weight of the first face weight region according to the size of the human eye weight region and the size of the first face weight region; determining an exposure parameter according to the first weight, the second weight, first brightness information of the human eye weight region, and second brightness information of the first face weight region; and photographing a target user according to the exposure parameter. By means of the method, the quality with which human eye details are captured when photographing a face can be improved.
PCT/CN2020/107396 2020-08-06 2020-08-06 Procédé de prise de photographie, appareil photographique et dispositif de terminal WO2022027432A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080001474.1A CN112055961B (zh) 2020-08-06 2020-08-06 拍摄方法、拍摄装置及终端设备
PCT/CN2020/107396 WO2022027432A1 (fr) 2020-08-06 2020-08-06 Procédé de prise de photographie, appareil photographique et dispositif de terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/107396 WO2022027432A1 (fr) 2020-08-06 2020-08-06 Procédé de prise de photographie, appareil photographique et dispositif de terminal

Publications (1)

Publication Number Publication Date
WO2022027432A1 true WO2022027432A1 (fr) 2022-02-10

Family

ID=73606031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/107396 WO2022027432A1 (fr) 2020-08-06 2020-08-06 Procédé de prise de photographie, appareil photographique et dispositif de terminal

Country Status (2)

Country Link
CN (1) CN112055961B (fr)
WO (1) WO2022027432A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375987A (zh) * 2022-08-05 2022-11-22 北京百度网讯科技有限公司 一种数据标注方法、装置、电子设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113507569A (zh) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 车载摄像头的控制方法及装置、设备和介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011109427A (ja) * 2009-11-18 2011-06-02 Fujifilm Corp 複眼式撮像装置
CN104869319A (zh) * 2014-02-20 2015-08-26 华硕电脑股份有限公司 影像处理方法及影像处理装置
CN105096267A (zh) * 2015-07-02 2015-11-25 广东欧珀移动通信有限公司 一种基于拍照识别调节眼部亮度的方法和装置
WO2018119590A1 (fr) * 2016-12-26 2018-07-05 深圳市道通智能航空技术有限公司 Procédé et dispositif de mesurage de lumière, procédé et dispositif d'exposition, et véhicule aérien sans pilote
CN109639981A (zh) * 2018-12-29 2019-04-16 维沃移动通信有限公司 一种图像拍摄方法及移动终端
CN110099222A (zh) * 2019-05-17 2019-08-06 睿魔智能科技(深圳)有限公司 一种拍摄设备的曝光调整方法、装置、存储介质及设备
WO2019163576A1 (fr) * 2018-02-26 2019-08-29 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100568926C (zh) * 2006-04-30 2009-12-09 华为技术有限公司 自动曝光控制参数的获得方法及控制方法和成像装置
JP2009116742A (ja) * 2007-11-08 2009-05-28 Aisin Seiki Co Ltd 車載用画像処理装置、画像処理方法、および、プログラム
JP5127686B2 (ja) * 2008-12-11 2013-01-23 キヤノン株式会社 画像処理装置および画像処理方法、ならびに、撮像装置
CN109543523A (zh) * 2018-10-18 2019-03-29 安克创新科技股份有限公司 图像处理方法、装置、系统和存储介质
CN109918993B (zh) * 2019-01-09 2021-07-02 杭州中威电子股份有限公司 一种基于人脸区域曝光的控制方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011109427A (ja) * 2009-11-18 2011-06-02 Fujifilm Corp 複眼式撮像装置
CN104869319A (zh) * 2014-02-20 2015-08-26 华硕电脑股份有限公司 影像处理方法及影像处理装置
CN105096267A (zh) * 2015-07-02 2015-11-25 广东欧珀移动通信有限公司 一种基于拍照识别调节眼部亮度的方法和装置
WO2018119590A1 (fr) * 2016-12-26 2018-07-05 深圳市道通智能航空技术有限公司 Procédé et dispositif de mesurage de lumière, procédé et dispositif d'exposition, et véhicule aérien sans pilote
WO2019163576A1 (fr) * 2018-02-26 2019-08-29 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
CN109639981A (zh) * 2018-12-29 2019-04-16 维沃移动通信有限公司 一种图像拍摄方法及移动终端
CN110099222A (zh) * 2019-05-17 2019-08-06 睿魔智能科技(深圳)有限公司 一种拍摄设备的曝光调整方法、装置、存储介质及设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375987A (zh) * 2022-08-05 2022-11-22 北京百度网讯科技有限公司 一种数据标注方法、装置、电子设备及存储介质
CN115375987B (zh) * 2022-08-05 2023-09-05 北京百度网讯科技有限公司 一种数据标注方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN112055961B (zh) 2021-09-17
CN112055961A (zh) 2020-12-08

Similar Documents

Publication Publication Date Title
KR101916355B1 (ko) 듀얼-렌즈 장치의 촬영 방법, 및 듀얼-렌즈 장치
CN108305236B (zh) 图像增强处理方法及装置
CN108322646B (zh) 图像处理方法、装置、存储介质及电子设备
US9959601B2 (en) Distortion rectification method and terminal
CN111147749A (zh) 拍摄方法、拍摄装置、终端及存储介质
WO2021083059A1 (fr) Procédé de reconstruction d'image à super-résolution, appareil de reconstruction d'image à super-résolution et dispositif électronique
CN113538273B (zh) 图像处理方法及图像处理装置
US20210065340A1 (en) System and method for video processing with enhanced temporal consistency
CN110599410B (zh) 图像处理的方法、装置、终端及存储介质
CN109937434B (zh) 图像处理方法、装置、终端和存储介质
CN110648296B (zh) 一种瞳孔颜色修正方法、修正装置、终端设备及存储介质
WO2022027432A1 (fr) Procédé de prise de photographie, appareil photographique et dispositif de terminal
CN112333385B (zh) 电子防抖控制方法及装置
CN108600644B (zh) 一种拍照方法、装置及可穿戴设备
CN112532891A (zh) 拍照方法及装置
CN113487500B (zh) 图像畸变校正方法与装置、电子设备和存储介质
WO2022199395A1 (fr) Procédé de détection d'activité faciale, dispositif terminal et support de stockage lisible par ordinateur
CN109726613B (zh) 一种用于检测的方法和装置
CN116055895B (zh) 图像处理方法及其装置、芯片系统和存储介质
CN111416936B (zh) 图像处理方法、装置、电子设备及存储介质
CN115908120B (zh) 图像处理方法和电子设备
CN107105167B (zh) 一种扫题时拍摄照片的方法、装置及终端设备
CN113486714B (zh) 一种图像的处理方法及电子设备
CN112418189B (zh) 戴口罩人脸识别方法、装置、设备以及存储介质
CN114040048A (zh) 一种隐私保护方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20948173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20948173

Country of ref document: EP

Kind code of ref document: A1