WO2022021093A1 - Photographing method, photographing apparatus, and storage medium - Google Patents

Photographing method, photographing apparatus, and storage medium

Info

Publication number
WO2022021093A1
WO2022021093A1 (PCT/CN2020/105290; CN2020105290W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
frame image
target person
current frame
posture
Prior art date
Application number
PCT/CN2020/105290
Other languages
English (en)
Chinese (zh)
Inventor
程正喜
封旭阳
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/105290 priority Critical patent/WO2022021093A1/fr
Priority to CN202080005968.7A priority patent/CN113056907A/zh
Publication of WO2022021093A1 publication Critical patent/WO2022021093A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Definitions

  • the present application relates to the field of photographing technologies, and in particular, to a photographing method, a photographing device, and a storage medium.
  • Autofocus refers to using the photoelectric sensor of the camera device to receive light reflected by the object; based on the calculation and processing performed by the internal chip of the camera device, the focusing device is finally driven into focus. In essence, the focusing (AF) device (motor) moves according to the depth information of the target to achieve better focus.
  • Existing automatic focusing methods include: (1) The autofocus method based on the change in size of the face bounding box, which directly estimates the change of the target's depth information from the change in size of the detected face box, so the estimated depth information is inaccurate: for faces in different poses at the same distance from the shooting device, the size of the detected face box varies greatly even though the depth information is essentially unchanged.
  • (2) The phase detection autofocus (PDAF) method is complicated to implement, requires additional devices or a redesign of the sensor pixels, and places relatively high requirements on lighting and on the positional accuracy of the sensor.
  • (3) The contrast detection autofocus (CDAF) method requires many calculations to complete focusing, so the focusing speed is slow, and the focusing effect is poor when the contrast is not obvious.
  • (4) The autofocus method based on monocular depth estimation suffers from scale ambiguity, that is, two face images of different sizes at the same distance from the shooting device will be estimated as two targets with different depth information, so it is likewise inaccurate and ineffective for faces in different poses.
  • the present application provides a photographing method, a photographing device, and a storage medium.
  • the present application provides a shooting method, including:
  • obtaining the current posture of at least a partial part of a target person in a current frame image; determining, according to the current posture of the at least partial part of the target person in the current frame image, the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image; controlling a focusing device to focus according to the change between the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image and the size information of the corresponding standard posture in the previous frame image; and shooting according to the focusing result.
  • the present application provides a photographing device, the device comprising:
  • the memory is used to store a computer program; the processor is used to execute the computer program and when executing the computer program, implement the following steps:
  • obtaining the current posture of at least a partial part of a target person in a current frame image; determining, according to the current posture of the at least partial part of the target person in the current frame image, the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image; controlling a focusing device to focus according to the change between the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image and the size information of the corresponding standard posture in the previous frame image; and shooting according to the focusing result.
  • the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the processor implements the above-mentioned shooting method.
  • the embodiments of the present application provide a shooting method, a shooting device, and a storage medium, which obtain the current posture of at least a partial part of a target person in a current frame image; determine, according to that current posture, the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image; control the focusing device to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image; and shoot according to the focusing result.
  • In the embodiments of the present application, the size information of the corresponding standard posture is determined according to the current posture of at least a partial part of the target person in the current frame image, and the focusing device is controlled to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image. In this way, the change of the depth information of the at least partial part of the target person between the current frame image and the previous frame image can be estimated more accurately and with higher reliability, and the movement of the focusing device can be controlled according to the direction of the accurately estimated change in depth information. The focusing speed is therefore fast, and the device can quickly focus to an estimated focus point without searching back and forth over a large range for the optimal focus point; moreover, the method of the embodiments of the present application does not require additional devices, so the cost is relatively low.
  • FIG. 1 is a schematic flowchart of an embodiment of a shooting method of the present application
  • FIG. 2 is a schematic diagram of an embodiment of a representation manner of face gesture information in the photographing method of the present application
  • FIG. 3 is a schematic diagram of the principle of controlling the focusing device according to the change of the size of the standard frontal face image in the shooting method of the present application;
  • FIG. 5 is a schematic flowchart of another embodiment of the photographing method of the present application.
  • FIG. 6 is a schematic flowchart of another embodiment of the photographing method of the present application.
  • FIG. 7 is a schematic structural diagram of an embodiment of a photographing device of the present application.
  • As noted above, the autofocus method based on the change in size of the face bounding box yields inaccurate depth estimates; the phase detection focusing method is complicated to implement, requiring additional devices or a redesign of the sensor pixels; the contrast focusing method requires many calculations to complete focusing, so focusing is slow and the effect is poor when the contrast is not obvious; and the autofocus method based on monocular depth estimation is likewise inaccurate and ineffective for faces in different poses.
  • the embodiments of the present application provide a shooting method, a shooting device, and a storage medium, which obtain the current posture of at least a partial part of a target person in a current frame image; determine, according to that current posture, the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image; control the focusing device to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image; and shoot according to the focusing result.
  • The embodiments of the present application determine the size information of the corresponding standard posture according to the current posture of at least a partial part of the target person in the current frame image, and control the focusing device to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image. In this way, the current pose of the at least partial part of a target person who may hold different postures at the same position is converted into the standard pose in the current frame image, so the change of the depth information of the at least partial part of the target person between the current frame image and the previous frame image can be estimated more accurately and with higher reliability. The movement of the focusing device can then be controlled according to the direction of the accurately estimated change in depth information, so focusing is fast and can quickly converge to an estimated focus point without searching a large range for the best focus point; moreover, the method of the embodiments of the present application does not require additional devices, so the cost is relatively low.
  • FIG. 1 is a schematic flowchart of an embodiment of a shooting method of the present application. The method includes:
  • Step S101 Obtain the current posture of at least a partial part of the target person in the current frame image.
  • Regarding the target person: if there are multiple persons in the frame image, in order to reduce processing complexity it is usually necessary to determine the target person among them; if there is only one person in the frame image, that person is the target person.
  • At least partial parts of the target person include, but are not limited to, the face, the shoulders, the forehead on the face, the eyebrows on the face, the entire human body, and the like.
  • The current posture can be the posture of the at least partial part of the target person when the current frame image is captured, such as: frontal face posture, left face posture, right face posture, raised face posture, bowed face posture, frontal eyebrow posture, side eyebrow posture, frontal forehead posture, side forehead posture, front shoulder posture, side shoulder posture (left or right shoulder), shrugging posture, uneven shoulder posture, upright body posture, turned body posture, squatting body posture, bent body posture, and the like.
  • After the at least partial part of the target person is determined, the image of that part in the current frame image is determined, and the angle information of its orientation is estimated from the information of that image; that is, the current pose of the at least partial part of the target person is estimated.
  • Poses can generally be represented by rotation matrices, rotation vectors, quaternions, or Euler angles (these four representations can be converted into one another). Euler angles are more readable and more widely used.
  • The three commonly used Euler angles are pitch, yaw, and roll: pitch is the pitch angle, i.e. rotation of the object around the x-axis; yaw is the yaw angle, i.e. rotation around the y-axis; and roll is the roll angle, i.e. rotation around the z-axis. Taking the case where the at least partial part of the target person includes the face of the target person as an example, and referring to FIG. 2, the face posture information obtained in the embodiment of the present application is represented by the three Euler angles pitch, yaw, and roll.
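  • As an illustration of the pitch/yaw/roll representation mentioned above, the sketch below converts a 3x3 head rotation matrix into these three Euler angles; the ZYX decomposition order and the axis convention are assumptions, since the document does not fix one.

```python
import numpy as np

def rotation_matrix_to_euler(R):
    """Convert a 3x3 rotation matrix to (pitch, yaw, roll) in degrees.

    Convention (an assumption, not specified here): R = Rz(roll) @ Ry(yaw) @ Rx(pitch),
    with pitch about the x-axis, yaw about the y-axis, and roll about the z-axis.
    """
    # For R = Rz @ Ry @ Rx:
    #   R[2,0] = -sin(yaw)
    #   R[2,1] = cos(yaw)*sin(pitch),  R[2,2] = cos(yaw)*cos(pitch)
    #   R[1,0] = cos(yaw)*sin(roll),   R[0,0] = cos(yaw)*cos(roll)
    yaw = np.arcsin(-R[2, 0])
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([pitch, yaw, roll])
```
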
  • Step S102 According to the current posture of the at least partial part of the target person in the current frame image, determine the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image.
  • The standard posture can be a unified reference posture used for comparison; for example, it can be the posture of the at least partial part of the target person when facing the front, or the posture of the at least partial part at a fixed side angle (left side or right side), and so on. Since the frontal posture of the at least partial part of the target person is relatively easy to obtain and control, it is usually used as the standard posture, for example the frontal face posture, the forehead or eyebrow posture when facing the front, the shoulder posture when the body is upright, the upright body posture, and so on.
  • The size information of the standard pose corresponding to the at least partial part of the target person in the current frame image may be the size information of the image of that standard pose, where the size information may be dimensional information, area information, and so on; for example, it may be height information, width information, or both height and width information. Using height and/or width information can simplify the implementation.
  • In an actual scene, if the target person stays at the same position but the at least partial part takes different poses, the size information of the corresponding part images on the frame image will differ. Consequently, if the depth information of the at least partial part is estimated directly from the size information of its images in different poses on the frame image, several different depth values will be obtained, making the depth estimation of the at least partial part of the target person inaccurate.
  • Therefore, the image of the current posture of the at least partial part of the target person in the current frame image is converted into the image of the corresponding standard posture. For the same person at the same position, the size information of the image corresponding to the standard pose on the frame image is then the same, so the depth information of the at least partial part of the target person in the current frame image can be estimated more accurately and with higher reliability.
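  • As a loose illustration of this conversion idea (not the specific method described here; a keypoint-based approach is given later), the following sketch scales a detected face box toward a frontal-equivalent size using the yaw and pitch angles; the cosine-based correction and the clamping constant are assumptions.

```python
import numpy as np

def standard_pose_size(box_w, box_h, yaw_deg, pitch_deg):
    """Rough frontal-equivalent (standard-pose) size of a detected face box.

    Assumption (illustrative only): the projected width shrinks roughly with
    cos(yaw) and the projected height with cos(pitch). The embodiment described
    later instead maps facial key points to the frontal pose and takes their
    minimum bounding rectangle.
    """
    cy = max(np.cos(np.radians(yaw_deg)), 0.2)    # clamp to avoid blow-up near profile views
    cp = max(np.cos(np.radians(pitch_deg)), 0.2)
    return box_w / cy, box_h / cp
```
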
  • Step S103 Control the focusing device to focus according to the change between the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image and the size information of the corresponding standard posture in the previous frame image.
  • The previous frame image may be an image before the current frame image; for example, it may be the immediately preceding frame, an image several frames earlier (for example, two or five frames earlier), or the first frame image. Using the immediately preceding frame as the previous frame image makes the focusing process more continuous and the focusing speed faster.
  • According to the change between the size information of the standard-pose image corresponding to the at least partial part of the target person in the current frame image and the size information of the standard-pose image corresponding to the at least partial part of the target person in the previous frame image, the change of the depth information of the at least partial part of the target person between the two images, including the direction of that change, can be estimated more accurately and reliably. The focusing device is then controlled to move according to the direction of the change in depth information, so the focusing speed is fast and the device can quickly focus to an estimated focus point without searching a large range for the best focus point.
  • FIG. 3 is a schematic diagram of the principle of controlling the focusing device according to the change of the size information of the frontal-posture image in the shooting method of the present application.
  • As shown in FIG. 3, the position of the target person (that is, the real face) at time A is O1 and the object distance is OO1; the focusing device is controlled to move toward the focal length FA at which a clear image can be formed, and the face of the target person is imaged on the image plane through the lens at time A. The image of the current posture of the face on the image plane at time A is converted into the image of the corresponding frontal posture, denoted A'A'.
  • The target person then moves from position O1 in the direction away from the lens; the position at time B is O2 and the object distance is OO2. The focusing device is controlled to move toward the focal length FB at which a clear image can be formed, and the image of the current pose on the image plane at time B is converted into the image of the corresponding frontal pose, denoted B'B'.
  • From time A to time B the target person moves away from the camera and the real object distance increases, that is, OO1 increases to OO2; accordingly, the focusing device should be controlled to move from the focal length FA at time A toward the focal length FB at time B, in the direction in which the focal length increases (moving to the right in FIG. 3).
  • For example, suppose the size information (for example, area information) of the frontal-posture image corresponding to the face of the target person in the current frame image is M1, the size information of the frontal-posture image corresponding to the face in the previous frame image is M2, and M1 is greater than M2; that is, the frontal-posture image has become larger, indicating that the target person is closer to the shooting device, the object distance has become shorter, and the focal length should become smaller. The focusing device therefore needs to move, from the focal length position corresponding to the previous frame image, in the direction of reducing the focal length or increasing the image distance (moving to the left). In this way the moving direction of the focusing device can be determined quickly and its movement controlled so as to focus rapidly to an estimated focus point.
  • As another example, suppose the at least partial part of the target person includes the entire body of the target person, the current posture includes a turned posture, and the standard posture includes an upright posture. The turned posture of the entire body of the target person in the current frame image is obtained, and according to it the size information of the upright-posture image corresponding to the entire body in the current frame image is determined, for example height information H1. If the height information of the upright-posture image corresponding to the entire body in the previous frame image is H2 and H1 is less than H2, the upright-posture image has become smaller, indicating that the target person is farther from the shooting device, the object distance has become longer, and the focal length should become larger. The focusing device therefore needs to move, from the focal length position corresponding to the previous frame image, in the direction of increasing the focal length or decreasing the image distance (moving to the right), so that its moving direction can be determined quickly and it can focus rapidly to an estimated focus point.
  • As a further example, suppose the at least partial part of the target person includes the entire body, the current posture includes a squatting posture, and the standard posture includes an upright posture. The squatting posture of the entire body of the target person in the current frame image is obtained, and according to it the size information of the corresponding upright-posture image (such as height information and width information) HL1 is determined. If the size information of the upright-posture image corresponding to the entire body in the previous frame image is HL2 and HL1 is greater than HL2, the upright-posture image has become larger, indicating that the target person is closer to the shooting device, the object distance has become shorter, and the focal length should become smaller. The focusing device therefore needs to move, from the focal length position corresponding to the previous frame image, in the direction of reducing the focal length or increasing the image distance (moving to the left), so that its moving direction can be determined quickly and it can focus rapidly to an estimated focus point.
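  • The direction decision used in the examples above can be summarized in a small sketch; the tolerance eps is an assumed parameter for ignoring insignificant size changes.

```python
def focus_move_direction(size_now, size_prev, eps=1e-3):
    """Decide which way to drive the focusing device from the change in
    standard-pose size, following the examples above.

    A larger standard-pose image means the subject moved closer (object
    distance shorter), so move toward a shorter focal length / longer image
    distance ("left"); a smaller image means the subject moved away ("right").
    """
    ratio = size_now / size_prev
    if ratio > 1.0 + eps:
        return "closer: reduce focal length / increase image distance (move left)"
    if ratio < 1.0 - eps:
        return "farther: increase focal length / decrease image distance (move right)"
    return "no significant change: hold focus position"
```
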
  • Step S104 Shooting according to the focusing result.
  • This embodiment of the present application acquires the current posture of at least a partial part of the target person in the current frame image; determines, according to that current posture, the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image; controls the focusing device to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image; and shoots according to the focusing result.
  • The embodiments of the present application determine the size information of the corresponding standard posture according to the current posture of at least a partial part of the target person in the current frame image, and control the focusing device to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image. In this way, the current pose of the at least partial part of a target person who may hold different postures at the same position is converted into the standard pose in the current frame image, so the change of the depth information of the at least partial part of the target person between the current frame image and the previous frame image can be estimated more accurately and with higher reliability. The movement of the focusing device can then be controlled according to the direction of the accurately estimated change in depth information, so focusing is fast and can quickly converge to an estimated focus point without searching a large range for the best focus point; moreover, the method of the embodiments of the present application does not require additional devices, so the cost is relatively low.
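  • For orientation only, the per-frame loop of steps S101-S104 can be sketched as follows; estimate_pose, standard_size, and drive_focus are assumed caller-supplied callables standing in for the pose estimation, size normalization, and focus control discussed in this document, not an implementation defined by it.

```python
def autofocus_step(curr_frame, prev_standard_size, estimate_pose, standard_size, drive_focus):
    """One iteration of the loop in steps S101-S104 (illustrative sketch).

    Assumed interfaces:
      estimate_pose(frame)          -> current pose of the tracked part
      standard_size(frame, pose)    -> size of the corresponding standard-pose image
      drive_focus(size, prev_size)  -> moves the focusing device and returns a focus result
    Returns (focus_result, current_standard_size) so the size can serve as the
    reference for the next frame.
    """
    pose = estimate_pose(curr_frame)                       # S101: current posture
    size = standard_size(curr_frame, pose)                 # S102: standard-pose size
    focus_result = drive_focus(size, prev_standard_size)   # S103: control focusing device
    # S104: shooting according to the focus result is left to the camera pipeline
    return focus_result, size
```
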
  • Since technology for recognizing human faces and capturing face poses is relatively widely used, the following description takes the case in which the at least partial part of the target person includes the face of the target person and the standard posture includes the frontal face posture.
  • In one embodiment, before step S101, i.e. before acquiring the current posture of the face of the target person in the current frame image, the method may include: acquiring the face image of the face of the target person in the current frame image.
  • Acquiring the face image of the face of the target person in the current frame image may further include: acquiring the in-focus face image of the focus area of the current frame image, and using the in-focus face image as the face image of the face of the target person in the current frame image.
  • In principle, any one of multiple face images could be used as the face image of the target person. However, since the in-focus face image is the image of the target on which the photographing device has focused, it is more clearly distinguishable than the other face images, which makes it easier to determine the size information of the image corresponding to the face pose and provides technical support for obtaining more accurate changes in depth information.
  • If the photographing device has a face detection function, acquiring the face image of the face of the target person in the current frame image may include:
  • A1 Obtain multiple face output areas of the current frame image through the face detection algorithm.
  • A2 Match the focus area of the current frame image against the multiple face output areas to obtain the face output area that matches the focus area.
  • A3 Take the face image in the face output area that matches the focus area as the face image of the target person's face in the current frame image.
  • the face detection algorithm can detect the position of the face image on the image, which is usually output in the form of a rectangular frame.
  • Face detection algorithms include, but are not limited to: Dual Shot Face Detector (DSFD), Convolutional Neural Network (CNN), Multi-task Cascaded Convolutional Neural Network (MTCNN), Compact Cascade CNN, and so on.
  • Because the face detection algorithm can detect the positions of face images on the image, multiple face output areas can be obtained quickly; after matching them against the focus area, the face output area that matches the focus area is obtained, and the face image in that area can be quickly determined as the face image of the target person's face in the current frame image.
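  • One plausible way to perform the matching in steps A1-A3 above is by box overlap (IoU); the (x, y, w, h) box format and the min_iou threshold are assumptions, since the document only states that the focus area is matched against the face output areas.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2, bx2, by2 = ax1 + aw, ay1 + ah, bx1 + bw, by1 + bh
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def match_target_face(focus_area, face_boxes, min_iou=0.1):
    """Pick the face output area that best overlaps the focus area (steps A1-A3)."""
    best = max(face_boxes, key=lambda b: iou(focus_area, b), default=None)
    return best if best is not None and iou(focus_area, best) >= min_iou else None
```
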
  • the acquiring the current posture of the face of the target person in the current frame image may include: estimating the current posture of the face of the target person in the current frame image through a face posture estimation algorithm.
  • Face pose estimation algorithms include, but are not limited to: face pose estimation based on key-point constraints, face pose estimation based on three-point perspective, 3D face pose estimation based on face feature points and linear regression, and so on.
  • In one embodiment, a face pose estimation algorithm based on face key points is adopted; that is, step S101, estimating the current pose of the face of the target person in the current frame image through a face pose estimation algorithm, may include: estimating the current pose of the face of the target person in the current frame image through a face pose estimation algorithm based on face key points.
  • In another embodiment, a face pose estimation algorithm based on deep learning is adopted; that is, step S101, estimating the current pose of the face of the target person in the current frame image through a face pose estimation algorithm, may include: estimating the current pose of the face of the target person in the current frame image through a deep-learning-based face pose estimation algorithm.
  • Face pose estimation based on face key points and face pose estimation based on deep learning are two commonly used and mature face pose estimation algorithms, and their implementation requirements are relatively moderate.
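  • As a sketch of key-point-based face pose estimation (one common approach, not necessarily the specific algorithm intended here), the snippet below solves a PnP problem with OpenCV; the generic 3D landmark coordinates and the pinhole intrinsics are assumptions.

```python
import cv2
import numpy as np

# Rough 3D positions (mm) of a few facial landmarks in a generic head model.
# These values are illustrative assumptions, not taken from this document.
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),     # nose tip
    (0.0,  -63.6,  -12.5),     # chin
    (-43.3,  32.7,  -26.0),    # left eye outer corner
    (43.3,   32.7,  -26.0),    # right eye outer corner
    (-28.9, -28.9,  -24.1),    # left mouth corner
    (28.9,  -28.9,  -24.1),    # right mouth corner
], dtype=np.float64)

def face_pose_from_keypoints(image_points, frame_w, frame_h):
    """Estimate the face pose (rotation) from 2D facial key points via PnP.

    image_points: 6x2 pixel coordinates in the same order as MODEL_POINTS.
    The pinhole intrinsics below are a common approximation (focal length equal
    to the frame width, principal point at the image center).
    """
    cam = np.array([[frame_w, 0, frame_w / 2],
                    [0, frame_w, frame_h / 2],
                    [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, np.asarray(image_points, np.float64), cam, dist)
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # can be converted to pitch/yaw/roll as above
    return ok, rotation_matrix, tvec
```
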
  • In one approach, the face posture can be classified to determine its face posture category, and each face posture category has a corresponding database image: the category to which the obtained face pose belongs is determined, the corresponding database image is selected according to that category, and the face pose image is synthesized from the corresponding database image, thereby obtaining the size information of the face pose image. Alternatively, the size information of the face pose image can be obtained through face pose correction techniques, and so on.
  • In this embodiment, the size information of the image of the face pose is obtained in a relatively simple and convenient manner. That is, step S102, determining the size information of the frontal posture corresponding to the face of the target person in the current frame image according to the current posture of the target person's face in the current frame image, may include sub-step S1021, sub-step S1022, and sub-step S1023, as shown in FIG. 4.
  • Sub-step S1021 Detect the key points of the face image of the target person's face in the current frame image.
  • Sub-step S1022 According to the current posture of the face of the target person in the current frame image, map the key points of the face image of the target person's face in the current frame image to the key points in the frontal posture.
  • Sub-step S1023 Take the minimum circumscribed rectangle including all the mapped key points as the size information of the frontal pose corresponding to the face of the target person in the current frame image.
  • The minimum bounding rectangle refers to the rectangle that bounds the maximum extent of one or more two-dimensional shapes (such as points, lines, or polygons) given in two-dimensional coordinates, that is, the rectangle bounded by the maximum abscissa, the minimum abscissa, the maximum ordinate, and the minimum ordinate among the vertices of the given shape. The minimum bounding rectangle is the two-dimensional form of the minimum bounding box.
  • The minimum bounding rectangle covers the full extent of all the mapped key points, so the size information of the frontal-pose image obtained in this way is relatively accurate.
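  • Sub-steps S1021-S1023 can be sketched roughly as follows; treating the key points as roughly planar and "mapping to the frontal pose" as applying the inverse head rotation are simplifying assumptions, not the exact mapping used in the embodiment.

```python
import numpy as np

def frontal_pose_size(keypoints_2d, rotation_matrix, depth_guess=1.0):
    """Map detected 2D face key points to the frontal pose and take the
    minimum bounding rectangle (rough sketch of S1021-S1023).

    Assumptions: key points lie on a plane at a common depth `depth_guess`,
    centred on their mean; the mapping is the inverse of the estimated head
    rotation followed by dropping the depth coordinate.
    """
    pts = np.asarray(keypoints_2d, dtype=np.float64)
    centred = pts - pts.mean(axis=0)
    pts_3d = np.hstack([centred, np.full((len(pts), 1), depth_guess)])
    frontal = (np.linalg.inv(rotation_matrix) @ pts_3d.T).T[:, :2]  # undo the head rotation
    w = frontal[:, 0].max() - frontal[:, 0].min()   # minimum bounding rectangle width
    h = frontal[:, 1].max() - frontal[:, 1].min()   # minimum bounding rectangle height
    return w, h
```
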
  • The specific content of step S103 is described in detail below.
  • In one embodiment, step S103, controlling the focusing device to focus according to the change between the size information of the frontal posture corresponding to the face of the target person in the current frame image and the size information of the frontal posture corresponding to the previous frame image, may include sub-step S103A1 and sub-step S103A2, as shown in FIG. 5.
  • Sub-step S103A1 According to the change between the size information of the frontal posture corresponding to the face of the target person in the current frame image and the size information of the frontal posture corresponding to the previous frame image, determine the change of the depth information corresponding to the face of the target person between the current frame image and the previous frame image.
  • Sub-step S103A2 Control the focusing device to focus according to the change of the depth information.
  • That is, the change of the depth information corresponding to the face of the target person between the current frame image and the previous frame image is determined from the change of the size information of the face-posture images corresponding to the face of the target person in the current frame image and in the previous frame image; the change of the focal length is then determined according to the change of the depth information, and the focusing device is controlled to focus according to the change of the focal length.
  • In one embodiment, sub-step S103A2, controlling the focusing device to focus according to the change of the depth information, may further include: estimating the change trend and change amount of the focusing device according to the change of the depth information; controlling the focusing device to reach a preliminary position according to the estimated change trend and change amount; and further controlling the focusing device that has reached the preliminary position to focus by means of an automatic focusing method.
  • The change trend and change amount of the focusing device may be the trend of its movement direction and the amount of movement along that trend. Because the change trend and change amount are only estimated, controlling the focusing device according to them is a relatively fast, coarse adjustment that quickly brings the focusing device to the preliminary position; on that basis, the automatic focusing method then performs a finer, small-range adjustment of the focusing device, which can achieve a better focusing effect.
  • the automatic focusing method includes a contrast focusing method.
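  • A minimal sketch of this coarse-then-fine behaviour; the lens interface (move_by, position), the gain converting the depth change into motor steps, and the fine_search callback (e.g. a small-range contrast sweep) are all assumed for illustration.

```python
def coarse_then_fine_focus(lens, depth_change, gain=1.0, fine_search=None):
    """Fast coarse move followed by fine auto focus (sketch of sub-step S103A2)."""
    # Coarse stage: direction from the sign of the depth change, amount from its size.
    steps = int(round(gain * depth_change))
    lens.move_by(steps)                 # quickly reach the preliminary position
    # Fine stage: refine around the preliminary position with ordinary auto focus.
    if fine_search is not None:
        fine_search(lens)               # e.g. small-range contrast-detection sweep
    return lens.position()
```
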
  • In another embodiment, the previous frame image may be the first frame image. The first frame image may be focused by an ordinary automatic focusing method, so the size information of the frontal-posture image of the face of the target person corresponding to the first frame image and the optimal imaging focal length of the first frame image can be determined, and all subsequent frame images can be compared with the first frame image. That is, step S103, controlling the focusing device to focus according to the change between the size information of the frontal posture corresponding to the face of the target person in the current frame image and the size information of the frontal posture corresponding to the previous frame image, may include sub-step S103B1 and sub-step S103B2, as shown in FIG. 6.
  • Sub-step S103B1 Determine the optimal imaging focal length of the current frame image according to the size information of the frontal posture corresponding to the face of the target person in the current frame image, the size information of the frontal posture corresponding to the first frame image, and the optimal imaging focal length of the first frame image.
  • Sub-step S103B2 Control the focusing device to focus according to the optimal imaging focal length of the current frame image.
  • That is, the size information of the frontal-pose image corresponding to the face of the target person in the first frame image and the optimal imaging focal length of the first frame image are used as reference values; then, from the size information of the frontal-pose image corresponding to the face of the target person in each subsequent frame image and these reference values, the actual optimal imaging focal length of each frame image can be calculated, and the focusing device can be controlled to focus according to the optimal imaging focal length of each frame image.
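  • One simple way to realize sub-step S103B1, under an assumed inverse-proportional model between standard-pose size and optimal imaging focal length (consistent with the direction described above, but not a formula given in this document):

```python
def optimal_focal_length(size_now, size_first, focal_first):
    """Scale the first frame's optimal imaging focal length by the size change.

    Larger standard-pose image -> subject closer -> smaller focal length;
    the inverse-proportional relation is an illustrative assumption.
    """
    return focal_first * (size_first / size_now)
```
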
  • Before sub-step S103B1, that is, before determining the optimal imaging focal length of the current frame image according to the size information of the frontal posture corresponding to the face of the target person in the current frame image, the size information of the frontal posture corresponding to the first frame image, and the optimal imaging focal length of the first frame image, the method may further include: obtaining, by means of automatic focusing, the size information of the frontal posture of the face of the target person corresponding to the first frame image and the optimal imaging focal length of the first frame image.
  • the automatic focusing method includes a contrast focusing method.
  • That is, the first frame image can be precisely focused using an automatic focusing method (such as CDAF) to obtain the size information of the frontal posture of the target person's face corresponding to the first frame image and the optimal imaging focal length of the first frame image.
  • If the current frame image is not the first frame image, the change of the depth information is determined from the change between the size information of the frontal-pose image corresponding to the face of the target person in the current frame image and the size information of the frontal-pose image corresponding to the face of the target person in the previous frame image (for example, the immediately preceding frame); the focusing device is then controlled to focus according to the change of the depth information.
  • Controlling the focusing device to focus may include: controlling the focusing device to focus using a proportional-integral-derivative (PID) control system.
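  • A minimal proportional-integral-derivative sketch for driving the focusing device toward a target focal position; the gains and the interface are assumptions, since the document only states that a PID control system may be used.

```python
class PIDFocusController:
    """Minimal PID controller for the focus drive (illustrative assumptions)."""

    def __init__(self, kp=0.8, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_focal, current_focal, dt=1.0):
        """Return the focus-motor command for one control period."""
        error = target_focal - current_focal
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```
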
  • FIG. 7 is a schematic structural diagram of an embodiment of a photographing device of the present application. It should be noted that the photographing device of this embodiment can perform the steps in the above-mentioned photographing method. For a detailed description of the relevant content, please refer to the above-mentioned photographing method. The related content will not be repeated here.
  • the photographing device 100 includes: a memory 1 and a processor 2; the processor 2 and the memory 1 are connected through a bus.
  • the processor 2 may be a microcontroller unit, a central processing unit or a digital signal processor, and so on.
  • the memory 1 may be a Flash chip, a read-only memory, a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
  • the memory 1 is used to store a computer program; the processor 2 is used to execute the computer program and, when executing the computer program, implement the steps of the above shooting method, namely: obtaining the current posture of at least a partial part of the target person in the current frame image; determining, according to that current posture, the size information of the standard posture corresponding to the at least partial part of the target person in the current frame image; controlling the focusing device to focus according to the change between this size information and the size information of the corresponding standard posture in the previous frame image; and shooting according to the focusing result.
  • The processor, when executing the computer program, implements the following steps: obtaining the current posture of the face of the target person in the current frame image; determining, according to the current posture of the face of the target person in the current frame image, the size information of the frontal posture corresponding to the face of the target person in the current frame image; and controlling the focusing device to focus according to the change between the size information of the frontal posture corresponding to the face of the target person in the current frame image and the size information of the corresponding frontal posture in the previous frame image.
  • When the processor executes the computer program, the following step is implemented: acquiring the face image of the face of the target person in the current frame image.
  • The processor, when executing the computer program, implements the following steps: acquiring the in-focus face image of the focus area of the current frame image; and using the in-focus face image as the face image of the face of the target person in the current frame image.
  • The processor, when executing the computer program, implements the following steps: obtaining multiple face output areas of the current frame image through a face detection algorithm; matching the focus area of the current frame image against the multiple face output areas to obtain the face output area that matches the focus area; and taking the face image in the face output area that matches the focus area as the face image of the target person's face in the current frame image.
  • The processor, when executing the computer program, implements the following step: estimating the current posture of the face of the target person in the current frame image through a face posture estimation algorithm.
  • The processor, when executing the computer program, implements the following step: estimating the current posture of the face of the target person in the current frame image through a face pose estimation algorithm based on face key points.
  • The processor, when executing the computer program, implements the following step: estimating the current posture of the face of the target person in the current frame image through a deep-learning-based face pose estimation algorithm.
  • The processor, when executing the computer program, implements the following steps: detecting the key points of the face image of the target person's face in the current frame image; mapping, according to the current posture of the face of the target person in the current frame image, the key points of the face image of the target person's face in the current frame image to the key points under the frontal posture; and taking the minimum bounding rectangle that includes all the mapped key points as the size information of the frontal posture corresponding to the face of the target person in the current frame image.
  • The processor, when executing the computer program, implements the following steps: determining, according to the change between the size information of the frontal posture corresponding to the face of the target person in the current frame image and the size information of the frontal posture corresponding to the previous frame image, the change of the depth information corresponding to the face of the target person between the current frame image and the previous frame image; and controlling the focusing device to focus according to the change of the depth information.
  • The processor, when executing the computer program, implements the following steps: estimating the change trend and change amount of the focusing device according to the change of the depth information; controlling the focusing device to reach a preliminary position according to the change trend and change amount of the focusing device; and further controlling the focusing device that has reached the preliminary position to focus by means of automatic focusing.
  • The processor, when executing the computer program, implements the following steps: determining the optimal imaging focal length of the current frame image according to the size information of the frontal posture corresponding to the face of the target person in the current frame image, the size information of the frontal posture corresponding to the first frame image, and the optimal imaging focal length of the first frame image; and controlling the focusing device to focus according to the optimal imaging focal length of the current frame image.
  • The processor, when executing the computer program, implements the following step: obtaining, by means of automatic focusing, the size information of the frontal posture of the face of the target person corresponding to the first frame image and the optimal imaging focal length of the first frame image.
  • the automatic focusing method includes a contrast focusing method.
  • The present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; when the computer program is executed by a processor, the processor implements the photographing method described in any one of the above.
  • the computer-readable storage medium may be an internal storage unit of the above-mentioned photographing apparatus, such as a hard disk or a memory.
  • The computer-readable storage medium can also be an external storage device equipped on the photographing apparatus, such as a plug-in hard disk, a smart memory card, a secure digital (SD) card, a flash memory card, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A photographing method, a photographing apparatus, and a storage medium. The method comprises: acquiring the current pose of at least a local part of a target person in a current frame image (S101); determining, according to the current pose of at least the local part of the target person in the current frame image, size information of a corresponding standard pose of at least the local part of the target person in the current frame image (S102); controlling a focusing apparatus to focus according to a change between the size information of the corresponding standard pose of at least the local part of the target person in the current frame image and size information of a corresponding standard pose in a previous frame image (S103); and photographing according to the focusing result (S104).
PCT/CN2020/105290 2020-07-28 2020-07-28 Procédé de photographie, appareil de photographie et support d'enregistrement WO2022021093A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/105290 WO2022021093A1 (fr) 2020-07-28 2020-07-28 Procédé de photographie, appareil de photographie et support d'enregistrement
CN202080005968.7A CN113056907A (zh) 2020-07-28 2020-07-28 拍摄方法、拍摄装置及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/105290 WO2022021093A1 (fr) 2020-07-28 2020-07-28 Procédé de photographie, appareil de photographie et support d'enregistrement

Publications (1)

Publication Number Publication Date
WO2022021093A1 true WO2022021093A1 (fr) 2022-02-03

Family

ID=76509773

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105290 WO2022021093A1 (fr) 2020-07-28 2020-07-28 Procédé de photographie, appareil de photographie et support d'enregistrement

Country Status (2)

Country Link
CN (1) CN113056907A (fr)
WO (1) WO2022021093A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040145B (zh) * 2021-11-20 2022-10-21 深圳市音络科技有限公司 一种视频会议人像显示方法、系统、终端及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5375846B2 (ja) * 2011-02-04 2013-12-25 カシオ計算機株式会社 撮像装置、自動焦点調整方法、およびプログラム
CN103929583B (zh) * 2013-01-15 2018-08-10 北京三星通信技术研究有限公司 一种控制智能终端的方法及智能终端
US10311595B2 (en) * 2013-11-19 2019-06-04 Canon Kabushiki Kaisha Image processing device and its control method, imaging apparatus, and storage medium
CN105812652B (zh) * 2015-07-29 2019-11-26 维沃移动通信有限公司 一种终端的对焦方法及终端
CN108495028B (zh) * 2018-03-14 2019-11-29 维沃移动通信有限公司 一种摄像调焦方法、装置及移动终端
CN109657607B (zh) * 2018-12-17 2020-07-07 中新智擎科技有限公司 一种基于人脸识别的人脸目标测距方法、装置及存储介质
CN109961055A (zh) * 2019-03-29 2019-07-02 广州市百果园信息技术有限公司 人脸关键点检测方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060140612A1 (en) * 2004-12-28 2006-06-29 Fujinon Corporation Auto focus system
JP2008052225A (ja) * 2006-08-28 2008-03-06 Olympus Imaging Corp カメラ、合焦制御方法、プログラム
CN101339349A (zh) * 2007-07-04 2009-01-07 三洋电机株式会社 摄像装置以及自动聚焦控制方法
CN101387812A (zh) * 2007-09-13 2009-03-18 鸿富锦精密工业(深圳)有限公司 相机自动对焦系统及方法
CN105120149A (zh) * 2015-08-14 2015-12-02 深圳市金立通信设备有限公司 一种自动聚焦的方法以及终端
CN105227833A (zh) * 2015-09-09 2016-01-06 华勤通讯技术有限公司 连续对焦方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103105A (zh) * 2022-04-29 2022-09-23 北京旷视科技有限公司 拍摄控制方法、电子设备、存储介质及计算机程序产品
CN115103105B (zh) * 2022-04-29 2024-06-11 北京旷视科技有限公司 拍摄控制方法、电子设备、存储介质及计算机程序产品

Also Published As

Publication number Publication date
CN113056907A (zh) 2021-06-29

Similar Documents

Publication Publication Date Title
US10455141B2 (en) Auto-focus method and apparatus and electronic device
CN108496350B (zh) 一种对焦处理方法及设备
CN111566612A (zh) 基于姿势和视线的视觉数据采集系统
WO2022021093A1 (fr) Procédé de photographie, appareil de photographie et support d'enregistrement
CN108076278A (zh) 一种自动对焦方法、装置及电子设备
CN107301665A (zh) 具有可变焦光学摄像头的深度摄像头及其控制方法
US10491804B2 (en) Focus window determining method, apparatus, and device
US20210185231A1 (en) Image stabilization apparatus, method of controlling same, and storage medium
WO2020124517A1 (fr) Procédé de commande d'équipement de photographie, dispositif de commande d'équipement de photographie et équipement de photographie
CN110647782A (zh) 三维人脸重建与多姿态人脸识别方法及装置
WO2014008320A1 (fr) Systèmes et procédés pour la capture et l'affichage de panoramiques à mise au point flexible
WO2017101292A1 (fr) Procédé, dispositif et système de mise au point automatique
CN112446917A (zh) 一种姿态确定方法及装置
Perra et al. Adaptive eye-camera calibration for head-worn devices
EP3109695B1 (fr) Procédé et dispositif électronique pour mise au point automatique sur un objet en mouvement
CN115299031A (zh) 自动对焦方法及其相机系统
CN114463781A (zh) 确定触发手势的方法、装置及设备
CN114363522A (zh) 拍照方法及相关装置
CN108875488B (zh) 对象跟踪方法、对象跟踪装置以及计算机可读存储介质
US11902497B2 (en) Depth measurement
CN112419399B (zh) 一种图像测距方法、装置、设备和存储介质
US11546502B2 (en) Image processing apparatus and method of controlling the same
JP2022048077A (ja) 画像処理装置およびその制御方法
CN114565954A (zh) 一种轻量级人脸检测与跟踪方法
JP2012227830A (ja) 情報処理装置、その処理方法、プログラム及び撮像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20947423

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20947423

Country of ref document: EP

Kind code of ref document: A1