WO2021093637A1 - Focusing method and apparatus, electronic device, and computer readable storage medium - Google Patents


Info

Publication number
WO2021093637A1
WO2021093637A1 · PCT/CN2020/126139 · CN2020126139W
Authority
WO
WIPO (PCT)
Prior art keywords
phase difference
image
difference value
target
segmented image
Prior art date
Application number
PCT/CN2020/126139
Other languages
French (fr)
Chinese (zh)
Inventor
JIA Yuhu (贾玉虎)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021093637A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/672 Focus control based on electronic image sensor signals based on the phase difference signals

Definitions

  • This application relates to the field of imaging, and in particular to a focus tracking method and apparatus, an electronic device, and a computer-readable storage medium.
  • Focus tracking refers to the process of keeping the camera focused on the subject during subsequent shooting after the camera has focused on the subject.
  • Traditional focusing methods include phase detection auto focus (PDAF).
  • A focus tracking method and apparatus, an electronic device, and a computer-readable storage medium are provided.
  • A focus tracking method is applied to an electronic device.
  • the electronic device includes an image sensor and a lens.
  • the image sensor includes a plurality of pixel point groups arranged in an array, and each of the pixel point groups includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2; the method includes:
  • obtaining a target subject detection area where a target subject in a preview image is located; when the target subject moves, determining a target subject prediction area according to the target subject detection area and the movement data of the target subject, and obtaining a detection image corresponding to the target subject prediction area;
  • using the image sensor to obtain a phase difference value of the detection image, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; and
  • the lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • A focus tracking apparatus is applied to an electronic device.
  • the electronic device includes an image sensor and a lens.
  • the image sensor includes a plurality of pixel point groups arranged in an array, and each of the pixel point groups includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2; the apparatus includes:
  • a recognition module configured to obtain the target subject detection area where the target subject in the preview image is located;
  • a prediction module configured to determine a target subject prediction area according to the target subject detection area and movement data of the target subject when the target subject moves, and obtain a detection image corresponding to the target subject prediction area;
  • an acquiring module configured to acquire the phase difference value of the detection image by using the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; and
  • a focus tracking module configured to control the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • An electronic device includes a memory and one or more processors.
  • The memory stores computer-readable instructions.
  • When the computer-readable instructions are executed, the one or more processors perform the steps of the focus tracking method.
  • One or more computer-readable storage media store computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the focus tracking method.
  • Figure 1 is a schematic diagram of the principle of phase detection autofocus
  • FIG. 2 is a schematic diagram of phase detection pixel points arranged in pairs in the pixel points included in the image sensor;
  • FIG. 3 is a schematic diagram of a part of the structure of an image sensor in one or more embodiments
  • FIG. 4 is a schematic diagram of the structure of pixels in one or more embodiments.
  • FIG. 5 is a schematic structural diagram of an electronic device in one or more embodiments.
  • Fig. 6 is a schematic diagram of a filter set on a pixel point group in one or more embodiments
  • FIG. 7 is a flowchart of a focus tracking method in one or more embodiments.
  • FIG. 8 is a flowchart of the step of controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction, in one or more embodiments;
  • FIG. 9 is a flowchart of the step of obtaining the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction, in one or more embodiments;
  • FIG. 10 is a flowchart of the step of obtaining a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction, in one or more embodiments;
  • FIG. 11 is a flowchart of the step of determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level, in one or more embodiments;
  • FIG. 12 is a flowchart of the step of obtaining the phase difference value of the detection image, in one or more embodiments;
  • FIG. 13 is a flowchart of the step of acquiring the phase difference value in the first direction according to the phase relationship corresponding to the first segmented image and the second segmented image, and acquiring the phase difference value in the second direction according to the phase relationship corresponding to the third segmented image and the fourth segmented image, in one or more embodiments;
  • FIG. 14 is a structural block diagram of a focus tracking device in one or more embodiments.
  • Figure 15 is a block diagram of an electronic device in one or more embodiments.
  • The terms "first", "second", etc. used in this application may be used herein to describe various elements, but these elements are not limited by these terms; these terms are only used to distinguish one element from another.
  • For example, the first direction may be referred to as the second direction, and the second direction may be referred to as the first direction. Both the first direction and the second direction are directions, but they are not the same direction.
  • Phase detection auto focus (PDAF) is a relatively common auto focus technology.
  • FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF).
  • M1 is the position of the image sensor when the electronic device is in the in-focus state, where the in-focus state refers to the state of successful focusing.
  • When the image sensor is at the M1 position, the imaging light g reflected by the object W toward the lens Lens in different directions converges on the image sensor; that is, the imaging light g reflected by the object W toward the lens in different directions is imaged at the same position on the image sensor, and at this time the image on the image sensor is clear.
  • M2 and M3 are the possible positions of the image sensor when the electronic device is not in focus.
  • When the image sensor is at the M2 position or the M3 position, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at different positions.
  • Referring to Figure 1, when the image sensor is at the M2 position, the imaging light g reflected by the object W toward the lens in different directions is imaged at position A and position B respectively.
  • When the image sensor is at the M3 position, the imaging light g reflected by the object W toward the lens in different directions is imaged at position C and position D respectively. At this time, the image on the image sensor is not clear.
  • In PDAF technology, the difference in the positions of the images formed on the image sensor by the imaging light entering the lens from different directions can be obtained.
  • For example, the difference between position A and position B, or the difference between position C and position D, can be obtained; after obtaining this difference, the defocus distance can be obtained from the difference together with the geometric relationship between the lens and the image sensor in the camera.
  • The so-called defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the electronic device can focus according to the obtained defocus distance.
  • When in focus, the calculated PD (phase difference) value is 0.
  • The larger the calculated PD value, the farther the sensor is from the in-focus position; the smaller the value, the closer it is to the in-focus position.
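The linear mapping between PD value and defocus distance described above can be sketched as follows. This is only an illustration: the function name and the slope value are our own, and the actual coefficient comes from per-module calibration.

```python
def defocus_from_pd(pd_value: float, slope: float) -> float:
    """Map a phase difference (PD) value to a defocus distance.

    The text describes an approximately linear relationship whose slope is
    obtained by calibration; `slope` is a hypothetical calibrated
    coefficient in defocus units per PD unit.
    """
    return slope * pd_value

# A PD value of 0 means the sensor is already at the in-focus position.
print(defocus_from_pd(0.0, 12.5))   # 0.0
# A larger PD magnitude maps to a position farther from focus.
print(defocus_from_pd(2.0, 12.5))   # 25.0
```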
  • phase detection pixel points may be provided in pairs in the pixel points included in the image sensor.
  • For example, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A may be provided in the image sensor, in which one phase detection pixel point is shielded on the left (Left Shield) and the other phase detection pixel point is shielded on the right (Right Shield).
  • For the phase detection pixel point shielded on the left, only the right part of the imaging beam directed at it can be imaged on its photosensitive part (that is, the part that is not shielded).
  • For the phase detection pixel point shielded on the right, only the left part of the imaging beam directed at it can be imaged on its photosensitive part. In this way, the imaging beam is divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam.
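Comparing the images formed by the left and right parts of the beam amounts to finding the shift that best aligns them. A minimal sketch of such a shift search follows (the function name is ours; real PDAF pipelines add sub-pixel interpolation and confidence estimation):

```python
import numpy as np

def phase_difference(left: np.ndarray, right: np.ndarray, max_shift: int = 8) -> int:
    """Estimate the phase difference between the left- and right-beam images.

    Slides one 1-D signal against the other and picks the shift with the
    smallest sum of absolute differences (SAD).
    """
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s)           # candidate alignment
        cost = np.abs(left - shifted).sum()   # SAD matching cost
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A synthetic edge shifted by 3 pixels yields a phase difference of 3.
left = np.zeros(64)
left[30:] = 1.0
right = np.roll(left, -3)
print(phase_difference(left, right))   # 3
```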
  • However, the phase detection pixel points set in the image sensor are usually sparse, so only the horizontal phase difference can be obtained through them, and the phase difference of a horizontal-texture scene cannot be calculated.
  • In such scenes the calculated PD value is misleading, and the correct result cannot be obtained.
  • For example, when the shooting scene is a horizontal line, left and right images will still be obtained according to the PD characteristics, but the PD value cannot be calculated from them.
  • Therefore, an imaging component is provided in the embodiments of this application, which can be used to detect the phase difference value in the first direction and the phase difference value in the second direction; for a horizontal-texture scene, the phase difference value in the second direction can be used to achieve focusing.
  • the present application provides an imaging assembly.
  • the imaging component includes an image sensor.
  • The image sensor may be a complementary metal-oxide-semiconductor (CMOS) image sensor, a charge-coupled device (CCD), a quantum thin film sensor, or an organic sensor.
  • Fig. 3 is a schematic diagram of a part of the image sensor in one of the embodiments.
  • the image sensor 300 includes a plurality of pixel point groups Z arranged in an array, and each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to a photosensitive unit.
  • the multiple pixels include M*N pixels, where both M and N are natural numbers greater than or equal to 2.
  • Each pixel point D includes a plurality of sub-pixel points d arranged in an array. That is, each photosensitive unit can be composed of a plurality of photosensitive elements arranged in an array.
  • the photosensitive element is an element that can convert light signals into electrical signals. In one of the embodiments, the photosensitive element may be a photodiode.
  • each pixel point group Z includes 4 pixel points D arranged in a 2*2 array, and each pixel point may include 4 sub-pixel points d arranged in a 2*2 array.
  • Each pixel point D includes 2*2 photodiodes, and the 2*2 photodiodes are arranged correspondingly to the 4 sub-pixel points d arranged in a 2*2 array.
  • Each photodiode is used to receive optical signals and perform photoelectric conversion, thereby converting the optical signals into electrical signals for output.
  • The 4 sub-pixel points d included in each pixel point D correspond to the same color filter, so each pixel point D corresponds to one color channel, such as the red channel R, the green channel G, or the blue channel B.
  • Sub-pixel point 1 and sub-pixel point 2 can be combined, and sub-pixel point 3 and sub-pixel point 4 can be combined, to form a pair of PD pixels in the up-and-down direction; horizontal edges can then be detected to obtain the phase difference value in the second direction, that is, the PD value (phase difference value) in the vertical direction.
  • Sub-pixel point 1 and sub-pixel point 3 can be combined, and sub-pixel point 2 and sub-pixel point 4 can be combined, to form a pair of PD pixels in the left-and-right direction; vertical edges can then be detected to obtain the phase difference value in the first direction, that is, the PD value in the horizontal direction.
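The sub-pixel combination above can be sketched directly. Here `pixel` is a 2x2 array laid out as [[sub1, sub2], [sub3, sub4]]; the function name and return layout are our own illustration of the scheme described in the text.

```python
import numpy as np

def pd_pairs(pixel: np.ndarray):
    """Combine the 2*2 sub-pixels of one pixel point into PD pixel pairs.

    up/down pair   -> vertical PD (detects horizontal edges)
    left/right pair -> horizontal PD (detects vertical edges)
    """
    up = pixel[0, 0] + pixel[0, 1]      # sub-pixel 1 + sub-pixel 2
    down = pixel[1, 0] + pixel[1, 1]    # sub-pixel 3 + sub-pixel 4
    left = pixel[0, 0] + pixel[1, 0]    # sub-pixel 1 + sub-pixel 3
    right = pixel[0, 1] + pixel[1, 1]   # sub-pixel 2 + sub-pixel 4
    return (up, down), (left, right)

(u, d), (l, r) = pd_pairs(np.array([[1.0, 2.0], [3.0, 4.0]]))
print((u, d), (l, r))   # (3.0, 7.0) (4.0, 6.0)
```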
  • Fig. 5 is a schematic structural diagram of an electronic device in one of the embodiments.
  • the electronic device includes a micro lens 50, a filter 52, and an imaging component 54.
  • the micro lens 50, the filter 52 and the imaging component 54 are sequentially located on the incident light path, that is, the micro lens 50 is disposed on the filter 52, and the filter 52 is disposed on the imaging component 54.
  • the filter 52 can include three types of red, green, and blue, and can only transmit light of corresponding wavelengths of red, green, and blue, respectively.
  • One filter 52 is arranged on one pixel.
  • the imaging component 54 includes the image sensor in FIG. 3.
  • The micro lens 50 is used to receive incident light and transmit it to the filter 52. After the filter 52 filters the incident light, the filtered light is incident on the imaging component 54 on a pixel basis.
  • The photosensitive unit in the image sensor converts the light incident from the filter 52 into a charge signal through the photoelectric effect, and generates a pixel signal corresponding to the charge signal.
  • The charge signal corresponds to the received light intensity.
  • FIG. 6 is a schematic diagram of a filter set on the pixel point group in one of the embodiments.
  • The pixel point group Z includes 4 pixel points D arranged in two rows and two columns. The color channel of the pixel point in the first row and first column is green, that is, its filter is a green filter; the color channel of the pixel point in the first row and second column is red, that is, its filter is a red filter; the color channel of the pixel point in the second row and first column is blue, that is, its filter is a blue filter; the color channel of the pixel point in the second row and second column is green, that is, its filter is a green filter.
  • Fig. 7 is a flowchart of a focus tracking method in one of the embodiments. As shown in FIG. 7, the focus tracking method includes steps 702 to 708.
  • Step 702 Obtain the target subject detection area where the target subject in the preview image is located.
  • the preview image refers to the image obtained after the camera is focused.
  • the subject refers to various objects, such as people, flowers, cats, dogs, cows, blue sky, white clouds, backgrounds, etc.
  • the target subject refers to the subject in need, which can be selected according to needs.
  • The target subject detection area may be an area outlined according to the contour of the target subject, or may be a rectangular or circular frame surrounding the target subject. It should be noted that the shape of the target subject area is not limited, as long as the area includes most of the target subject.
  • a camera device of an electronic device may be used to focus and obtain a preview image, and subject detection of the preview image may be performed to obtain a target subject detection area including the target subject.
  • Focusing refers to the process of adjusting the focal length to make the image of the photographed object clear.
  • the focal length refers to the distance from the optical center of the lens in the camera to the focal point of light gathering.
  • Salient object detection refers to automatically processing regions of interest when facing a scene and selectively ignoring regions that are not of interest.
  • the region of interest is referred to as the target subject detection region.
  • the subject detection model is obtained by pre-collecting a large amount of training data, and inputting the training data into the subject detection model including the initial network weight for training.
  • the subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
  • Step 704 When the target subject moves, determine the target subject prediction area according to the target subject detection area and the movement data of the target subject, and obtain a detection image corresponding to the target subject prediction area.
  • the movement data of the target subject is data such as the speed, direction of movement, and trajectory of the target subject.
  • the movement data of the target subject can be obtained by using a trained neural network model.
  • The target subject prediction area is the predicted area where the target subject will be at the next time sequence.
  • the detected image refers to an image including the target subject collected by the imaging device using the target subject prediction area as the focus area.
  • When the target subject is a movable subject, the movement of the target subject can be detected and the focus automatically tracked. The target subject prediction area corresponding to the next time sequence is predicted according to the motion data of the target subject and the current target subject detection area.
  • For example, a first image and a second image including the same target subject can be input into the trained neural network model, and the model can predict, based on the different movement data of the target subject in the first image and the second image, the movement data of the target subject at the next time sequence and the target subject prediction area.
  • Alternatively, a first image including the moving target can be input, where the first image carries the target subject detection area corresponding to the current time sequence and the motion data of the target subject, and the network model can output a second image that carries the target subject prediction area and the motion data of the target subject corresponding to the next time sequence.
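The data flow of this prediction step can be illustrated with a much simpler stand-in. The patent uses a trained prediction network; the constant-velocity model, function name, and tuple layout below are our own assumptions, shown only to make the inputs and outputs concrete.

```python
def predict_subject_area(detection_area, velocity):
    """Predict the target subject area for the next time sequence.

    detection_area: (x, y, w, h) of the current target subject detection area.
    velocity: (vx, vy) movement data in pixels per frame.
    A constant-velocity extrapolation stands in for the trained network.
    """
    x, y, w, h = detection_area
    vx, vy = velocity
    return (x + vx, y + vy, w, h)

# A subject at (100, 50) moving 10 px right and 5 px down per frame:
print(predict_subject_area((100, 50, 40, 40), (10, 5)))   # (110, 55, 40, 40)
```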
  • the electronic device obtains the detection image according to the pixel information of the pixel points included in each pixel point group in the image sensor.
  • The sub-pixel points included in the image sensor are photosensitive elements that can convert light signals into electrical signals.
  • The intensity of the light signal received by a sub-pixel point can be obtained according to the electrical signal output by that sub-pixel point, and the pixel information of the sub-pixel point can be obtained according to the intensity of the received light signal.
  • Step 706 Obtain a phase difference value of the detection image, where the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction.
  • the first direction and the second direction form a preset angle.
  • the phase difference value of the detection image is acquired, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction.
  • The first direction and the second direction form a preset angle, which may be 30°, 40°, 45°, 60°, or another angle. It may also be 90°; that is, when the phase difference value in the first direction refers to the phase difference value in the horizontal direction, the phase difference value in the second direction refers to the phase difference value in the vertical direction.
  • Step 708 Control the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • According to the phase difference value in the first direction and the phase difference value in the second direction, a target phase difference value that has a mapping relationship with the defocus distance value can be obtained.
  • Because the correspondence between the target phase difference value and the target defocus distance value can be obtained through calibration, the target defocus distance can then be obtained from the target phase difference value.
  • The lens is controlled to continuously focus on the moving target subject according to the target defocus distance value. Focus tracking refers to the process of maintaining focus on the target subject in subsequent shooting after the lens has focused on the target subject, so that the target subject in the acquired detection image remains clearly imaged.
  • the focus tracking method provided in this embodiment obtains the target subject detection area in the preview image where the target subject is located.
  • the target subject prediction area is determined according to the target subject detection area and the movement data of the target subject, and the detection image corresponding to the target subject prediction area is acquired.
  • the phase difference value of the detection image is acquired, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction.
  • the first direction and the second direction form a preset angle.
  • the lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • the step of acquiring the detection image corresponding to the target subject prediction area includes: controlling the movement of the lens so that the focus is on the center of the target subject prediction area and collecting the detection image corresponding to the target subject prediction area.
  • the lens is controlled to move so that the focus is on the center of the target subject prediction area and the detection image corresponding to the target subject prediction area is collected, and the detection image includes the target subject.
  • The process is as follows: when the target subject prediction area is rectangular, the focus is on the center of the rectangle; when the target subject prediction area is circular, the focus is on the center of the circle; when the target subject prediction area is an irregular pattern, the focus is on the center of gravity of the target subject prediction area.
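The three cases above can be written out directly. The region schema (a dict with a `shape` key) is our own illustration, not from the patent:

```python
def focus_point(region):
    """Pick the focus point of a target subject prediction area.

    Rectangle -> center of the rectangle; circle -> center of the circle;
    irregular region -> centroid (center of gravity) of its points.
    """
    if region["shape"] == "rect":
        x, y, w, h = region["bounds"]
        return (x + w / 2, y + h / 2)
    if region["shape"] == "circle":
        return region["center"]
    # Irregular region: mean of the boundary points as the center of gravity.
    pts = region["points"]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

print(focus_point({"shape": "rect", "bounds": (0, 0, 10, 20)}))           # (5.0, 10.0)
print(focus_point({"shape": "circle", "center": (3, 4)}))                 # (3, 4)
print(focus_point({"shape": "poly", "points": [(0, 0), (4, 0), (2, 6)]})) # (2.0, 2.0)
```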
  • Determining the target subject prediction area according to the target subject detection area and the motion data of the target subject includes: inputting a first image into the prediction network model, the first image carrying the target subject detection area and the motion data of the target subject; and obtaining a second image output by the prediction network model, the second image being marked with the target subject prediction area.
  • the predictive network model refers to a network model that has been trained.
  • a first image including a moving target is input.
  • the first image includes: the current time sequence corresponding to the target subject detection area and the target subject's motion data.
  • the network model can output a second image, the second image is marked with the target subject prediction area corresponding to the next time sequence, and the motion data of the target subject can be obtained from the second image.
  • the predictive network model is a network model established based on a recurrent neural network algorithm.
  • Recurrent neural network has memory, parameter sharing and Turing completeness, so it has certain advantages when learning the nonlinear characteristics of the sequence.
  • Recurrent neural networks are applied in natural language processing (NLP) fields such as speech recognition, language modeling, and machine translation, and are also used in various time series predictions.
  • In one or more embodiments, controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction includes steps 802 and 804.
  • Step 802 Obtain the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction.
  • the target phase difference value is determined according to the magnitude relationship between the phase difference value in the first direction and the phase difference value in the second direction or the carried confidence information.
  • There is a mapping relationship between the target phase difference value and the target defocus distance; the target phase difference value is input into a function used to characterize the mapping relationship, and the target defocus distance can be obtained.
  • Step 804 Control the lens movement of the electronic device according to the target defocus distance to continuously focus on the moving target subject.
  • The target defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the electronic device can control the lens to move to the in-focus position according to the obtained target defocus distance, so as to track the focus.
  • the above-mentioned focus tracking method may further include: generating a depth value according to the target defocus distance value.
  • The image distance in the in-focus state can be calculated from the target defocus distance value, the object distance can be obtained according to the image distance and the focal length, and the object distance is the depth value.
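The image-distance-to-depth step above follows the thin-lens equation 1/f = 1/u + 1/v. A sketch of that relationship (the function name is ours; units are arbitrary but must be consistent):

```python
def depth_from_defocus(focal_length: float, in_focus_image_dist: float) -> float:
    """Recover the object distance (depth) from the in-focus image distance.

    Solves the thin-lens equation 1/f = 1/u + 1/v for the object distance u,
    given focal length f and in-focus image distance v.
    """
    f, v = focal_length, in_focus_image_dist
    return 1.0 / (1.0 / f - 1.0 / v)

# f = 50, v = 60  ->  1/u = 1/50 - 1/60 = 1/300, so the depth u = 300.
print(depth_from_defocus(50.0, 60.0))   # 300.0 (up to rounding)
```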
  • the step of obtaining the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction includes: step 902 and step 904.
  • Step 902 Obtain a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction.
  • the target phase difference value can be determined according to the magnitude relationship between the phase difference value in the first direction and the phase difference value in the second direction or the confidence information, and the target defocus distance can be obtained according to the target phase difference value.
  • The confidence level of the phase difference value in the first direction and the confidence level of the phase difference value in the second direction can be obtained, a phase difference value can be selected from the phase difference value in the first direction and the phase difference value in the second direction as the target phase difference value, and the corresponding target defocus distance value can then be obtained according to the mapping relationship between the phase difference value and the defocus distance value.
  • Step 904 Obtain the target defocus distance according to the target phase difference value.
  • There is a mapping relationship between the target phase difference value and the target defocus distance; the target phase difference value is input into a function used to characterize the mapping relationship, and the target defocus distance can be obtained.
  • obtaining the target defocus distance according to the target phase difference value includes: calculating the target defocus distance according to the calibrated defocus function and the target phase difference value, and the calibrated defocus function is used to characterize the target phase difference The relationship between the value and the target defocus distance.
  • The calibration process of the correspondence between the target phase difference value and the defocus distance value includes: dividing the effective focus stroke of the camera module into N (N ≥ 3) equal parts, that is, parts of size (near-focus DAC − far-focus DAC)/N, so that the focus range of the motor is covered; focusing at each focus DAC position (the DAC value can range from 0 to 1023) and recording the phase difference at the current focus DAC position; and, after completing the motor focus stroke, comparing the PD values of the group of N focus DACs, generating N similar ratios K, and fitting the two-dimensional data composed of DAC and PD to obtain a straight line with slope K.
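The final fitting step above can be sketched as an ordinary least-squares line fit over the recorded (PD, DAC) pairs. The function name and the synthetic data are our own; a real calibration uses the N samples recorded over the motor stroke.

```python
import numpy as np

def calibrate_defocus_slope(dac_positions, pd_values):
    """Fit the straight line relating PD value to focus DAC position.

    Given PD values recorded at N focus DAC positions, fits
    DAC = K * PD + b by least squares and returns (K, b).
    """
    pd = np.asarray(pd_values, dtype=float)
    dac = np.asarray(dac_positions, dtype=float)
    k, intercept = np.polyfit(pd, dac, 1)   # degree-1 (linear) fit
    return k, intercept

# Perfectly linear synthetic data: DAC = 100 * PD + 512.
pd = [-2.0, -1.0, 0.0, 1.0, 2.0]
dac = [312.0, 412.0, 512.0, 612.0, 712.0]
k, b = calibrate_defocus_slope(dac, pd)
print(k, b)
```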
  • the step of obtaining the target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction includes: step 1002 to step 1006.
  • Step 1002 Obtain a first confidence degree corresponding to the phase difference value in the first direction.
  • the phase difference value in the first direction carries information about the first confidence level, and the first confidence level is used to characterize the accuracy of the phase difference value in the first direction.
  • Step 1004 Obtain a second confidence level corresponding to the phase difference value in the second direction.
  • the phase difference value in the second direction carries information about the second confidence level, and the second confidence level is used to characterize the accuracy of the phase difference value in the second direction.
  • Step 1006 Determine the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level.
  • the target phase difference value is determined according to the magnitude relationship between the first confidence level and the second confidence level. For example, when the confidence level of the phase difference value in the first direction is greater than that of the phase difference value in the second direction, the phase difference value in the first direction is identified more accurately than the phase difference value in the second direction, and the phase difference value in the first direction can be selected as the target phase difference value.
  • when the confidence level of the phase difference value in the first direction is less than that of the phase difference value in the second direction, the phase difference value in the first direction is identified less accurately than the phase difference value in the second direction, and the phase difference value in the second direction can be selected as the target phase difference value.
  • when the confidence level of the phase difference value in the first direction is equal to that of the phase difference value in the second direction, the two phase difference values are identified with equal accuracy, and the larger of the phase difference value in the first direction and the phase difference value in the second direction can be selected as the target phase difference value.
  • the step of determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level includes: step 1102 to step 1106.
  • Step 1102 When the first degree of confidence is greater than the second degree of confidence, use the phase difference value in the first direction corresponding to the first degree of confidence as the target phase difference value.
  • the phase difference value in the first direction is selected, the corresponding defocus distance value is obtained according to the phase difference value in the first direction, and the moving direction is confirmed to be horizontal.
  • Step 1104 When the second degree of confidence is greater than the first degree of confidence, use the phase difference value in the second direction corresponding to the second degree of confidence as the target phase difference value.
  • the phase difference value in the second direction is selected, the corresponding defocus distance value is obtained according to the phase difference value in the second direction, and the moving direction is determined to be vertical.
  • Step 1106 When the first degree of confidence is equal to the second degree of confidence, both the phase difference in the first direction and the phase difference in the second direction are used as the target phase difference value.
  • the defocus distance value in the horizontal direction may be determined according to the phase difference value in the first direction, and the defocus distance value in the vertical direction may be determined according to the phase difference value in the second direction. The lens first moves according to the defocus distance value in the horizontal direction and then according to the defocus distance value in the vertical direction, or first moves according to the defocus distance value in the vertical direction and then according to the defocus distance value in the horizontal direction.
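Steps 1102 to 1106 amount to a three-way comparison of the two confidence levels. A minimal sketch (the function name and return convention are illustrative, not from this application):

```python
def select_target_phase_difference(pd_first, conf_first, pd_second, conf_second):
    """Choose the target phase difference value(s) and the lens movement
    direction from the per-direction confidence levels (steps 1102-1106)."""
    if conf_first > conf_second:
        return pd_first, "horizontal"        # step 1102
    if conf_second > conf_first:
        return pd_second, "vertical"         # step 1104
    # step 1106: equal confidence - keep both values; the lens moves by one
    # defocus distance and then by the other.
    return (pd_first, pd_second), "both"

print(select_target_phase_difference(3.2, 0.9, 1.1, 0.4))  # → (3.2, 'horizontal')
```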
  • for scenes with horizontal texture, the PD pixel pairs in the vertical direction can be compared to calculate the phase difference value in the second direction (the vertical direction); the defocus distance value is calculated according to the phase difference value in the second direction, and the lens movement is then controlled according to the defocus distance value in the vertical direction to achieve focus. For scenes with vertical texture, the PD pixel pairs in the vertical direction cannot yield the phase difference value in the second direction.
  • the step of obtaining the phase difference value of the detection image includes: step 1202 and step 1204.
  • Step 1202 According to the first direction, the detected image is divided into a first segmented image and a second segmented image.
  • the phase difference value in the first direction is obtained according to the phase relationship corresponding to the first segmented image and the second segmented image.
  • the electronic device may perform segmentation processing on the target image along the line direction (the x-axis direction in the image coordinate system).
  • the target image can be divided along dividing lines, each of which is perpendicular to the direction of the row.
  • the first segmented image and the second segmented image obtained after segmenting the target image in the direction of the row can be referred to as the left image and the right image, respectively.
  • Step 1204 According to the second direction, the detected image is divided into a third segmented image and a fourth segmented image. Acquire the phase difference value in the second direction according to the phase relationship corresponding to the third segmented image and the fourth segmented image.
  • the electronic device may perform segmentation processing on the target image along the direction of the column (the y-axis direction in the image coordinate system).
  • the target image is divided along dividing lines, each of which is perpendicular to the direction of the column.
  • the third segmented image and the fourth segmented image obtained after segmenting the target image in the direction of the column can be referred to as the upper image and the lower image, respectively.
  • the first direction is the row direction
  • the second direction is the column direction.
  • the step of dividing the detected image into a first segmented image and a second segmented image according to the first direction includes: performing segmentation processing on the detected image according to the first direction to obtain multiple image regions, each of which includes a row of pixels in the detected image; and acquiring multiple first segmented image areas and multiple second segmented image areas from the multiple image regions.
  • the first segmented image area includes even-numbered rows of pixels in the detected image
  • the second segmented image area includes odd-numbered rows of pixels in the detected image.
  • a plurality of first segmented image areas are used to stitch together a first segmented image
  • a plurality of second segmented image areas are used to form a second segmented image.
  • the first direction is the row direction
  • the detection image is segmented according to the first direction to obtain multiple image regions, and each image region includes a row of pixels in the detection image.
  • the first segmented image area refers to the pixels of the even-numbered rows in the detection image
  • the second segmented image area refers to the pixels of the odd-numbered rows in the detection image.
  • the multiple first segmented image areas are sequentially stitched according to their positions in the detected image to obtain the first segmented image
  • the multiple second segmented image areas are sequentially stitched according to their positions in the detected image to obtain the second segmented image.
  • the step of dividing the detected image into a third segmented image and a fourth segmented image according to the second direction includes: performing segmentation processing on the detected image according to the second direction to obtain a plurality of image regions, each of which includes a column of pixels in the detected image; acquiring multiple third segmented image areas and multiple fourth segmented image areas from the multiple image regions, where the third segmented image area includes even-numbered columns of pixels in the detected image and the fourth segmented image area includes odd-numbered columns of pixels in the detected image; and splicing the plurality of third segmented image regions into a third segmented image and the plurality of fourth segmented image regions into a fourth segmented image.
  • the second direction is the column direction
  • the detection image is segmented according to the column direction to obtain multiple image regions
  • each image region includes a column of pixels in the detection image.
  • the third segmented image area refers to the pixels of the even-numbered columns in the detection image, and
  • the fourth segmented image area refers to the pixels of the odd-numbered columns in the detection image.
  • the multiple third segmented image areas are sequentially stitched according to their positions in the detected image to obtain the third segmented image
  • the multiple fourth segmented image areas are sequentially stitched according to their positions in the detected image to obtain the fourth segmented image.
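The even/odd splitting in both directions comes down to strided slicing. In the sketch below, "even-numbered" follows the 1-based row/column numbering used above, so even-numbered rows sit at 0-based indices 1, 3, 5, …; the array contents are illustrative:

```python
import numpy as np

def split_rows(detect_img):
    """First segmented image = even-numbered rows, second segmented image =
    odd-numbered rows (1-based numbering, as in the description)."""
    return detect_img[1::2, :], detect_img[0::2, :]

def split_cols(detect_img):
    """Third segmented image = even-numbered columns, fourth segmented
    image = odd-numbered columns (1-based numbering)."""
    return detect_img[:, 1::2], detect_img[:, 0::2]

img = np.arange(16).reshape(4, 4)
first, second = split_rows(img)
third, fourth = split_cols(img)
print(first.shape, third.shape)  # → (2, 4) (4, 2)
```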
  • the steps of obtaining the phase difference value in the first direction according to the phase relationship corresponding to the first segmented image and the second segmented image, and obtaining the phase difference value in the second direction according to the phase relationship corresponding to the third segmented image and the fourth segmented image, include: step 1302 and step 1304.
  • Step 1302 Determine the phase difference value of the pixels that match each other according to the position difference of the pixels that match each other in the first segmented image and the second segmented image.
  • the phase difference value in the first direction is determined according to the phase difference value of the pixels that match each other.
  • the phase difference value in the first direction can be determined according to the phase difference of the pixel a and the pixel b that are matched with each other.
  • Step 1304 Determine the phase difference value of the pixels that match each other according to the position difference of the pixels that match each other in the third segmented image and the fourth segmented image.
  • the phase difference value in the second direction is determined according to the phase difference value of the pixels that match each other.
  • the phase difference value in the second direction can be determined according to the phase difference of the pixel a and the pixel b that are matched with each other.
  • “Matched pixels” means that the pixel matrices composed of each pixel and its surrounding pixels are similar to each other.
  • the pixel a and the surrounding pixels in the first segmented image form a pixel matrix with 3 rows and 3 columns, and the pixel values of that matrix are:
  • the pixel b and the surrounding pixels in the second segmented image also form a pixel matrix with 3 rows and 3 columns, and the pixel values of that matrix are:
  • the two matrices are similar, and it can be considered that the pixel a and the pixel b match each other.
  • the difference between the pixel values of each pair of corresponding pixels in the two pixel matrices can be calculated, the absolute values of these differences are then added, and the result of the addition is used to determine whether the pixel matrices are similar; that is, if the result of the addition is less than a preset threshold, the pixel matrices are considered similar, otherwise they are considered dissimilar.
  • for example, compute the difference of 1 and 2, the difference of 15 and 15, the difference of 70 and 70, and so on, and then add the absolute values of the differences; the result of the addition is 3. If this result of 3 is less than the preset threshold, the two pixel matrices with 3 rows and 3 columns are considered similar.
  • Another way to judge whether the pixel matrices are similar is to use the Sobel convolution kernel calculation method or the Laplacian calculation method to extract edge features, and to judge whether the pixel matrices are similar by their edge features.
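The sum-of-absolute-differences test can be sketched in a few lines; the two 3×3 matrices and the threshold below are hypothetical values chosen so that the sum is 3, matching the example above:

```python
import numpy as np

def matrices_similar(m1, m2, threshold=10):
    """Element-wise subtract the two pixel matrices, sum the absolute
    differences (SAD), and compare the sum against a preset threshold."""
    return int(np.abs(m1.astype(int) - m2.astype(int)).sum()) < threshold

a = np.array([[1, 15, 70], [35, 60, 80], [90, 25, 40]])
b = np.array([[2, 15, 70], [36, 60, 80], [89, 25, 40]])
print(matrices_similar(a, b))  # SAD = 1 + 1 + 1 = 3, below the threshold → True
```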
  • the positional difference of mutually matched pixels refers to the difference between the position of the pixel in the first segmented image and the position of the pixel in the second segmented image.
  • the position difference between the pixel a and the pixel b that match each other refers to the difference between the position of the pixel a in the first segmented image and the position of the pixel b in the second segmented image.
  • mutually matched pixels correspond to the different images formed on the image sensor by imaging light entering the lens from different directions.
  • the pixel a in the first segmented image and the pixel b in the second segmented image match each other, where the pixel a may correspond to the image formed at position A in FIG. 1, and the pixel b may correspond to the image formed at position B in FIG. 1.
  • the phase difference of the matched pixels can be determined according to the position difference of the matched pixels.
  • the target image is obtained according to the brightness values of the pixels in the above pixel point group.
  • in this way, the phase difference values of matching pixels can be quickly determined, and the rich phase difference information obtained can improve the accuracy of the phase difference value and thus the accuracy and stability of focusing.
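Combining the matching test with the position difference gives a toy one-dimensional block-matching routine: slide a window from the first segmented image across the second segmented image and take the shift with the smallest SAD as the phase difference. The window size, search range, and images below are illustrative assumptions:

```python
import numpy as np

def phase_difference(left_img, right_img, row, col, win=1, max_shift=4):
    """Estimate the phase difference at (row, col) of the left (first
    segmented) image: find the horizontal shift minimising the SAD against
    the right (second segmented) image within +/- max_shift pixels."""
    ref = left_img[row - win:row + win + 1, col - win:col + win + 1].astype(int)
    best_shift, best_sad = 0, None
    for shift in range(-max_shift, max_shift + 1):
        c = col + shift
        if c - win < 0 or c + win + 1 > right_img.shape[1]:
            continue  # candidate window would fall outside the image
        cand = right_img[row - win:row + win + 1, c - win:c + win + 1].astype(int)
        sad = np.abs(ref - cand).sum()
        if best_sad is None or sad < best_sad:
            best_sad, best_shift = sad, shift
    return best_shift  # positional difference = phase difference in pixels

# Synthetic pair: the right image is the left image shifted by 2 pixels.
left = np.zeros((5, 12), dtype=int)
left[:, 4] = 100
right = np.roll(left, 2, axis=1)
print(phase_difference(left, right, row=2, col=4))  # → 2
```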
  • the embodiment of the present application provides a focus tracking device, which is applied to electronic equipment.
  • the focus tracking device includes: an identification module 1402, a prediction module 1404, an acquisition module 1406, and a focus tracking module 1408.
  • the recognition module 1402 is used to obtain the target subject detection area where the target subject in the preview image is located.
  • the preview image refers to the image obtained after the camera is focused.
  • the subject refers to various objects, such as people, flowers, cats, dogs, cows, blue sky, white clouds, backgrounds, etc.
  • the target subject refers to the subject in need, which can be selected according to needs.
  • the target subject detection area may be an area outlined according to the outline of the target subject, or may be a rectangular frame or a round frame surrounding the target subject. It should be noted that the shape of the target subject area is not limited, and the target subject area includes most of the target subject.
  • the recognition module 1402 is configured to use an electronic device to focus and obtain a preview image, and perform subject detection on the preview image to obtain a target subject detection area including the target subject.
  • Focusing refers to the process of adjusting the focal length to make the image of the photographed object clear.
  • the focal length refers to the distance from the optical center of the lens in the camera to the focal point of the light.
  • Salient object detection refers to automatically processing regions of interest when facing a scene and selectively ignoring regions that are not of interest.
  • the region of interest is referred to as the target subject detection region.
  • the subject detection model is obtained by pre-collecting a large amount of training data, and inputting the training data into the subject detection model including the initial network weight for training.
  • the subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
  • the prediction module 1404 is configured to determine the target subject prediction area according to the target subject detection area and the movement data of the target subject when the target subject moves, and obtain a detection image corresponding to the target subject prediction area.
  • the movement data of the target subject is data such as the speed, direction of movement, and trajectory of the target subject.
  • the movement data of the target subject can be obtained by using a trained neural network model.
  • the target subject prediction area is the area where the next time-series target subject is predicted.
  • the detected image refers to an image including the target subject collected by the electronic device using the target subject prediction area as the focus area.
  • the prediction module 1404 is configured to detect the movement of the target subject when the target subject is a movable subject, and to automatically perform focus tracking: according to the motion data of the target subject and the current target subject detection area, the target subject prediction area corresponding to the next time sequence is predicted. For example, a first image and a second image that include the same target subject may be input into the trained neural network model, and the model can predict the movement data of the target subject at the next time sequence, together with the target subject prediction area, from the differing movement data of the target subject in the two images.
  • alternatively, a first image including a moving target may be input, the first image carrying the target subject detection area corresponding to the current time sequence and the motion data of the target subject; the corresponding network model can then output a second image that carries the target subject prediction area and the motion data of the target subject corresponding to the next time sequence.
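The application's predictor is a trained neural network; as a stand-in that only illustrates what "predicting the next-time-sequence area from motion data" means, here is a constant-velocity sketch (the tuple layout and all names are our assumptions, not from this application):

```python
def predict_subject_area(detection_area, velocity, dt=1.0):
    """Shift a detection area (x, y, w, h) by velocity * dt to obtain the
    predicted area for the next time sequence, assuming linear motion."""
    x, y, w, h = detection_area
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt, w, h)

print(predict_subject_area((100, 50, 80, 120), (12, -4)))  # → (112.0, 46.0, 80, 120)
```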
  • the electronic device obtains the detection image according to the pixel information of the pixel points included in each pixel point group in the image sensor.
  • the sub-pixel included in the image sensor is a photosensitive element that can convert light signals into electrical signals.
  • the intensity of the light signal received by the sub-pixel can be obtained according to the electrical signal output by the sub-pixel.
  • the pixel information of the sub-pixel point can be obtained from the intensity of the received light signal.
  • the obtaining module 1406 is used to obtain the phase difference value of the detection image, the phase difference value including the phase difference value in the first direction and the phase difference value in the second direction.
  • the first direction and the second direction form a preset angle.
  • the acquiring module 1406 is configured to acquire the phase difference value of the detection image, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction.
  • the preset angle formed by the first direction and the second direction may be 30°, 40°, 45°, 60°, or another angle. It may also be 90°; that is, when the phase difference value in the first direction refers to the phase difference value in the horizontal direction, the phase difference value in the second direction refers to the phase difference value in the vertical direction.
  • the focus tracking module 1408 is configured to control the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • the focus tracking module 1408 is configured to obtain the target phase difference value that is in a mapping relationship with the defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction.
  • the correspondence between the target phase difference value and the defocus distance value can be obtained by calibration, and the target defocus distance can then be obtained from the phase difference value in the first direction, the phase difference value in the second direction, and the target phase difference value.
  • the lens is controlled to continuously focus on the moving target subject according to the target defocus distance value. Focus tracking refers to the process of keeping the focus on the target subject during subsequent shooting after the lens has focused on the target subject, so that the target subject remains clearly imaged in the acquired detection images.
  • the focus tracking device acquires the target subject detection area in the preview image where the target subject is located.
  • the target subject prediction area is determined according to the target subject detection area and the movement data of the target subject, and the detection image corresponding to the target subject prediction area is acquired.
  • the phase difference value of the detection image is acquired, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction.
  • the first direction and the second direction form a preset angle.
  • the lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • the prediction module is used to control the movement of the lens so that the focus is at the center of the predicted region of the target subject, and the detection image corresponding to the predicted region of the target subject is collected based on the focus.
  • the prediction module is used to input the first image into the prediction network model, where the first image carries the information of the target subject detection area and the motion data of the target subject, and to obtain the second image output by the prediction network model, where the second image carries the information of the target subject prediction area.
  • the acquiring module is used to acquire the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction, and to control the lens movement of the electronic device according to the target defocus distance so as to continuously focus on the moving target subject.
  • the obtaining module is configured to obtain the target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction; and obtain the target defocus distance according to the target phase difference value.
  • the acquisition module is used to calculate the target defocus distance according to the calibrated defocus function and the target phase difference value
  • the calibrated defocus function is used to characterize the relationship between the target phase difference value and the target defocus distance
  • the acquiring module is configured to acquire a first confidence level corresponding to a phase difference value in a first direction; acquire a second confidence level corresponding to a phase difference value in a second direction; according to the first confidence level and the second confidence level The degree of relationship determines the target phase difference value.
  • the acquiring module is configured to use the phase difference value in the first direction corresponding to the first confidence level as the target phase difference value when the first confidence level is greater than the second confidence level;
  • to use the phase difference value in the second direction corresponding to the second confidence level as the target phase difference value when the second confidence level is greater than the first confidence level; and
  • to use both the phase difference value in the first direction and the phase difference value in the second direction as the target phase difference value when the first confidence level is equal to the second confidence level.
  • the acquisition module is used to divide the detected image into the first segmented image and the second segmented image according to the first direction.
  • the phase difference value in the first direction is obtained according to the phase relationship corresponding to the first segmented image and the second segmented image; the detected image is segmented into the third segmented image and the fourth segmented image according to the second direction. Acquire the phase difference value in the second direction according to the phase relationship corresponding to the third segmented image and the fourth segmented image.
  • the acquisition module is used to perform segmentation processing on the detection image according to the first direction to obtain multiple image areas, each image area including a row of pixels in the detection image; multiple first segmented image areas and multiple second segmented image areas are obtained from the multiple image areas, where the first segmented image area includes even-numbered rows of pixels in the detection image and the second segmented image area includes odd-numbered rows of pixels in the detection image; the multiple first segmented image areas are spliced into the first segmented image, and the multiple second segmented image areas are spliced into the second segmented image;
  • the detection image is also segmented according to the second direction to obtain multiple image areas, each image area including a column of pixels in the detection image; multiple third segmented image areas and multiple fourth segmented image areas are obtained from the multiple image areas, where the third segmented image area includes even-numbered columns of pixels in the detection image and the fourth segmented image area includes odd-numbered columns of pixels in the detection image; the multiple third segmented image areas are spliced into the third segmented image, and the multiple fourth segmented image areas are spliced into the fourth segmented image.
  • the acquisition module is configured to determine the phase difference value of the pixels that match each other according to the position difference of the pixels that match each other in the first segmented image and the second segmented image.
  • the phase difference value in the first direction is determined according to the phase difference values of the mutually matched pixels; the phase difference value of the mutually matched pixels is determined according to the position difference of the mutually matched pixels in the third segmented image and the fourth segmented image.
  • the phase difference value in the second direction is determined according to the phase difference value of the pixels that match each other.
  • the focus tracking device may be divided into different modules as needed to complete all or part of the functions of the focus tracking device.
  • Each module in the above-mentioned focus tracking device can be implemented in whole or in part by software, hardware, or a combination thereof.
  • the foregoing modules may be embedded in the form of hardware or independent of the processor in the electronic device, or may be stored in the memory of the electronic device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
  • An electronic device that includes a memory and one or more processors.
  • the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed by the one or more processors, the one or more processors execute the focus tracking method in each of the foregoing embodiments.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions, which, when executed by one or more processors, cause the processors to execute the focus tracking method in each of the foregoing embodiments.
  • an electronic device is provided.
  • the electronic device may be a device with a digital image capturing function.
  • the electronic device may be a smart phone or a tablet.
  • the internal structure diagram can be as shown in Figure 15.
  • the electronic device includes a processor and a memory connected through a system bus. Among them, the processor of the electronic device is used to provide calculation and control capabilities.
  • the memory of the electronic device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium may store an operating system and computer readable instructions.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the electronic device may also include a lens and an image sensor, where the lens may be composed of a set of lenses, and the image sensor may be a complementary metal oxide semiconductor (English: Complementary Metal Oxide Semiconductor, abbreviated: CMOS) image sensor, a charge-coupled device (English: Charge-coupled Device, abbreviated: CCD), a quantum thin film sensor, an organic sensor, or the like.
  • the image sensor may be connected to the processor through a bus, and the processor may implement a focus tracking method provided by the embodiment of the present application through a signal output from the image sensor to the processor.
  • FIG. 15 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the electronic device to which the solution of the present application is applied.
  • the specific electronic device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • an electronic device in an embodiment of the present application, includes a memory and one or more processors.
  • the memory stores computer-readable instructions.
  • when the one or more processors execute the computer-readable instructions, the following steps are implemented:
  • obtaining the target subject detection area where the target subject in the preview image is located.
  • the target subject prediction area is determined according to the target subject detection area and the movement data of the target subject, and the detection image corresponding to the target subject prediction area is acquired.
  • the phase difference value of the detection image is acquired, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction.
  • the first direction and the second direction form a preset angle.
  • the lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Abstract

A focusing method and apparatus, an electronic device and a computer-readable storage medium, the focusing method comprising: obtaining a target subject detection area in which a target subject is located in a preview image; when the target subject moves, determining a target subject prediction area according to the target subject detection area and movement data of the target subject, and obtaining a detection image corresponding to the target subject prediction area; using an image sensor to obtain phase differences of the detection image, the phase differences comprising a phase difference in a first direction and a phase difference in a second direction, the first direction forming a preset included angle with the second direction; and according to the phase difference in the first direction and the phase difference in the second direction, controlling a lens to continuously focus on the moving target subject.

Description

Focus tracking method and device, electronic equipment, and computer-readable storage medium

Cross-reference to related applications

This application claims priority to the Chinese patent application No. 201911101390.0, entitled "Focus tracking method and device, electronic equipment, and computer-readable storage medium" and filed with the Chinese Patent Office on November 12, 2019, the entire contents of which are incorporated herein by reference.

Technical field

This application relates to the field of imaging, and in particular to a focus tracking method and device, electronic equipment, and a computer-readable storage medium.
Background

With the development of electronic device technology, more and more users capture images with electronic devices. For moving objects, in order to ensure that the captured images are clear, it is usually necessary to perform focus tracking with the camera module of the electronic device, that is, to continuously adjust the distance between the lens and the image sensor so that the subject always stays on the focal plane. Focus tracking refers to the process of maintaining focus on a subject during subsequent shooting after the target camera has focused on that subject. Traditional focusing methods include phase detection auto focus (English: phase detection auto focus; abbreviation: PDAF).
Summary

According to various embodiments disclosed in the present application, a focus tracking method, device, electronic device, and computer-readable storage medium are provided.

A focus tracking method is applied to an electronic device. The electronic device includes an image sensor and a lens. The image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The method includes:

acquiring the target subject detection area where the target subject in the preview image is located;

when the target subject moves, determining a target subject prediction area according to the target subject detection area and the movement data of the target subject, and acquiring a detection image corresponding to the target subject prediction area;

acquiring, by the image sensor, a phase difference value of the detection image, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; and

controlling the lens, according to the phase difference value in the first direction and the phase difference value in the second direction, to continuously focus on the moving target subject.
A focus tracking device is applied to an electronic device. The electronic device includes an image sensor and a lens. The image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The device includes:

a recognition module, configured to acquire the target subject detection area where the target subject in the preview image is located;

a prediction module, configured to, when the target subject moves, determine a target subject prediction area according to the target subject detection area and the movement data of the target subject, and acquire a detection image corresponding to the target subject prediction area;

an acquisition module, configured to acquire, by the image sensor, a phase difference value of the detection image, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; and

a focus tracking module, configured to control the lens, according to the phase difference value in the first direction and the phase difference value in the second direction, to continuously focus on the moving target subject.
An electronic device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the processors, cause the processors to perform the steps of the focus tracking method.

One or more computer-readable storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the focus tracking method.

The details of one or more embodiments of the present application are set forth in the following drawings and description. Other features and advantages of the application will become apparent from the description, the drawings, and the claims.
Brief description of the drawings

In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus;

FIG. 2 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in an image sensor;

FIG. 3 is a schematic diagram of part of the structure of an image sensor in one or more embodiments;

FIG. 4 is a schematic diagram of the structure of a pixel point in one or more embodiments;

FIG. 5 is a schematic structural diagram of an electronic device in one or more embodiments;

FIG. 6 is a schematic diagram of filters arranged on a pixel point group in one or more embodiments;

FIG. 7 is a flowchart of a focus tracking method in one or more embodiments;

FIG. 8 is a flowchart of the step of controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction, in one or more embodiments;

FIG. 9 is a flowchart of the step of acquiring the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction, in one or more embodiments;

FIG. 10 is a flowchart of the step of acquiring the target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction, in one or more embodiments;

FIG. 11 is a flowchart of the step of determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level, in one or more embodiments;

FIG. 12 is a flowchart of the step of acquiring the phase difference value of the detection image, in one or more embodiments;

FIG. 13 is a flowchart of the steps of acquiring the phase difference value in the first direction according to the phase relationship corresponding to the first segmented image and the second segmented image, and acquiring the phase difference value in the second direction according to the phase relationship corresponding to the third segmented image and the fourth segmented image, in one or more embodiments;

FIG. 14 is a structural block diagram of a focusing device in one or more embodiments;

FIG. 15 is a block diagram of an electronic device in one or more embodiments.
Detailed description

In order to make the purpose, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.

It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, the first direction may be referred to as the second direction, and similarly, the second direction may be referred to as the first direction. The first direction and the second direction are both directions, but they are not the same direction.
When capturing images, in order to ensure that a moving object is imaged clearly, it is usually necessary for the electronic device to perform focus tracking. Focus tracking refers to the process of maintaining focus on the subject during subsequent shooting after the target camera has focused on that subject. For example, during the preview process in which the electronic device captures images, after the subject is focused, focus on the subject is maintained in subsequently acquired preview images, so the subject in those preview images remains clearly imaged. "Focusing" refers to the process of adjusting the distance between the lens of the electronic device and the image sensor so that the image formed on the image sensor is clear. Phase detection auto focus (English: phase detection auto focus; abbreviation: PDAF) is a relatively common auto-focus technology.

Below, the embodiments of the present application briefly describe the principle of the PDAF technology.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF). As shown in FIG. 1, M1 is the position of the image sensor when the electronic device is in the in-focus state, where the in-focus state refers to the state of successful focusing. When the image sensor is at the position M1, the imaging rays g reflected by the object W toward the lens Lens in different directions converge on the image sensor; that is, the imaging rays g reflected by the object W toward the lens Lens in different directions are imaged at the same position on the image sensor, and at this time the image formed on the image sensor is clear.

M2 and M3 are positions where the image sensor may be located when the electronic device is not in the in-focus state. As shown in FIG. 1, when the image sensor is at the position M2 or M3, the imaging rays g reflected by the object W toward the lens Lens in different directions are imaged at different positions. Referring to FIG. 1, when the image sensor is at the position M2, the imaging rays g reflected by the object W toward the lens Lens in different directions are imaged at position A and position B respectively; when the image sensor is at the position M3, the imaging rays g reflected by the object W toward the lens Lens in different directions are imaged at position C and position D respectively. At this time, the image formed on the image sensor is not clear.

In the PDAF technology, the difference in position between the images formed on the image sensor by imaging rays entering the lens from different directions can be acquired; for example, as shown in FIG. 1, the difference between position A and position B, or the difference between position C and position D, can be acquired. After this positional difference is acquired, the defocus distance can be obtained from the difference together with the geometric relationship between the lens and the image sensor in the camera. The defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state. The electronic device can then focus according to the obtained defocus distance.

It can thus be seen that the calculated PD (phase difference) value is 0 when in focus; conversely, the larger the calculated value, the farther away the in-focus position is, and the smaller the value, the closer it is. When PDAF is used for focusing, the PD value is calculated, the correspondence between PD values and defocus distances is obtained through calibration, the defocus distance is derived from the PD value, and the lens is then controlled to move to the in-focus position according to the defocus distance, thereby achieving focusing.
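The PD-to-defocus step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the linear mapping and the `slope`/`offset` calibration values are hypothetical stand-ins for the calibrated correspondence between PD value and defocus distance.

```python
def pd_to_defocus(pd_value: float, slope: float = 12.5, offset: float = 0.0) -> float:
    """Map a phase difference (PD) value to a defocus distance via a
    hypothetical linear calibration; PD == 0 means the sensor is in focus."""
    return slope * pd_value + offset

def focus_step(current_lens_pos: float, pd_value: float) -> float:
    """Move the lens by the estimated defocus distance to reach the in-focus point."""
    return current_lens_pos + pd_to_defocus(pd_value)

new_pos = focus_step(100.0, 0.8)  # a PD of 0.8 implies a 10-unit lens move here
```

In a real module the calibration is rarely a single global line; it is typically measured per device and may vary across the sensor, but the control loop keeps the same shape: measure PD, look up defocus, move the lens.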
In the related art, some phase detection pixel points may be arranged in pairs among the pixel points included in the image sensor. As shown in FIG. 2, the image sensor may be provided with phase detection pixel point pairs (hereinafter referred to as pixel point pairs) A, B, and C. In each pixel point pair, one phase detection pixel point is shielded on its left side (English: Left Shield), and the other is shielded on its right side (English: Right Shield).

For a phase detection pixel point shielded on the left, only the right part of the imaging beam directed at it can be imaged on its photosensitive (that is, unshielded) part; for a phase detection pixel point shielded on the right, only the left part of the imaging beam directed at it can be imaged on its photosensitive (that is, unshielded) part. In this way, the imaging beam is divided into a left part and a right part, and the phase difference can be obtained by comparing the images formed by the two parts.
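The comparison of the "left" and "right" images can be sketched with a simple one-dimensional shift search: slide one signal over the other and take the shift with the smallest difference. This is an illustrative sum-of-absolute-differences search, not the patent's method; production ISPs use more robust correlation measures and sub-pixel interpolation.

```python
def estimate_phase_difference(left, right, max_shift=4):
    """Return the integer shift (in pixels) that best aligns `right` to `left`.
    A nonzero result corresponds to a nonzero phase difference (defocus)."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for shift in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                cost += abs(left[i] - right[j])
                count += 1
        if count == 0:
            continue
        cost /= count  # normalize by the overlap length
        if cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift

left = [0, 0, 1, 5, 9, 5, 1, 0, 0]
right = [0, 1, 5, 9, 5, 1, 0, 0, 0]  # the same edge, displaced by one pixel
pd = estimate_phase_difference(left, right)  # -> -1
```

Note that this search only recovers a shift along the direction of the signal, which is exactly why purely horizontal PD pairs fail on horizontal textures, as discussed below.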
However, because the phase detection pixel points arranged in the image sensor are usually sparse, only a horizontal phase difference can be acquired through them, and scenes with horizontal textures cannot be handled: the calculated PD value becomes ambiguous and no correct result can be obtained. For example, when the shooting scene is a horizontal line, left and right images are obtained according to the PD characteristics, but the PD value cannot be calculated.

In order to solve the situation in which phase detection auto focus cannot calculate a PD value to achieve focusing for some horizontally textured scenes, an embodiment of the present application provides an imaging component that can detect and output a phase difference value in a first direction and a phase difference value in a second direction; for a horizontally textured scene, the phase difference value in the second direction can be used to achieve focusing.

In one of the embodiments, the present application provides an imaging component. The imaging component includes an image sensor. The image sensor may be a complementary metal oxide semiconductor (English: Complementary Metal Oxide Semiconductor; abbreviation: CMOS) image sensor, a charge-coupled device (English: Charge-coupled Device; abbreviation: CCD), a quantum thin-film sensor, an organic sensor, or the like.
FIG. 3 is a schematic diagram of part of the image sensor in one of the embodiments. The image sensor 300 includes a plurality of pixel point groups Z arranged in an array. Each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to one photosensitive unit. The plurality of pixel points include M*N pixel points, where M and N are both natural numbers greater than or equal to 2. Each pixel point D includes a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit may be composed of a plurality of photosensitive elements arranged in an array. A photosensitive element is an element that can convert an optical signal into an electrical signal. In one of the embodiments, the photosensitive element may be a photodiode. In this embodiment, each pixel point group Z includes 4 pixel points D arranged in a 2*2 array, and each pixel point may include 4 sub-pixel points d arranged in a 2*2 array. Each pixel point D includes 2*2 photodiodes, arranged corresponding to the 4 sub-pixel points d in the 2*2 array. Each photodiode receives an optical signal and performs photoelectric conversion, converting the optical signal into an electrical signal for output. The 4 sub-pixel points d included in each pixel point D are arranged corresponding to a filter of the same color, so each pixel point D corresponds to one color channel, such as the red channel R, the green channel G, or the blue channel B.

As shown in FIG. 4, taking a pixel point including sub-pixel points 1, 2, 3, and 4 as an example, sub-pixel points 1 and 2 can be combined and sub-pixel points 3 and 4 can be combined to form an up-down PD pixel pair, which detects horizontal edges and yields the phase difference value in the second direction, that is, the vertical PD value (phase difference value); sub-pixel points 1 and 3 can be combined and sub-pixel points 2 and 4 can be combined to form a left-right PD pixel pair, which detects vertical edges and yields the phase difference value in the first direction, that is, the horizontal PD value (phase difference value).
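The sub-pixel grouping just described can be written out directly. This is a sketch of the bookkeeping only: for one pixel point whose 2*2 photodiodes are numbered 1–4 in row-major order, summing rows gives the up-down PD pair and summing columns gives the left-right PD pair.

```python
def pd_pairs(sub):
    """sub = [[p1, p2], [p3, p4]]: raw values of one pixel's 2*2 sub-pixels.
    Returns ((top, bottom), (left, right)):
      top/bottom = (1+2) vs (3+4) -> up-down pair, detects horizontal edges
      left/right = (1+3) vs (2+4) -> left-right pair, detects vertical edges
    """
    (p1, p2), (p3, p4) = sub
    return (p1 + p2, p3 + p4), (p1 + p3, p2 + p4)

(top, bottom), (left, right) = pd_pairs([[10, 12], [30, 32]])
# top=22, bottom=62: a strong top-vs-bottom difference (a horizontal edge),
# while left=40 vs right=44 differ only slightly.
```

Collecting the `top` signals across a column of pixels and the `bottom` signals across the same column yields the two one-dimensional images whose shift gives the vertical PD value, and symmetrically for the left-right pair.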
FIG. 5 is a schematic structural diagram of an electronic device in one of the embodiments. As shown in FIG. 5, the electronic device includes a micro-lens 50, a filter 52, and an imaging component 54, which are located on the incident light path in this order; that is, the micro-lens 50 is arranged on the filter 52, and the filter 52 is arranged on the imaging component 54.

The filter 52 may include three types, red, green, and blue, which transmit only light of the wavelengths corresponding to red, green, and blue, respectively. One filter 52 is arranged on one pixel point.

The imaging component 54 includes the image sensor in FIG. 3.

The micro-lens 50 receives incident light and transmits it to the filter 52. The filter 52 smooths the incident light and then directs the smoothed light onto the imaging component 54 on a pixel basis.

The photosensitive unit in the image sensor converts the light incident from the filter 52 into a charge signal through the photoelectric effect and generates a pixel signal consistent with the charge signal. The charge signal is consistent with the received light intensity.

FIG. 6 is a schematic diagram of filters arranged on a pixel point group in one of the embodiments. The pixel point group Z includes 4 pixel points D arranged in an array of two rows and two columns. The color channel of the pixel point in the first row and first column is green, that is, the filter arranged on it is a green filter; the color channel of the pixel point in the first row and second column is red, that is, the filter arranged on it is a red filter; the color channel of the pixel point in the second row and first column is blue, that is, the filter arranged on it is a blue filter; and the color channel of the pixel point in the second row and second column is green, that is, the filter arranged on it is a green filter.
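The 2*2 filter layout described above (a Bayer-style GRBG pattern) can be captured in a tiny lookup; this is purely illustrative of the arrangement in FIG. 6, since the pattern repeats per pixel point group across the sensor.

```python
FILTER_PATTERN = [
    ["G", "R"],  # row 1: green, red
    ["B", "G"],  # row 2: blue, green
]

def filter_color(row: int, col: int) -> str:
    """Color channel of the pixel point at (row, col), with the 2*2
    pixel point group pattern tiled over the whole sensor."""
    return FILTER_PATTERN[row % 2][col % 2]
```

Because every sub-pixel under one pixel point shares that pixel's filter, the PD pairs formed from its sub-pixels always compare light of the same color channel.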
FIG. 7 is a flowchart of a focus tracking method in one of the embodiments. As shown in FIG. 7, the focus tracking method includes steps 702 to 708.

Step 702: Acquire the target subject detection area where the target subject in the preview image is located.

The preview image refers to an image obtained after the camera has focused. A subject refers to any of various objects, such as a person, flower, cat, dog, cow, blue sky, white cloud, or background. The target subject refers to the desired subject, which can be selected as needed. The target subject detection area may be an area outlined according to the contour of the target subject, or a frame such as a rectangular or circular frame surrounding the target subject. It should be noted that the shape of the target subject area is not limited, as long as the target subject area includes most of the target subject.

Specifically, the camera device of the electronic device may perform focusing to acquire a preview image, and subject detection may be performed on the preview image to acquire a target subject detection area including the target subject. Focusing refers to the process of adjusting the focal length so that the photographed object is imaged clearly, where the focal length refers to the distance from the optical center of the lens in the camera to the focal point where light converges. Salient object detection refers to automatically processing the region of interest when facing a scene while selectively ignoring regions that are not of interest. In this embodiment, the region of interest is referred to as the target subject detection area. In one of the embodiments, a subject detection model is obtained by collecting a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. The subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, and backgrounds.
Step 704: When the target subject moves, determine a target subject prediction area according to the target subject detection area and the movement data of the target subject, and acquire a detection image corresponding to the target subject prediction area.

The movement data of the target subject are data such as its movement speed, movement direction, and movement trajectory, and may be acquired using a trained neural network model. The target subject prediction area is the predicted area where the target subject will be located at the next time point. The detection image refers to an image including the target subject that is collected by the imaging device with the target subject prediction area as the focus area.

Specifically, when the target subject is a movable subject, movement of the target subject can be detected and focus tracking performed automatically. The target subject prediction area corresponding to the next time point is predicted according to the motion data of the target subject and the current target subject detection area. For example, a first image and a second image including the same target subject can be input into a trained neural network model, and the model can predict the movement data of the target subject at the next time point and the target subject prediction area according to the different movement data of the target subject in the two images. Alternatively, a first image including a moving target can be input, the first image including the target subject detection area and the motion data of the target subject corresponding to the current time point, and the network model can output a second image carrying the target subject prediction area and the motion data of the target subject corresponding to the next time point. The target subject prediction area is then focused, and the electronic device acquires the detection image according to the pixel information of the pixel points included in each pixel point group in the image sensor. A sub-pixel point included in the image sensor is a photosensitive element that can convert an optical signal into an electrical signal; the intensity of the optical signal received by the sub-pixel point can be obtained from the electrical signal it outputs, and the pixel information of the sub-pixel point can be obtained from that intensity.
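The region prediction in step 704 can be illustrated with a deliberately simple stand-in. The patent uses a trained neural network for this; the constant-velocity bounding-box extrapolation below is a hypothetical substitute that only shows the shape of the computation (current area plus movement data in, predicted area out).

```python
def predict_region(box, velocity, dt=1.0):
    """Constant-velocity sketch of target subject region prediction.
    box = (x, y, w, h): current target subject detection area;
    velocity = (vx, vy): movement data, in pixels per frame;
    dt: number of frames ahead to predict."""
    x, y, w, h = box
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt, w, h)

detected = (100, 80, 40, 60)               # current detection area
predicted = predict_region(detected, (5, -2))  # area used as the next focus region
```

Whatever predictor is used, its output plays the same role: the predicted area becomes the focus area from which the detection image is collected.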
Step 706: Obtain a phase difference value of the detection image, where the phase difference value includes a phase difference value in a first direction and a phase difference value in a second direction, and the first direction and the second direction form a preset angle.

Specifically, the phase difference value of the detection image is obtained, including the phase difference value in the first direction and the phase difference value in the second direction. The preset angle between the first direction and the second direction may take a value such as 30°, 40°, 45°, or 60°. It may also be 90°, in which case the phase difference value in the first direction refers to the phase difference value in the horizontal direction and the phase difference value in the second direction refers to the phase difference value in the vertical direction.
Step 708: Control the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.

Specifically, a target phase difference value that has a mapping relationship with the defocus distance value can be obtained from the phase difference values in the first and second directions. Since the correspondence between the target phase difference value and the target defocus distance value can be obtained through calibration, the target defocus distance is obtained once the phase difference values in the two directions and the resulting target phase difference value are known. The lens is then controlled to continuously focus on the moving target subject according to the target defocus distance value. Focus tracking refers to the process of maintaining focus on the target subject during subsequent shooting after the lens has focused on it, so that the target subject in the acquired detection images remains clearly imaged.
The focus tracking method provided in this embodiment obtains the target subject detection area in the preview image where the target subject is located. When the target subject moves, the target subject prediction area is determined according to the target subject detection area and the motion data of the target subject, and a detection image corresponding to the target subject prediction area is acquired. The phase difference value of the detection image is then obtained, including a phase difference value in a first direction and a phase difference value in a second direction, the two directions forming a preset angle. The lens is controlled to continuously focus on the moving target subject according to these two phase difference values. The solution provided by the present application can effectively use phase difference values for focus tracking in scenes containing either horizontal or vertical textures, improving the accuracy and stability of focus tracking.
In one of the embodiments, the step of acquiring the detection image corresponding to the target subject prediction area includes: controlling the lens to move so that the focus is on the center of the target subject prediction area, and collecting the detection image corresponding to the target subject prediction area.

Specifically, the lens is controlled to move so that the focus is on the center of the target subject prediction area, and the detection image, which includes the target subject, is collected. For example, the process is as follows: when the target subject prediction area is a rectangle, the focus is placed at the center of the rectangle; when it is a circle, the focus is placed at the center of the circle; when it is an irregular shape, the focus is placed at the centroid of the target subject prediction area.
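The choice of focus point by region shape can be sketched as follows. This is a minimal illustration, not the patent's implementation: the region encodings and function name are invented, and the irregular-shape branch uses the standard shoelace centroid formula as one way to realize "centroid".

```python
def focus_point(region):
    """Pick the focus point for a target subject prediction area.

    region: ("rect", x, y, w, h), ("circle", cx, cy, r), or
            ("polygon", [(x1, y1), ...]) for an irregular area.
    Returns the (x, y) point the lens should focus on.
    """
    kind = region[0]
    if kind == "rect":
        _, x, y, w, h = region
        return (x + w / 2.0, y + h / 2.0)   # center of the rectangle
    if kind == "circle":
        _, cx, cy, _r = region
        return (cx, cy)                      # center of the circle
    # Irregular shape: polygon centroid via the shoelace formula.
    pts = region[1]
    area2 = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area = area2 * 0.5
    return (cx / (6.0 * area), cy / (6.0 * area))
```

For an axis-aligned rectangle and a circle the result is the obvious geometric center; for a polygonal region the centroid weighs the interior of the shape rather than just its vertices.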
In one of the embodiments, determining the target subject prediction area according to the target subject detection area and the motion data of the target subject includes: inputting a first image into a prediction network model, the first image carrying information about the target subject detection area and the motion data of the target subject; and obtaining a second image output by the prediction network model, the second image being marked with the target subject prediction area.

Specifically, the prediction network model refers to a network model that has already been trained. A first image containing a moving target is input, where the first image carries the target subject detection area and the motion data of the target subject for the current time sequence. The network model outputs a second image marked with the target subject prediction area corresponding to the next time sequence, from which the motion data of the target subject can be obtained. In one of the embodiments, the prediction network model is a network model built on a recurrent neural network algorithm. Recurrent neural networks have memory, share parameters, and are Turing complete, which gives them an advantage in learning the nonlinear characteristics of sequences. They are applied in natural language processing (NLP) tasks such as speech recognition, language modeling, and machine translation, and are also used in various kinds of time series prediction.
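The input/output contract of the prediction network model can be illustrated with a trivial stand-in. The patent does not specify the model's internals; the constant-velocity predictor below is a hypothetical placeholder for the trained model, and all names are invented for illustration.

```python
def predict_next_region(detection_box, motion_data):
    """Predict the target subject prediction area for the next time step.

    detection_box: (x, y, w, h) of the current target subject detection area.
    motion_data:   (vx, vy) displacement of the subject per time step.

    This constant-velocity rule stands in for the trained prediction
    network model (e.g., an RNN over image sequences); it simply shifts
    the current detection box by the last observed motion.
    """
    x, y, w, h = detection_box
    vx, vy = motion_data
    return (x + vx, y + vy, w, h)
```

A learned model would replace this rule, but it would consume the same inputs (current region and motion data) and produce the same kind of output (the predicted region for the next time sequence).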
In one of the embodiments, as shown in FIG. 8, the step of controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction includes step 802 and step 804.

Step 802: Obtain the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction.

Specifically, the target phase difference value is determined according to the magnitude relationship between the phase difference values in the first and second directions, or according to the confidence information they carry. Since there is a mapping relationship between the target phase difference value and the target defocus distance, inputting the target phase difference value into a function characterizing that mapping yields the target defocus distance.

Step 804: Control the lens of the electronic device to move according to the target defocus distance so as to continuously focus on the moving target subject.

Specifically, the target defocus distance refers to the distance between the current position of the image sensor and the position the image sensor should be in when in focus; the electronic device can control the lens to move to the in-focus position according to the obtained target defocus distance, thereby tracking focus. In one of the embodiments, the focus tracking method may further include generating a depth value according to the target defocus distance value: the image distance in the in-focus state can be calculated from the target defocus distance value, the object distance is then obtained from the image distance and the focal length, and this object distance is the depth value.
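The depth computation at the end of the paragraph above can be sketched with the thin-lens equation 1/f = 1/u + 1/v. The helper below is an assumption-laden illustration: it takes the in-focus image distance to be the current image distance corrected by the defocus distance, and all parameter names and units are invented for the example.

```python
def depth_from_defocus(focal_length_mm, current_image_distance_mm, defocus_mm):
    """Estimate object distance (depth) from the in-focus image distance.

    The in-focus image distance v is modeled as the current image distance
    plus the target defocus distance (a simplifying assumption); the object
    distance u then follows from the thin-lens equation 1/f = 1/u + 1/v.
    """
    v = current_image_distance_mm + defocus_mm      # image distance when in focus
    u = 1.0 / (1.0 / focal_length_mm - 1.0 / v)     # object distance = depth value
    return u
```

For example, with a 50 mm focal length and a 55 mm in-focus image distance, the object distance works out to 550 mm, which would be reported as the depth value.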
In one of the embodiments, as shown in FIG. 9, the step of obtaining the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction includes step 902 and step 904.

Step 902: Obtain a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction.

Specifically, the target phase difference value can be determined according to the magnitude relationship between the phase difference values in the first and second directions, or according to their confidence information, and the target defocus distance can then be obtained from the target phase difference value. For example, when phase difference values exist in both directions, the confidence of the phase difference value in each direction can be computed, one of the two phase difference values is selected as the target phase difference value accordingly, and the corresponding target defocus distance value is then obtained from the mapping relationship between phase difference values and defocus distance values.

Step 904: Obtain the target defocus distance according to the target phase difference value.

Specifically, there is a mapping relationship between the target phase difference value and the target defocus distance; inputting the target phase difference value into a function characterizing that mapping yields the target defocus distance.

In one of the embodiments, obtaining the target defocus distance according to the target phase difference value includes: calculating the target defocus distance according to a calibrated defocus function and the target phase difference value, where the calibrated defocus function characterizes the relationship between the target phase difference value and the target defocus distance.

Specifically, the correspondence between the target defocus distance value and the target phase difference value is: defocus = PD * slope(DCC), where the Defocus Conversion Coefficient (DCC) can be obtained by calibration and PD is the target phase difference value. The target defocus distance can thus be calculated from the calibrated defocus function and the target phase difference value. The calibration process for this correspondence includes: dividing the effective focus stroke of the camera module into N (N ≥ 3) equal parts, i.e. (near-focus DAC − far-focus DAC)/N, so as to cover the focus range of the motor; focusing at each focus DAC position (the DAC may range from 0 to 1023) and recording the phase difference at that position; after completing the motor focus stroke, taking the group of N focus DAC values and the obtained PD values and forming their ratios, which yields N similar ratios K; and fitting the two-dimensional data composed of the DAC and PD values to obtain a straight line with slope K.
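The calibration described above amounts to fitting a line through the origin relating defocus (in DAC steps) to PD. A minimal sketch with made-up calibration data, using a least-squares slope; the sample values are hypothetical, not from the patent:

```python
def fit_dcc_slope(dac_offsets, pd_values):
    """Least-squares fit of defocus = PD * slope through the origin.

    dac_offsets: defocus amounts in DAC steps at each calibration position.
    pd_values:   phase difference (PD) recorded at each position.
    Returns the slope, i.e. the DCC, so that defocus = PD * slope.
    """
    num = sum(d * p for d, p in zip(dac_offsets, pd_values))
    den = sum(p * p for p in pd_values)
    return num / den

# Hypothetical calibration sweep: 5 equal steps across the focus stroke.
dacs = [-200, -100, 0, 100, 200]
pds = [-4.0, -2.0, 0.0, 2.0, 4.0]
slope = fit_dcc_slope(dacs, pds)   # DAC steps per unit of PD
defocus = 1.5 * slope              # defocus for a measured PD of 1.5
```

In deployment the slope is computed once at calibration time; at runtime the measured target phase difference value is simply multiplied by it to obtain the defocus distance in motor units.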
In one of the embodiments, as shown in FIG. 10, the step of obtaining the target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction includes steps 1002 to 1006.

Step 1002: Obtain a first confidence corresponding to the phase difference value in the first direction.

Specifically, the phase difference value in the first direction carries first confidence information, and the first confidence characterizes the accuracy of the phase difference value in the first direction.

Step 1004: Obtain a second confidence corresponding to the phase difference value in the second direction.

Specifically, the phase difference value in the second direction carries second confidence information, and the second confidence characterizes the accuracy of the phase difference value in the second direction.

Step 1006: Determine the target phase difference value according to the magnitude relationship between the first confidence and the second confidence.

Specifically, the target phase difference value is determined according to the magnitude relationship between the first confidence and the second confidence. For example, when the confidence of the phase difference value in the first direction is greater than that in the second direction, indicating that the first-direction value is the more accurate, the phase difference value in the first direction may be selected as the target phase difference value; when it is smaller, indicating that the first-direction value is the less accurate, the phase difference value in the second direction may be selected; and when the two confidences are equal, indicating equal accuracy, the larger of the two phase difference values may be selected as the target phase difference value.
In one of the embodiments, as shown in FIG. 11, the step of determining the target phase difference value according to the magnitude relationship between the first confidence and the second confidence includes steps 1102 to 1106.

Step 1102: When the first confidence is greater than the second confidence, use the phase difference value in the first direction, corresponding to the first confidence, as the target phase difference value.

Specifically, when the confidence of the phase difference value in the first direction is greater than that in the second direction, the phase difference value in the first direction is selected, the corresponding defocus distance value is obtained from it, and the movement direction is determined to be horizontal.

Step 1104: When the second confidence is greater than the first confidence, use the phase difference value in the second direction, corresponding to the second confidence, as the target phase difference value.

Specifically, when the confidence of the phase difference value in the first direction is less than that in the second direction, the phase difference value in the second direction is selected, the corresponding defocus distance value is obtained from it, and the movement direction is determined to be vertical.

Step 1106: When the first confidence is equal to the second confidence, use both the phase difference in the first direction and the phase difference in the second direction as target phase difference values.

Specifically, when the two confidences are equal, the defocus distance value in the horizontal direction can be determined from the phase difference value in the first direction and the defocus distance value in the vertical direction from the phase difference value in the second direction; the lens may first move by the horizontal defocus distance and then by the vertical one, or first by the vertical and then by the horizontal. It should be noted that, for scenes containing horizontal textures, the PD pixel pairs in the horizontal direction cannot yield a phase difference value in the first direction; the PD pixel pairs in the vertical direction can instead be compared to calculate the phase difference value in the second direction, the defocus distance value is calculated from it, and the lens movement is then controlled according to the vertical defocus distance value to achieve focus. Conversely, for scenes containing vertical textures, the PD pixel pairs in the vertical direction cannot yield a phase difference value in the second direction; the PD pixel pairs in the horizontal direction can be compared to calculate the phase difference value in the first direction, the defocus distance value is calculated from it, and the lens movement is then controlled according to the horizontal defocus distance value to achieve focus.
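Steps 1102 to 1106 reduce to a small selection rule. The sketch below is an illustration of that rule only; the tuple-of-(value, direction) return convention is invented, not part of the patent:

```python
def select_target_pd(pd_h, conf_h, pd_v, conf_v):
    """Select the target phase difference value(s) per steps 1102-1106.

    pd_h, conf_h: first-direction (horizontal) phase difference and confidence.
    pd_v, conf_v: second-direction (vertical) phase difference and confidence.
    Returns a list of (phase_difference, direction) pairs to act on.
    """
    if conf_h > conf_v:
        return [(pd_h, "horizontal")]                       # step 1102
    if conf_v > conf_h:
        return [(pd_v, "vertical")]                         # step 1104
    return [(pd_h, "horizontal"), (pd_v, "vertical")]       # step 1106
```

In the equal-confidence case the caller receives both values and may apply the two defocus movements in either order, matching the description above.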
In one of the embodiments, as shown in FIG. 12, the step of obtaining the phase difference value of the detection image includes step 1202 and step 1204.

Step 1202: Divide the detection image into a first segmented image and a second segmented image along the first direction, and obtain the phase difference value in the first direction according to the phase relationship between the first segmented image and the second segmented image.

Specifically, the electronic device may segment the target image along the row direction (the x-axis direction of the image coordinate system); in this process, every dividing line is perpendicular to the row direction. The first segmented image and the second segmented image obtained by segmenting along the row direction may be called the left image and the right image, respectively. The phase difference value in the first direction is obtained from the phase differences of "mutually matched pixels" in the left and right images.

Step 1204: Divide the detection image into a third segmented image and a fourth segmented image along the second direction, and obtain the phase difference value in the second direction according to the phase relationship between the third segmented image and the fourth segmented image.

Specifically, the electronic device may segment the target image along the column direction (the y-axis direction of the image coordinate system); in this process, every dividing line is perpendicular to the column direction. The two segmented images obtained by segmenting along the column direction may be called the upper image and the lower image, respectively.

In one of the embodiments, the first direction is the row direction and the second direction is the column direction. The step of dividing the detection image into a first segmented image and a second segmented image along the first direction includes: segmenting the detection image along the first direction to obtain multiple image areas, each containing one row of pixels of the detection image; obtaining, from these image areas, multiple first segmented image areas and multiple second segmented image areas, where the first segmented image areas contain the even-numbered rows of pixels of the detection image and the second segmented image areas contain the odd-numbered rows; and stitching the multiple first segmented image areas into the first segmented image and composing the second segmented image from the multiple second segmented image areas.

Specifically, with the first direction being the row direction, segmenting the detection image along the first direction yields multiple image areas, each containing one row of pixels of the detection image. The first segmented image areas are the even-numbered rows and the second segmented image areas are the odd-numbered rows. Stitching the first segmented image areas in the order of their positions in the detection image yields the first segmented image, and stitching the second segmented image areas in the same manner yields the second segmented image.

The step of dividing the detection image into a third segmented image and a fourth segmented image along the second direction includes: segmenting the detection image along the second direction to obtain multiple image areas, each containing one column of pixels of the detection image; obtaining, from these image areas, multiple third segmented image areas and multiple fourth segmented image areas, where the third segmented image areas contain the even-numbered columns of pixels of the detection image and the fourth segmented image areas contain the odd-numbered columns; and stitching the multiple third segmented image areas into the third segmented image and composing the fourth segmented image from the multiple fourth segmented image areas.

Specifically, with the second direction being the column direction, segmenting the detection image along the column direction yields multiple image areas, each containing one column of pixels of the detection image. The third segmented image areas are the even-numbered columns and the fourth segmented image areas are the odd-numbered columns. Stitching the third segmented image areas in the order of their positions in the detection image yields the third segmented image, and stitching the fourth segmented image areas in the same manner yields the fourth segmented image.
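The even/odd splitting described above can be sketched directly on an image represented as a nested list of pixel values. One indexing assumption is made explicit in the comments: rows and columns are counted from 0, so "even" here means indices 0, 2, 4, …

```python
def split_even_odd_rows(image):
    """Split an image (list of pixel rows) into the even-row and odd-row
    segmented images, preserving the original row order within each."""
    first = [row for i, row in enumerate(image) if i % 2 == 0]   # even rows
    second = [row for i, row in enumerate(image) if i % 2 == 1]  # odd rows
    return first, second

def split_even_odd_columns(image):
    """Split an image into the even-column and odd-column segmented images
    by transposing, reusing the row split, and transposing back."""
    transposed = [list(col) for col in zip(*image)]
    third_t, fourth_t = split_even_odd_rows(transposed)
    third = [list(row) for row in zip(*third_t)]
    fourth = [list(row) for row in zip(*fourth_t)]
    return third, fourth
```

Each returned pair corresponds to the (first, second) and (third, fourth) segmented images used to compute the first-direction and second-direction phase difference values, respectively.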
In one of the embodiments, as shown in FIG. 13, the steps of obtaining the phase difference value in the first direction according to the phase relationship between the first segmented image and the second segmented image, and obtaining the phase difference value in the second direction according to the phase relationship between the third segmented image and the fourth segmented image, include step 1302 and step 1304.

Step 1302: Determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented image and the second segmented image, and determine the phase difference value in the first direction according to the phase difference values of the mutually matched pixels.

Specifically, when the first segmented image contains the even-numbered rows of pixels, the second segmented image contains the odd-numbered rows, and pixel a in the first segmented image matches pixel b in the second segmented image, the phase difference value in the first direction can be determined from the phase difference of the mutually matched pixels a and b.

Step 1304: Determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the third segmented image and the fourth segmented image, and determine the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.

Specifically, when the third segmented image contains the even-numbered columns of pixels, the fourth segmented image contains the odd-numbered columns, and pixel a in the third segmented image matches pixel b in the fourth segmented image, the phase difference value in the second direction can be determined from the phase difference of the mutually matched pixels a and b.

"Mutually matched pixels" means that the pixel matrices formed by each pixel together with its surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first segmented image form a pixel matrix of 3 rows and 3 columns, with pixel values:
Figure PCTCN2020126139-appb-000003
Pixel b and its surrounding pixels in the second segmented image also form a pixel matrix of 3 rows and 3 columns, with pixel values:
Figure PCTCN2020126139-appb-000004
由上文可以看出,这两个矩阵是相似的,则可以认为像素a和像素b相互匹配。判断像素矩阵是否相似的方式很多,通常可对两个像素矩阵中的每个对应像素的像素值求差,再将求得的差值的绝对值进行相加,利用该相加的结果来判断像素矩阵是否相似,也即是,若该相加的结果小于预设的某一阈值,则认为像素矩阵相似,否则,则认为像素矩阵不相似。It can be seen from the above that the two matrices are similar, and it can be considered that the pixel a and the pixel b match each other. There are many ways to judge whether the pixel matrix is similar. Usually, the pixel value of each corresponding pixel in the two pixel matrices can be calculated, and then the absolute value of the difference obtained is added, and the result of the addition is used to determine Whether the pixel matrix is similar, that is, if the result of the addition is less than a preset threshold, the pixel matrix is considered to be similar; otherwise, the pixel matrix is considered to be dissimilar.
例如,对于上述两个3行3列的像素矩阵而言,可以分别将1和2求差,将15和15求差,将70和70求差,……,再将求得的差的绝对值相加,得到相加结果为3,该相加结果3小于预设的阈值,则认为上述两个3行3列的像素矩阵相似。For example, for the above two 3×3 pixel matrices, the differences between 1 and 2, between 15 and 15, between 70 and 70, and so on, can be computed separately; the absolute values of these differences are then summed, giving a result of 3. Since this result of 3 is less than the preset threshold, the two 3×3 pixel matrices are considered similar.
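As an illustrative sketch of the sum-of-absolute-differences (SAD) similarity test described above — the matrix values and the threshold below are hypothetical, since the actual figure contents are not reproduced here:

```python
import numpy as np

def sad_similar(block_a, block_b, threshold=10):
    """Judge whether two pixel blocks match by summing the absolute
    differences (SAD) of corresponding pixel values."""
    sad = int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())
    return sad < threshold

# Hypothetical 3x3 neighborhoods around pixel a and pixel b.
a = np.array([[1, 15, 70], [35, 60, 170], [100, 220, 30]])
b = np.array([[2, 15, 70], [36, 60, 170], [100, 221, 30]])
print(sad_similar(a, b))  # SAD = 3 < 10, so the blocks are considered similar: True
```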
另一种判断像素矩阵是否相似的方式是利用sobel卷积核计算方式或者高拉普拉斯计算方式等方式提取其边缘特征,通过边缘特征来判断像素矩阵是否相似。Another way to judge whether two pixel matrices are similar is to extract their edge features, for example with a Sobel convolution kernel or a Laplacian-based operator, and then judge similarity by comparing the edge features.
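A minimal sketch of the edge-feature alternative, assuming a plain NumPy implementation of the Sobel kernels; the patent does not fix the exact operator or threshold, so both are illustrative here:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    # 'Valid' 2-D correlation: evaluate the kernel at every fully covered window.
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def edge_similar(block_a, block_b, threshold=50.0):
    """Compare two blocks by the SAD of their Sobel gradient magnitudes."""
    def grad_mag(b):
        b = b.astype(np.float64)
        return np.hypot(conv2d_valid(b, SOBEL_X), conv2d_valid(b, SOBEL_Y))
    return float(np.abs(grad_mag(block_a) - grad_mag(block_b)).sum()) < threshold

a = np.array([[10, 10, 200], [10, 10, 200], [10, 10, 200]])
print(edge_similar(a, a))      # identical edge features: True
print(edge_similar(a, a * 0))  # a strong vertical edge vs. a flat block: False
```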
在本申请实施例中,“相互匹配的像素的位置差异”指的是,相互匹配的像素中位于第一切分图像中的像素的位置和位于第二切分图像中的像素的位置的差异。如上述举例,相互匹配的像素a和像素b的位置差异指的是像素a在第一切分图像中的位置和像素b在第二切分图像中的位置的差异。In the embodiments of the present application, "the position difference of mutually matched pixels" refers to the difference between the position of the matched pixel located in the first segmented image and the position of the matched pixel located in the second segmented image. As in the above example, the position difference between the mutually matched pixels a and b refers to the difference between the position of pixel a in the first segmented image and the position of pixel b in the second segmented image.
相互匹配的像素分别对应于从不同方向射入镜头的成像光线在图像传感器中所成的不同的像。例如,第一切分图像中的像素a与第二切分图像中的像素b相互匹配,其中,该像素a可以对应于图1中在A位置处所成的像,像素b可以对应于图1中在B位置处所成的像。Mutually matched pixels correspond to the different images formed in the image sensor by imaging light entering the lens from different directions. For example, pixel a in the first segmented image matches pixel b in the second segmented image, where pixel a may correspond to the image formed at position A in FIG. 1, and pixel b may correspond to the image formed at position B in FIG. 1.
由于相互匹配的像素分别对应于从不同方向射入镜头的成像光线在图像传感器中所成的不同的像,因此,根据相互匹配的像素的位置差异,即可确定该相互匹配的像素的相位差。Since mutually matched pixels correspond to the different images formed in the image sensor by imaging light entering the lens from different directions, the phase difference of the mutually matched pixels can be determined according to their position difference.
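The matching step above can be sketched as a one-dimensional search for the displacement that best aligns a row (or column) of the two segmented images; the signal values and search range are illustrative, and real phase-detection pipelines add sub-pixel interpolation and confidence measures:

```python
import numpy as np

def phase_shift(line_a, line_b, max_shift=4):
    """Estimate the displacement (in pixels) of matched content between a row
    of the first segmented image and the same row of the second segmented
    image by minimizing the mean SAD over candidate shifts."""
    n = len(line_a)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        i0, i1 = max(0, -s), min(n, n - s)  # overlapping range of the two lines
        cost = np.abs(line_a[i0:i1].astype(np.int64)
                      - line_b[i0 + s:i1 + s].astype(np.int64)).mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# Hypothetical signals: the same intensity bump, displaced by 2 pixels.
line_a = np.array([0, 0, 0, 5, 9, 5, 0, 0, 0, 0])
line_b = np.array([0, 0, 0, 0, 0, 5, 9, 5, 0, 0])
print(phase_shift(line_a, line_b))  # → 2
```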
根据上述像素点组中的像素点的亮度值得到目标图像,将目标图像划分为两个切分图像后,通过像素匹配,可以快速地确定相互匹配的像素的相位差值,同时包含了丰富的相位差值,可以提高相位差值的精确度,提高对焦的准确度和稳定度。The target image is obtained from the brightness values of the pixel points in the above pixel point groups. After the target image is divided into two segmented images, the phase difference values of mutually matched pixels can be determined quickly through pixel matching. Because rich phase difference information is obtained at the same time, the precision of the phase difference value can be improved, improving the accuracy and stability of focusing.
应该理解的是,虽然图7-13的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图7-13中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the steps in the flowcharts of FIGS. 7-13 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order limitation on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 7-13 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
本申请实施例提供一种追焦装置,应用于电子设备,如图14所示,追焦装置包括:识别模块1402、预测模块1404、获取模块1406和追焦模块1408。The embodiment of the present application provides a focus tracking device, which is applied to electronic equipment. As shown in FIG. 14, the focus tracking device includes: an identification module 1402, a prediction module 1404, an acquisition module 1406, and a focus tracking module 1408.
识别模块1402,用于获取预览图像中的目标主体所处的目标主体检测区域。The recognition module 1402 is used to obtain the target subject detection area where the target subject in the preview image is located.
预览图像指的是摄像头经过对焦之后得到的图像。主体是指各种对象,如人、花、猫、狗、牛、蓝天、白云、背景等。目标主体是指需要的主体,可根据需要选择。目标主体检测区域可以是根据目标主体的轮廓勾勒的区域,也可以是包围目标主体的矩形框或者圆框等框形。需要说明的是,目标主体区域的形状不作限定,目标主体区域内包括目标主体的大部分即可。The preview image refers to the image obtained after the camera is focused. A subject refers to various objects, such as people, flowers, cats, dogs, cows, blue sky, white clouds, backgrounds, etc. The target subject refers to the desired subject, which can be selected as required. The target subject detection area may be an area outlined according to the contour of the target subject, or may be a frame shape, such as a rectangular or round frame, surrounding the target subject. It should be noted that the shape of the target subject area is not limited, as long as the target subject area includes most of the target subject.
具体的,识别模块1402用于利用电子设备进行对焦获取预览图像,并对预览图像进行主体检测获取包括目标主体的目标主体检测区域。对焦指的是通过调整焦距从而使拍摄的物体成像清晰的过程。焦距指的是从摄像头中的透镜光心到光聚集的焦点的距离。主体检测(salient object detection)是指面对一个场景时,自动地对感兴趣区域进行处理而选择性的忽略不感兴趣区域。本实施例中感兴趣区域称为目标主体检测区域。在其中一个实施例中,主体检测模型是预先采集大量的训练数据,将训练数据输入到包含有初始网络权重的主体检测模型进行训练得到的。主体检测模型可训练能够识别检测各种主体,如人、花、猫、狗、背景等。Specifically, the recognition module 1402 is configured to use an electronic device to focus and obtain a preview image, and perform subject detection on the preview image to obtain a target subject detection area including the target subject. Focusing refers to the process of adjusting the focal length to make the image of the photographed object clear. The focal length refers to the distance from the optical center of the lens in the camera to the focal point of the light. Salient object detection refers to automatically processing regions of interest when facing a scene and selectively ignoring regions that are not of interest. In this embodiment, the region of interest is referred to as the target subject detection region. In one of the embodiments, the subject detection model is obtained by pre-collecting a large amount of training data, and inputting the training data into the subject detection model including the initial network weight for training. The subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
预测模块1404,用于当目标主体移动时,根据目标主体检测区域和目标主体的移动数据确定目标主体预测区域,并获取目标主体预测区域对应的检测图像。The prediction module 1404 is configured to determine the target subject prediction area according to the target subject detection area and the movement data of the target subject when the target subject moves, and obtain a detection image corresponding to the target subject prediction area.
目标主体的移动数据为目标主体的运动速度、运动方向、运动轨迹等数据。目标主体的移动数据可以是利用训练好的神经网络模型获取。目标主体预测区域为预测下一个时序目标主体所在区域。检测图像指的是电子设备将目标主体预测区域作为对焦区域采集的包括目标主体的图像。The movement data of the target subject is data such as the speed, direction of movement, and trajectory of the target subject. The movement data of the target subject can be obtained by using a trained neural network model. The target subject prediction area is the area where the next time-series target subject is predicted. The detected image refers to an image including the target subject collected by the electronic device using the target subject prediction area as the focus area.
具体的,预测模块1404用于当目标主体为可移动主体时,可以检测到目标主体移动,自动进行追焦。根据目标主体的运动数据和当前的目标主体检测区域预测下一个时序对应的目标主体预测区域。举例来说,可以在训练好的神经网络模型中输入第一张图像和第二张图像,第一张图像和第二张图像包括同一个目标主体,训练好的神经网络模型可以根据第一张图像和第二张图像包括的目标主体的不同移动数据,预测下一个时序目标主体的移动数据及目标主体预测区域;还可以是输入一张包括运动目标的第一图像,该第一图像包括:当前时序对应的目标主体检测区域和目标主体的运动数据,对应的该网络模型能够输出第二图像,第二图像携带下一个时序对应的目标主体预测区域和目标主体的运动数据。对目标主体预测区域进行对焦,电子设备根据图像传感器中每个像素点组包括的像素点的像素信息获取检测图像。图像传感器包括的子像素点是一种能够将光信号转化为电信号的感光元件,可以根据子像素点输出的电信号来获取该子像素点接收到的光信号的强度,根据子像素点接收到的光信号的强度即可得到该子像素点的像素信息。Specifically, the prediction module 1404 is configured to detect the movement of the target subject when the target subject is a movable subject and automatically perform focus tracking. The target subject prediction area corresponding to the next time sequence is predicted according to the motion data of the target subject and the current target subject detection area. For example, a first image and a second image including the same target subject may be input into a trained neural network model; based on the different movement data of the target subject in the two images, the trained model can predict the movement data of the target subject at the next time sequence and the target subject prediction area. Alternatively, a first image including a moving target may be input, where the first image carries the target subject detection area and the motion data of the target subject corresponding to the current time sequence, and the network model can output a second image carrying the target subject prediction area and the motion data of the target subject corresponding to the next time sequence. The target subject prediction area is focused on, and the electronic device obtains the detection image according to the pixel information of the pixel points included in each pixel point group in the image sensor. A sub-pixel point included in the image sensor is a photosensitive element that can convert a light signal into an electrical signal; the intensity of the light signal received by the sub-pixel point can be obtained from the electrical signal it outputs, and the pixel information of the sub-pixel point can be obtained from that intensity.
获取模块1406,用于获取检测图像的相位差值,相位差值包括第一方向的相位差值和第二方向的相位差值。第一方向与第二方向成预设夹角。The obtaining module 1406 is configured to obtain the phase difference value of the detection image, the phase difference value including the phase difference value in the first direction and the phase difference value in the second direction. The first direction and the second direction form a preset included angle.
具体的,获取模块1406用于获取检测图像的相位差值,该相位差值包括第一方向的相位差值和第二方向的相位差值。第一方向与第二方向成预设夹角
Figure PCTCN2020126139-appb-000005
可取值如30°、40°、45°、60°等角度,
Figure PCTCN2020126139-appb-000006
还可以是90°,即当第一方向的相位差值是指水平方向上的相位差值,则第二方向的相位差值是指竖直方向上的相位差值。Specifically, the acquiring module 1406 is configured to acquire the phase difference value of the detection image, the phase difference value including the phase difference value in the first direction and the phase difference value in the second direction. The preset included angle between the first direction and the second direction may take values such as 30°, 40°, 45°, or 60°, and may also be 90°; that is, when the phase difference value in the first direction refers to the phase difference value in the horizontal direction, the phase difference value in the second direction refers to the phase difference value in the vertical direction.
追焦模块1408,用于根据第一方向的相位差值和第二方向的相位差值控制镜头持续对移动的目标主体进行对焦。The focus tracking module 1408 is configured to control the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
具体的,追焦模块1408用于根据第一方向的相位差值和第二方向的相位差值能够得到与离焦距离值呈映射关系的目标相位差值,再由于目标相位差值与目标离焦距离值之间的对应关系可通过标定得到,获取第一方向的相位差值和第二方向的相位差值与目标相位差值,即可得到目标离焦距离。控制镜头根据目标离焦距离值持续对移动的目标主体进行对焦。追焦指的是当镜头对目标主体进行对焦之后,在后续的拍摄过程中保持对目标主体的对焦的过程,获取的检测图像中的目标主体保持清晰成像。Specifically, the focus tracking module 1408 is configured to obtain, according to the phase difference value in the first direction and the phase difference value in the second direction, a target phase difference value that has a mapping relationship with the defocus distance value. Since the correspondence between the target phase difference value and the target defocus distance value can be obtained through calibration, the target defocus distance can be obtained once the phase difference values in the first and second directions and the target phase difference value are acquired. The lens is controlled to continuously focus on the moving target subject according to the target defocus distance value. Focus tracking refers to the process of keeping the target subject in focus during subsequent shooting after the lens has focused on it, so that the target subject in the acquired detection images remains sharply imaged.
本实施例提供的追焦装置通过获取预览图像中的目标主体所处的目标主体检测区域。当目标主体移动时,根据目标主体检测区域和目标主体的移动数据确定目标主体预测区域,并获取目标主体预测区域对应的检测图像。获取检测图像的相位差值,相位差值包括第一方向的相位差值和第二方向的相位差值。第一方向与第二方向成预设夹角。根据第一方向的相位差值和第二方向的相位差值控制镜头持续对移动的目标主体进行对焦。本申请提供的方案针对存在水平纹理或竖直纹理的场景都可以有效的利用相位差值进行追焦,提高了追焦的准确度和稳定度。The focus tracking device provided in this embodiment acquires the target subject detection area in the preview image where the target subject is located. When the target subject moves, the target subject prediction area is determined according to the target subject detection area and the movement data of the target subject, and the detection image corresponding to the target subject prediction area is acquired. The phase difference value of the detection image is acquired, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction. The first direction and the second direction form a preset angle. The lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction. The solution provided by the present application can effectively use the phase difference value to perform focus tracking for scenes with horizontal textures or vertical textures, and improve the accuracy and stability of focus tracking.
在其中一个实施例中,预测模块用于控制镜头移动以使焦点对准目标主体预测区域的中心,基于焦点采集目标主体预测区域对应的检测图像。In one of the embodiments, the prediction module is used to control the movement of the lens so that the focus is at the center of the predicted region of the target subject, and the detection image corresponding to the predicted region of the target subject is collected based on the focus.
在其中一个实施例中,预测模块用于将第一图像输入至预测网络模型,第一图像携带目标主体检测区域和目标主体的运动数据的信息,获取预测网络模型输出的第二图像,第二图像携带目标主体预测区域的信息。In one of the embodiments, the prediction module is used to input the first image into the prediction network model, the first image carries the information of the target subject detection area and the motion data of the target subject, and the second image output by the prediction network model is obtained. The image carries the information of the predicted area of the target subject.
在其中一个实施例中,获取模块用于根据第一方向的相位差值和第二方向的相位差值获取目标离焦距离;根据目标离焦距离控制电子设备的镜头移动持续对移动的目标主体进行对焦。In one of the embodiments, the acquiring module is configured to acquire the target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction, and to control the lens of the electronic device to move according to the target defocus distance so as to continuously focus on the moving target subject.
在其中一个实施例中,获取模块用于根据第一方向的相位差值和第二方向的相位差值获取目标相位差值;根据目标相位差值获取目标离焦距离。In one of the embodiments, the obtaining module is configured to obtain the target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction; and obtain the target defocus distance according to the target phase difference value.
在其中一个实施例中,获取模块用于根据已标定的离焦函数和目标相位差值计算目标离焦距离,已标定的离焦函数用于表征目标相位差值和目标离焦距离的关系。In one of the embodiments, the acquisition module is used to calculate the target defocus distance according to the calibrated defocus function and the target phase difference value, and the calibrated defocus function is used to characterize the relationship between the target phase difference value and the target defocus distance.
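A minimal sketch of the calibrated defocus function; the linear form and the coefficient values below are assumptions for illustration, since the patent only states that the mapping from target phase difference to target defocus distance is obtained through calibration:

```python
# Hypothetical calibration coefficients (would come from a per-module calibration).
DCC_SLOPE = 12.5   # defocus-conversion coefficient
DCC_OFFSET = 0.0   # calibration offset

def defocus_distance(target_pd):
    """Map the target phase difference to a defocus distance
    (e.g. in lens-drive units) via the calibrated defocus function."""
    return DCC_SLOPE * target_pd + DCC_OFFSET

print(defocus_distance(2.0))  # → 25.0
```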
在其中一个实施例中,获取模块用于获取第一方向的相位差值对应的第一置信度;获取第二方向的相位差值对应的第二置信度;根据第一置信度和第二置信度的大小关系确定目标相位差值。In one of the embodiments, the acquiring module is configured to acquire a first confidence level corresponding to a phase difference value in a first direction; acquire a second confidence level corresponding to a phase difference value in a second direction; according to the first confidence level and the second confidence level The degree of relationship determines the target phase difference value.
在其中一个实施例中,获取模块用于当第一置信度大于第二置信度时,将第一置信度对应的第一方向的相位差值作为目标相位差值;当第二置信度大于第一置信度时,将第二置信度对应的第二方向的相位差值作为目标相位差值;当第一置信度等于第二置信度时,将第一方向相位差和第二方向相位差均作为目标相位差值。In one of the embodiments, the acquiring module is configured to: when the first confidence is greater than the second confidence, use the phase difference value in the first direction corresponding to the first confidence as the target phase difference value; when the second confidence is greater than the first confidence, use the phase difference value in the second direction corresponding to the second confidence as the target phase difference value; and when the first confidence is equal to the second confidence, use both the first-direction phase difference and the second-direction phase difference as the target phase difference value.
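The three-case selection rule above can be sketched as follows; the function and parameter names are illustrative, not taken from the patent:

```python
def target_phase_difference(pd1, conf1, pd2, conf2):
    """Select the target phase difference from the first-direction value pd1
    and the second-direction value pd2 by comparing their confidences."""
    if conf1 > conf2:
        return pd1
    if conf2 > conf1:
        return pd2
    return (pd1, pd2)  # equal confidences: both values serve as the target

print(target_phase_difference(1.5, 0.9, -0.3, 0.4))  # → 1.5
```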
在其中一个实施例中,获取模块用于按照第一方向将检测图像切分为第一切分图像和第二切分图像。根据第一切分图像和第二切分图像对应的相位关系获取第一方向的相位差值;按照第二方向将检测图像切分为第三切分图像和第四切分图像。根据第三切分图像和第四切分图像对应的相位关系获取第二方向的相位差值。In one of the embodiments, the acquisition module is used to divide the detected image into the first segmented image and the second segmented image according to the first direction. The phase difference value in the first direction is obtained according to the phase relationship corresponding to the first segmented image and the second segmented image; the detected image is segmented into the third segmented image and the fourth segmented image according to the second direction. Acquire the phase difference value in the second direction according to the phase relationship corresponding to the third segmented image and the fourth segmented image.
在其中一个实施例中,获取模块用于按照第一方向对检测图像进行切分处理,得到多个图像区域,每个图像区域包括检测图像中的一行像素,从多个图像区域中获取多个第一切分图像区域和多个第二切分图像区域,第一切分图像区域包括检测图像中偶数行的像素,第二切分图像区域包括检测图像中奇数行的像素,利用多个第一切分图像区域拼接成第一切分图像,利用多个第二切分图像区域组成第二切分图像;按照第二方向对检测图像进行切分处理,得到多个图像区域,每个图像区域包括检测图像中的一列像素;从多个图像区域中获取多个第三切分图像区域和多个第四切分图像区域,第三切分图像区域包括检测图像中偶数列的像素,第四切分图像区域包括检测图像中奇数列的像素;利用多个第三切分图像区域拼接成第三切分图像,利用多个第四切分图像区域组成第四切分图像。In one of the embodiments, the acquisition module is configured to: segment the detection image in the first direction to obtain multiple image areas, each image area including one row of pixels in the detection image; obtain, from the multiple image areas, multiple first segmented image areas and multiple second segmented image areas, the first segmented image areas including the even-numbered rows of pixels in the detection image and the second segmented image areas including the odd-numbered rows; splice the multiple first segmented image areas into the first segmented image and compose the second segmented image from the multiple second segmented image areas; segment the detection image in the second direction to obtain multiple image areas, each image area including one column of pixels in the detection image; obtain, from these image areas, multiple third segmented image areas and multiple fourth segmented image areas, the third segmented image areas including the even-numbered columns of pixels in the detection image and the fourth segmented image areas including the odd-numbered columns; and splice the multiple third segmented image areas into the third segmented image and compose the fourth segmented image from the multiple fourth segmented image areas.
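The alternating-row and alternating-column splitting above can be sketched with array slicing; 0-based indexing is used here for illustration, whereas the patent counts rows and columns from 1:

```python
import numpy as np

def split_by_rows(img):
    """First/second segmented images from alternating rows of the detection image."""
    return img[0::2, :], img[1::2, :]

def split_by_cols(img):
    """Third/fourth segmented images from alternating columns."""
    return img[:, 0::2], img[:, 1::2]

img = np.arange(16).reshape(4, 4)  # toy 4x4 detection image
first, second = split_by_rows(img)
third, fourth = split_by_cols(img)
print(first.shape, third.shape)  # (2, 4) (4, 2)
```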
在其中一个实施例中,获取模块用于根据第一切分图像和第二切分图像中相互匹配的像素的位置差异,确定相互匹配的像素的相位差值。根据相互匹配的像素的相位差值确定第一方向的相位差值;根据第三切分图像和第四切分图像中相互匹配的像素的位置差异,确定相互匹配的像素的相位差值。根据相互匹配的像素的相位差值确定第二方向的相位差值。In one of the embodiments, the acquisition module is configured to determine the phase difference value of the pixels that match each other according to the position difference of the pixels that match each other in the first segmented image and the second segmented image. The phase difference value in the first direction is determined according to the phase difference values of the mutually matched pixels; the phase difference value of the mutually matched pixels is determined according to the position difference of the mutually matched pixels in the third segmented image and the fourth segmented image. The phase difference value in the second direction is determined according to the phase difference value of the pixels that match each other.
上述追焦装置中各个模块的划分仅用于举例说明,在其他实施例中,可将追焦装置按照需要划分为不同的模块,以完成上述追焦装置的全部或部分功能。The division of the various modules in the focus tracking device described above is only for illustration. In other embodiments, the focus tracking device may be divided into different modules as needed to complete all or part of the functions of the focus tracking device.
关于追焦装置的具体限定可以参见上文中对于追焦方法的限定,在此不再赘述。上述追焦装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于电子设备中的处理器中,也可以以软件形式存储于电子设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。For the specific definition of the focus tracking device, please refer to the above definition of the focus tracking method, which will not be repeated here. Each module in the above-mentioned tracking device can be implemented in whole or in part by software, hardware, and a combination thereof. The foregoing modules may be embedded in the form of hardware or independent of the processor in the electronic device, or may be stored in the memory of the electronic device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
一种电子设备,包括存储器及一个或多个处理器,存储器中储存有计算机可读指令,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如上述各个实施例中的追焦方法。An electronic device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the focus tracking method in each of the foregoing embodiments.
本申请实施例还提供了一种计算机可读存储介质。一个或多个包含计算机可执行指令的非易失性计算机可读存储介质,当所述计算机可执行指令被一个或多个处理器执行时,使得所述处理器执行上述各个实施例中的追焦方法。The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the focus tracking method in each of the foregoing embodiments.
一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各个实施例中的追焦方法。A computer program product containing instructions, when it runs on a computer, causes the computer to execute the focus tracking method in each of the above-mentioned embodiments.
在本申请的其中一个实施例中,提供了一种电子设备,该电子设备可以为具有数字图像拍摄功能的电子设备,例如,该电子设备可以为智能手机、平板电脑、照相机或者摄像机等。其内部结构图可以如图15所示。该电子设备包括通过系统总线连接的处理器和存储器。其中,该电子设备的处理器用于提供计算和控制能力。该电子设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质可以存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机可读指令被处理器执行时实现本申请实施例提供的一种追焦方法。In one of the embodiments of the present application, an electronic device is provided. The electronic device may be one with a digital image capturing function; for example, it may be a smart phone, a tablet computer, a camera, or a video camera. Its internal structure diagram may be as shown in FIG. 15. The electronic device includes a processor and a memory connected through a system bus, where the processor is used to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium may store an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. When executed by the processor, the computer-readable instructions implement a focus tracking method provided in the embodiments of the present application.
除此以外,虽然图15未示出,该电子设备还可以包括镜头和图像传感器,其中,镜头可以由一组透镜组成,图像传感器可以为金属氧化物半导体元件(英文:Complementary Metal Oxide Semiconductor,简称:CMOS)图像传感器、电荷耦合元件(英文:Charge-coupled Device,简称:CCD)、量子薄膜传感器或者有机传感器等。该图像传感器可以通过总线与处理器连接,处理器可以通过图像传感器向其输出的信号来实现本申请实施例提供的一种追焦方法。In addition, although not shown in FIG. 15, the electronic device may also include a lens and an image sensor, where the lens may be composed of a group of lens elements, and the image sensor may be a complementary metal-oxide-semiconductor (CMOS) image sensor, a charge-coupled device (CCD), a quantum-film sensor, an organic sensor, or the like. The image sensor may be connected to the processor through a bus, and the processor may implement a focus tracking method provided in the embodiments of the present application based on the signals output to it by the image sensor.
本领域技术人员可以理解,图15中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的电子设备的限定,具体的电子设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。Those skilled in the art can understand that the structure shown in FIG. 15 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the electronic device to which the solution is applied. A specific electronic device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
在本申请的一个实施例中,提供了一种电子设备,该电子设备包括存储器和一个或多个处理器,存储器中存储有计算机可读指令,一个或多个处理器执行计算机可读指令时实现以下步骤:In an embodiment of the present application, an electronic device is provided. The electronic device includes a memory and one or more processors, and the memory stores computer-readable instructions. When the one or more processors execute the computer-readable instructions, the following steps are implemented:
获取预览图像中的目标主体所处的目标主体检测区域。当目标主体移动时,根据目标主体检测区域和目标主体的移动数据确定目标主体预测区域,并获取目标主体预测区域对应的检测图像。获取检测图像的相位差值,相位差值包括第一方向的相位差值和第二方向的相位差值。第一方向与第二方向成预设夹角。根据第一方向的相位差值和第二方向的相位差值控制镜头持续对移动的目标主体进行对焦。Obtain the target subject detection area where the target subject in the preview image is located. When the target subject moves, the target subject prediction area is determined according to the target subject detection area and the movement data of the target subject, and the detection image corresponding to the target subject prediction area is acquired. The phase difference value of the detection image is acquired, and the phase difference value includes the phase difference value in the first direction and the phase difference value in the second direction. The first direction and the second direction form a preset angle. The lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
本实施例提供的计算机可读存储介质,其实现原理和技术效果与上述方法实施例类似,在此不再赘述。The implementation principles and technical effects of the computer-readable storage medium provided in this embodiment are similar to those of the foregoing method embodiments, and will not be repeated here.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through computer-readable instructions. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。The above-mentioned embodiments only express several implementation manners of the present application, and the description is relatively specific and detailed, but it should not be understood as a limitation to the patent scope of the present application. It should be pointed out that for those of ordinary skill in the art, without departing from the concept of this application, several modifications and improvements can be made, and these all fall within the protection scope of this application. Therefore, the scope of protection of the patent of this application shall be subject to the appended claims.

Claims (37)

  1. 一种追焦方法,应用于电子设备,所述电子设备包括图像传感器和镜头,所述图像传感器包括阵列排布的多个像素点组,每个所述像素点组包括阵列排布的M*N个像素点;每个像素点对应一个感光单元,其中,M和N均为大于或等于2的自然数;所述方法包括:A focus tracking method, applied to an electronic device, the electronic device including an image sensor and a lens, the image sensor including a plurality of pixel point groups arranged in an array, each of the pixel point groups including M*N pixel points arranged in an array, each pixel point corresponding to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2; the method including:
    获取预览图像中的目标主体所处的目标主体检测区域;Acquiring the target subject detection area where the target subject in the preview image is located;
    当所述目标主体移动时,根据所述目标主体检测区域和所述目标主体的移动数据确定目标主体预测区域,并获取所述目标主体预测区域对应的检测图像;When the target subject moves, determine a target subject prediction area according to the target subject detection area and the movement data of the target subject, and obtain a detection image corresponding to the target subject prediction area;
    利用所述图像传感器获取所述检测图像的相位差值,所述相位差值包括第一方向的相位差值和第二方向的相位差值;所述第一方向与所述第二方向成预设夹角;及acquiring, by using the image sensor, the phase difference value of the detection image, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset included angle; and
    根据所述第一方向的相位差值和所述第二方向的相位差值控制所述镜头持续对移动的所述目标主体进行对焦。The lens is controlled to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  2. 根据权利要求1所述的方法,其中,所述并获取所述目标主体预测区域对应的检测图像,包括:The method according to claim 1, wherein the acquiring the detection image corresponding to the target subject prediction area includes:
    控制所述镜头移动以使焦点对准所述目标主体预测区域的中心并采集所述目标主体预测区域对应的检测图像。Controlling the movement of the lens so that the focus is on the center of the target subject prediction area and collecting a detection image corresponding to the target subject prediction area.
  3. 根据权利要求1所述的方法,其中,所述根据所述目标主体检测区域和所述目标主体的运动数据确定目标主体预测区域,包括:The method according to claim 1, wherein the determining the target subject prediction area according to the target subject detection area and the motion data of the target subject comprises:
    将第一图像输入至预测网络模型,所述第一图像携带所述目标主体检测区域和所述目标主体的运动数据的信息;Inputting a first image into a prediction network model, the first image carrying information about the target subject detection area and the motion data of the target subject;
    获取所述预测网络模型输出的第二图像,所述第二图像标记有所述目标主体预测区域。Acquire a second image output by the prediction network model, where the second image is marked with the target subject prediction area.
  4. 根据权利要求3所述的方法,其中,所述预测网络模型为基于循环神经网络算法建立的网络模型。The method according to claim 3, wherein the predictive network model is a network model established based on a recurrent neural network algorithm.
  5. The method according to claim 1, wherein the controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction comprises:
    acquiring a target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction; and
    controlling the lens of the electronic device to move according to the target defocus distance so as to continuously focus on the moving target subject.
  6. The method according to claim 5, wherein the acquiring a target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction comprises:
    acquiring a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction; and
    acquiring the target defocus distance according to the target phase difference value.
  7. The method according to claim 6, wherein the acquiring the target defocus distance according to the target phase difference value comprises:
    calculating the target defocus distance according to a calibrated defocus function and the target phase difference value, the calibrated defocus function being used to characterize the relationship between the target phase difference value and the target defocus distance.
  8. The method according to claim 5, wherein the acquiring a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction comprises:
    acquiring a first confidence level corresponding to the phase difference value in the first direction;
    acquiring a second confidence level corresponding to the phase difference value in the second direction; and
    determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level.
  9. The method according to claim 8, wherein the determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level comprises:
    when the first confidence level is greater than the second confidence level, using the phase difference value in the first direction corresponding to the first confidence level as the target phase difference value;
    when the second confidence level is greater than the first confidence level, using the phase difference value in the second direction corresponding to the second confidence level as the target phase difference value; and
    when the first confidence level is equal to the second confidence level, using both the phase difference value in the first direction and the phase difference value in the second direction as the target phase difference value.
  10. The method according to claim 1, wherein the acquiring a phase difference value of the detection image by using the image sensor comprises:
    splitting the detection image into a first segmented image and a second segmented image along the first direction, and acquiring the phase difference value in the first direction according to a phase relationship between the first segmented image and the second segmented image; and
    splitting the detection image into a third segmented image and a fourth segmented image along the second direction, and acquiring the phase difference value in the second direction according to a phase relationship between the third segmented image and the fourth segmented image.
  11. The method according to claim 10, wherein the first direction is a row direction and the second direction is a column direction; the splitting the detection image into a first segmented image and a second segmented image along the first direction comprises:
    performing splitting processing on the detection image along the first direction to obtain a plurality of image regions, each image region comprising one row of pixels of the detection image; acquiring, from the plurality of image regions, a plurality of first segmented image regions and a plurality of second segmented image regions, the first segmented image regions comprising the pixels of even-numbered rows of the detection image and the second segmented image regions comprising the pixels of odd-numbered rows of the detection image; and splicing the plurality of first segmented image regions into the first segmented image and composing the second segmented image from the plurality of second segmented image regions;
    and the splitting the detection image into a third segmented image and a fourth segmented image along the second direction comprises:
    performing splitting processing on the detection image along the second direction to obtain a plurality of image regions, each image region comprising one column of pixels of the detection image; acquiring, from the plurality of image regions, a plurality of third segmented image regions and a plurality of fourth segmented image regions, the third segmented image regions comprising the pixels of even-numbered columns of the detection image and the fourth segmented image regions comprising the pixels of odd-numbered columns of the detection image; and splicing the plurality of third segmented image regions into the third segmented image and composing the fourth segmented image from the plurality of fourth segmented image regions.
  12. The method according to claim 10, wherein the acquiring the phase difference value in the first direction according to the phase relationship between the first segmented image and the second segmented image, and the acquiring the phase difference value in the second direction according to the phase relationship between the third segmented image and the fourth segmented image, comprise:
    determining, according to position differences of mutually matched pixels in the first segmented image and the second segmented image, phase difference values of the mutually matched pixels, and determining the phase difference value in the first direction according to the phase difference values of the mutually matched pixels; and
    determining, according to position differences of mutually matched pixels in the third segmented image and the fourth segmented image, phase difference values of the mutually matched pixels, and determining the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
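The row-wise and column-wise splitting of claims 10 to 12 can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the patented implementation: the function names are hypothetical, and the brute-force sum-of-absolute-differences search merely stands in for the "position differences of mutually matched pixels" from which the claims derive the phase difference value.

```python
import numpy as np

def split_rows(img):
    """Split an image along the row direction into two segmented images:
    one built from the even-numbered rows, one from the odd-numbered rows."""
    return img[0::2, :], img[1::2, :]

def split_cols(img):
    """Split an image along the column direction into two segmented images:
    one built from the even-numbered columns, one from the odd-numbered columns."""
    return img[:, 0::2], img[:, 1::2]

def phase_difference(a, b, max_shift=4):
    """Estimate the pixel shift between two matching segmented images by
    minimising the mean absolute difference over candidate shifts.
    A stand-in for per-pixel matching; real PDAF pipelines are far more
    elaborate (sub-pixel interpolation, per-window confidence, etc.)."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = np.abs(a - np.roll(b, s, axis=1)).mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

Applying `split_rows` yields the first/second segmented images of claim 11, `split_cols` the third/fourth; feeding each pair to `phase_difference` gives one value per direction, mirroring the two branches of claim 12.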
  13. A focus tracking apparatus, applied to an electronic device, the electronic device comprising an image sensor and a lens, the image sensor comprising a plurality of pixel point groups arranged in an array, each pixel point group comprising M*N pixel points arranged in an array, and each pixel point corresponding to one photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2, the apparatus comprising:
    a recognition module, configured to acquire a target subject detection region in which a target subject in a preview image is located;
    a prediction module, configured to, when the target subject moves, determine a target subject prediction region according to the target subject detection region and movement data of the target subject, and acquire a detection image corresponding to the target subject prediction region;
    an acquisition module, configured to acquire a phase difference value of the detection image by using the image sensor, the phase difference value comprising a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset included angle; and
    a focus tracking module, configured to control the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  14. An electronic device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    wherein the electronic device comprises an image sensor and a lens, the image sensor comprises a plurality of pixel point groups arranged in an array, each pixel point group comprises M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, M and N both being natural numbers greater than or equal to 2;
    acquiring a target subject detection region in which a target subject in a preview image is located;
    when the target subject moves, determining a target subject prediction region according to the target subject detection region and movement data of the target subject, and acquiring a detection image corresponding to the target subject prediction region;
    acquiring a phase difference value of the detection image by using the image sensor, the phase difference value comprising a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset included angle; and
    controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  15. The electronic device according to claim 14, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    controlling the lens to move so that the focus is on the center of the target subject prediction region, and collecting the detection image corresponding to the target subject prediction region.
  16. The electronic device according to claim 14, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    inputting a first image into a prediction network model, the first image carrying information about the target subject detection region and the motion data of the target subject; and
    acquiring a second image output by the prediction network model, the second image being marked with the target subject prediction region.
  17. The electronic device according to claim 16, wherein the prediction network model is a network model established based on a recurrent neural network algorithm.
  18. The electronic device according to claim 16, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    acquiring a target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction; and
    controlling the lens of the electronic device to move according to the target defocus distance so as to continuously focus on the moving target subject.
  19. The electronic device according to claim 18, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    acquiring a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction; and
    acquiring the target defocus distance according to the target phase difference value.
  20. The electronic device according to claim 19, wherein the processor, when executing the computer-readable instructions, further performs the following step:
    calculating the target defocus distance according to a calibrated defocus function and the target phase difference value, the calibrated defocus function being used to characterize the relationship between the target phase difference value and the target defocus distance.
  21. The electronic device according to claim 18, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    acquiring a first confidence level corresponding to the phase difference value in the first direction;
    acquiring a second confidence level corresponding to the phase difference value in the second direction; and
    determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level.
  22. The electronic device according to claim 21, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    when the first confidence level is greater than the second confidence level, using the phase difference value in the first direction corresponding to the first confidence level as the target phase difference value;
    when the second confidence level is greater than the first confidence level, using the phase difference value in the second direction corresponding to the second confidence level as the target phase difference value; and
    when the first confidence level is equal to the second confidence level, using both the phase difference value in the first direction and the phase difference value in the second direction as the target phase difference value.
  23. The electronic device according to claim 14, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    splitting the detection image into a first segmented image and a second segmented image along the first direction, and acquiring the phase difference value in the first direction according to a phase relationship between the first segmented image and the second segmented image; and
    splitting the detection image into a third segmented image and a fourth segmented image along the second direction, and acquiring the phase difference value in the second direction according to a phase relationship between the third segmented image and the fourth segmented image.
  24. The electronic device according to claim 14, wherein the first direction is a row direction and the second direction is a column direction, and the processor, when executing the computer-readable instructions, further performs the following steps:
    performing splitting processing on the detection image along the first direction to obtain a plurality of image regions, each image region comprising one row of pixels of the detection image; acquiring, from the plurality of image regions, a plurality of first segmented image regions and a plurality of second segmented image regions, the first segmented image regions comprising the pixels of even-numbered rows of the detection image and the second segmented image regions comprising the pixels of odd-numbered rows of the detection image; and splicing the plurality of first segmented image regions into the first segmented image and composing the second segmented image from the plurality of second segmented image regions; and
    performing splitting processing on the detection image along the second direction to obtain a plurality of image regions, each image region comprising one column of pixels of the detection image; acquiring, from the plurality of image regions, a plurality of third segmented image regions and a plurality of fourth segmented image regions, the third segmented image regions comprising the pixels of even-numbered columns of the detection image and the fourth segmented image regions comprising the pixels of odd-numbered columns of the detection image; and splicing the plurality of third segmented image regions into the third segmented image and composing the fourth segmented image from the plurality of fourth segmented image regions.
  25. The electronic device according to claim 23, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    determining, according to position differences of mutually matched pixels in the first segmented image and the second segmented image, phase difference values of the mutually matched pixels, and determining the phase difference value in the first direction according to the phase difference values of the mutually matched pixels; and
    determining, according to position differences of mutually matched pixels in the third segmented image and the fourth segmented image, phase difference values of the mutually matched pixels, and determining the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
  26. One or more computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    acquiring a target subject detection region in which a target subject in a preview image is located;
    when the target subject moves, determining a target subject prediction region according to the target subject detection region and movement data of the target subject, and acquiring a detection image corresponding to the target subject prediction region;
    acquiring a phase difference value of the detection image by using an image sensor, the phase difference value comprising a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset included angle; the image sensor comprising a plurality of pixel point groups arranged in an array, each pixel point group comprising M*N pixel points arranged in an array, and each pixel point corresponding to one photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2; and
    controlling the lens to continuously focus on the moving target subject according to the phase difference value in the first direction and the phase difference value in the second direction.
  27. The storage medium according to claim 26, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    controlling the lens to move so that the focus is on the center of the target subject prediction region, and collecting the detection image corresponding to the target subject prediction region.
  28. The storage medium according to claim 26, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    inputting a first image into a prediction network model, the first image carrying information about the target subject detection region and the motion data of the target subject; and
    acquiring a second image output by the prediction network model, the second image being marked with the target subject prediction region.
  29. The storage medium according to claim 28, wherein the prediction network model is a network model established based on a recurrent neural network algorithm.
  30. The storage medium according to claim 26, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    acquiring a target defocus distance according to the phase difference value in the first direction and the phase difference value in the second direction; and
    controlling the lens of the electronic device to move according to the target defocus distance so as to continuously focus on the moving target subject.
  31. The storage medium according to claim 30, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    acquiring a target phase difference value according to the phase difference value in the first direction and the phase difference value in the second direction; and
    acquiring the target defocus distance according to the target phase difference value.
  32. The storage medium according to claim 31, wherein the processor, when executing the computer-readable instructions, further performs the following step:
    calculating the target defocus distance according to a calibrated defocus function and the target phase difference value, the calibrated defocus function being used to characterize the relationship between the target phase difference value and the target defocus distance.
  33. The storage medium according to claim 30, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    acquiring a first confidence level corresponding to the phase difference value in the first direction;
    acquiring a second confidence level corresponding to the phase difference value in the second direction; and
    determining the target phase difference value according to the magnitude relationship between the first confidence level and the second confidence level.
  34. The storage medium according to claim 33, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    when the first confidence level is greater than the second confidence level, using the phase difference value in the first direction corresponding to the first confidence level as the target phase difference value;
    when the second confidence level is greater than the first confidence level, using the phase difference value in the second direction corresponding to the second confidence level as the target phase difference value; and
    when the first confidence level is equal to the second confidence level, using both the phase difference value in the first direction and the phase difference value in the second direction as the target phase difference value.
  35. The storage medium according to claim 26, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    splitting the detection image into a first segmented image and a second segmented image along the first direction, and acquiring the phase difference value in the first direction according to a phase relationship between the first segmented image and the second segmented image; and
    splitting the detection image into a third segmented image and a fourth segmented image along the second direction, and acquiring the phase difference value in the second direction according to a phase relationship between the third segmented image and the fourth segmented image.
  36. The storage medium according to claim 26, wherein the first direction is a row direction and the second direction is a column direction, and the processor, when executing the computer-readable instructions, further performs the following steps:
    performing splitting processing on the detection image along the first direction to obtain a plurality of image regions, each image region comprising one row of pixels of the detection image; acquiring, from the plurality of image regions, a plurality of first segmented image regions and a plurality of second segmented image regions, the first segmented image regions comprising the pixels of even-numbered rows of the detection image and the second segmented image regions comprising the pixels of odd-numbered rows of the detection image; and splicing the plurality of first segmented image regions into the first segmented image and composing the second segmented image from the plurality of second segmented image regions; and
    performing splitting processing on the detection image along the second direction to obtain a plurality of image regions, each image region comprising one column of pixels of the detection image; acquiring, from the plurality of image regions, a plurality of third segmented image regions and a plurality of fourth segmented image regions, the third segmented image regions comprising the pixels of even-numbered columns of the detection image and the fourth segmented image regions comprising the pixels of odd-numbered columns of the detection image; and splicing the plurality of third segmented image regions into the third segmented image and composing the fourth segmented image from the plurality of fourth segmented image regions.
  37. The storage medium according to claim 35, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    determining, according to position differences of mutually matched pixels in the first segmented image and the second segmented image, phase difference values of the mutually matched pixels, and determining the phase difference value in the first direction according to the phase difference values of the mutually matched pixels; and
    determining, according to position differences of mutually matched pixels in the third segmented image and the fourth segmented image, phase difference values of the mutually matched pixels, and determining the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
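The confidence-based selection of the target phase difference value (claims 8 to 9 and 33 to 34) and the calibrated defocus function (claims 7, 20 and 32) can be illustrated with a short sketch. The linear defocus model and all function names here are assumptions for illustration; the patent only states that a calibrated function characterizes the relationship between the target phase difference value and the target defocus distance.

```python
def select_target_pd(pd_first, conf_first, pd_second, conf_second):
    """Pick the target phase difference value per claims 8-9: the direction
    whose phase difference has the higher confidence wins; when the
    confidences are equal, both values are kept as the target."""
    if conf_first > conf_second:
        return pd_first
    if conf_second > conf_first:
        return pd_second
    return (pd_first, pd_second)

def defocus_distance(target_pd, slope=0.5, offset=0.0):
    """Map a target phase difference value to a target defocus distance via
    a calibrated defocus function (claims 7 and 20). A linear model with
    hypothetical slope/offset stands in for the per-module calibration."""
    return slope * target_pd + offset
```

The resulting defocus distance would then drive the lens actuator to keep the moving target subject in focus, as in claims 5, 18 and 30.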
PCT/CN2020/126139 2019-11-12 2020-11-03 Focusing method and apparatus, electronic device, and computer readable storage medium WO2021093637A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911101390.0A CN112866542B (en) 2019-11-12 2019-11-12 Focus tracking method and apparatus, electronic device, and computer-readable storage medium
CN201911101390.0 2019-11-12

Publications (1)

Publication Number Publication Date
WO2021093637A1 true WO2021093637A1 (en) 2021-05-20

Family

ID=75912138

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/126139 WO2021093637A1 (en) 2019-11-12 2020-11-03 Focusing method and apparatus, electronic device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112866542B (en)
WO (1) WO2021093637A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841862A (en) * 2022-06-07 2022-08-02 北京拙河科技有限公司 Image stitching method and system based on a hundred-megapixel array camera

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11726392B2 (en) * 2020-09-01 2023-08-15 Sorenson Ip Holdings, Llc System, method, and computer-readable medium for autofocusing a videophone camera
CN117201935A (en) * 2022-05-25 2023-12-08 惠州Tcl移动通信有限公司 Lens focusing method, device, electronic equipment and computer readable storage medium
CN116847194B (en) * 2023-09-01 2023-12-08 荣耀终端有限公司 Focusing method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090284645A1 (en) * 2006-09-04 2009-11-19 Nikon Corporation Camera
US20160205311A1 (en) * 2015-01-14 2016-07-14 Emanuele Mandelli Phase-detect autofocus
CN106031154A (en) * 2014-02-19 2016-10-12 三星电子株式会社 Method for processing image and electronic apparatus therefor
CN106357969A (en) * 2015-07-13 2017-01-25 宏达国际电子股份有限公司 Image capturing device and auto-focus method thereof
CN109922270A (en) * 2019-04-17 2019-06-21 德淮半导体有限公司 Phase focus image sensor chip
CN110248095A (en) * 2019-06-26 2019-09-17 Oppo广东移动通信有限公司 Focusing apparatus, focusing method, and storage medium
CN110248097A (en) * 2019-06-28 2019-09-17 Oppo广东移动通信有限公司 Focus tracking method and apparatus, terminal device, and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5387856B2 (en) * 2010-02-16 2014-01-15 ソニー株式会社 Image processing apparatus, image processing method, image processing program, and imaging apparatus
JP5764884B2 (en) * 2010-08-16 2015-08-19 ソニー株式会社 Imaging device and imaging apparatus
WO2015141084A1 (en) * 2014-03-18 2015-09-24 富士フイルム株式会社 Imaging device, and focus control method
KR102374112B1 (en) * 2015-07-15 2022-03-14 삼성전자주식회사 An image sensor including an auto focusing pixel, and an image processing system including the same
CN106973206B (en) * 2017-04-28 2020-06-05 Oppo广东移动通信有限公司 Camera shooting module group camera shooting processing method and device and terminal equipment



Also Published As

Publication number Publication date
CN112866542B (en) 2022-08-12
CN112866542A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
WO2021093637A1 (en) Focusing method and apparatus, electronic device, and computer readable storage medium
CN110248096B (en) Focusing method and device, electronic equipment and computer readable storage medium
JP6855587B2 (en) Devices and methods for acquiring distance information from a viewpoint
US10257502B2 (en) Methods and apparatus for controlling light field capture
JP6509027B2 (en) Object tracking device, optical apparatus, imaging device, control method of object tracking device, program
EP4013033A1 (en) Method and apparatus for focusing on subject, and electronic device, and storage medium
CN110248097B (en) Focus tracking method and device, terminal equipment and computer readable storage medium
US20140002606A1 (en) Enhanced image processing with lens motion
CN110191287B (en) Focusing method and device, electronic equipment and computer readable storage medium
WO2021093635A1 (en) Image processing method and apparatus, electronic device, and computer readable storage medium
CN112866553B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866510B (en) Focusing method and device, electronic equipment and computer readable storage medium
WO2021093312A1 (en) Imaging assembly, focusing method and apparatus, and electronic device
CN112866675B (en) Depth map generation method and device, electronic equipment and computer-readable storage medium
CN112866655B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2021093528A1 (en) Focusing method and apparatus, and electronic device and computer readable storage medium
CN112866545B (en) Focusing control method and device, electronic equipment and computer readable storage medium
WO2021093502A1 (en) Phase difference obtaining method and apparatus, and electronic device
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112862880A (en) Depth information acquisition method and device, electronic equipment and storage medium
CN112866552B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866551B (en) Focusing method and device, electronic equipment and computer readable storage medium
TW202014665A (en) Position inspection method and computer program product
CN112866544B (en) Phase difference acquisition method, device, equipment and storage medium
CN112866543B (en) Focusing control method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20887753

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20887753

Country of ref document: EP

Kind code of ref document: A1