WO2020259474A1 - Focus tracking method and apparatus, terminal device, and computer-readable storage medium - Google Patents

Focus tracking method and apparatus, terminal device, and computer-readable storage medium

Info

Publication number
WO2020259474A1
WO2020259474A1 (PCT/CN2020/097616)
Authority
WO
WIPO (PCT)
Prior art keywords
subject
current
movement
image
area
Prior art date
Application number
PCT/CN2020/097616
Other languages
English (en)
Chinese (zh)
Inventor
康健
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020259474A1 publication Critical patent/WO2020259474A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions

Definitions

  • This application relates to the field of computer technology, in particular to a focus tracking method, device, terminal equipment, and computer-readable storage medium.
  • A focus tracking method, apparatus, terminal device, and computer-readable storage medium are provided.
  • a focus tracking method includes:
  • a focus tracking device includes:
  • the historical image subject area acquisition module is configured to acquire the first subject area obtained by subject detection in the first captured image;
  • the subject prediction area module is used to obtain the position movement data of the current shooting terminal, and perform a matching movement on the position of the first subject area according to the position movement data to determine the subject prediction area in the currently browsed image;
  • the second subject area determining module is configured to perform subject detection on the currently browsed image according to the subject prediction area to obtain the second subject area;
  • the first distance driving module is configured to, when it is determined from the position movement data that the current photographing terminal is moving in the optical axis direction, drive the lens to move in the same direction as the movement of the current photographing terminal along the optical axis to a first position;
  • the focusing module is used for focusing on the second subject area using the first position as a starting point, and taking an image at the target focusing position.
  • a terminal device includes a memory and one or more processors.
  • the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed by the one or more processors, the one or more processors perform the following steps:
  • One or more computer-readable storage media storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps: acquiring the first subject area obtained by subject detection in the first captured image;
  • Figure 1 is a block diagram of the internal structure of a mobile device in one or more embodiments;
  • Figure 2 is a flowchart of a focus tracking method in one or more embodiments;
  • Figure 3 is a schematic diagram of a device interface in one or more embodiments;
  • Figure 4 is a structural block diagram of a focus tracking device in one or more embodiments;
  • Figure 5 is a schematic diagram of the internal structure of a terminal device in one or more embodiments.
  • first, second, etc. used in this application can be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish the first element from another element.
  • first client may be referred to as the second client, and similarly, the second client may be referred to as the first client. Both the first client and the second client are clients, but they are not the same client.
  • the focusing method in the embodiment of this application can be applied to a terminal device.
  • the terminal device can be a computer device with a camera, a personal digital assistant, a tablet computer, a smart phone, a wearable device, etc.
  • When the camera in the terminal device takes an image, it will automatically focus to ensure that the captured image is clear.
  • When shooting moving objects, this avoids the prediction errors or recognition failures that traditional methods are prone to, which cause focus tracking to fail and produce a blurry, out-of-focus image.
  • the foregoing terminal device may include an image processing circuit, which may be implemented by hardware and/or software components, and may include various processing units that define an ISP (Image Signal Processing, image signal processing) pipeline.
  • Figure 1 is a schematic diagram of an image processing circuit in one of the embodiments. As shown in FIG. 1, for ease of description, only various aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes an ISP processor 140 and a control logic 150.
  • the image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 110.
  • the imaging device 110 may include a camera having one or more lenses 112, an image sensor 114, and an actuator 116.
  • the actuator 116 can drive the lens 112 to move.
  • the image sensor 114 may include a color filter array (such as a Bayer filter).
  • the image sensor 114 may obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 114, and provide a set of raw image data that can be processed by the ISP processor 140.
  • the sensor 120 (such as a gyroscope) can provide the collected image processing parameters (such as anti-shake parameters) to the ISP processor 140 based on the interface type of the sensor 120.
  • the sensor 120 interface may utilize SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above interfaces.
  • the image sensor 114 may also send raw image data to the sensor 120, and the sensor 120 may provide the raw image data to the ISP processor 140 based on the interface type of the sensor 120, or the sensor 120 may store the raw image data in the image memory 130.
  • the ISP processor 140 processes the original image data pixel by pixel in multiple formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the original image data, and collect statistical information about the image data. Among them, the image processing operations can be performed with the same or different bit depth accuracy.
  • the ISP processor 140 may also receive image data from the image memory 130.
  • the sensor 120 interface sends the original image data to the image memory 130, and the original image data in the image memory 130 is provided to the ISP processor 140 for processing.
  • the image memory 130 may be a part of a memory device, a storage device, or an independent dedicated memory in a terminal device, and may include DMA (Direct Memory Access, direct memory access) features.
  • the ISP processor 140 may perform one or more image processing operations, such as temporal filtering.
  • the processed image data can be sent to the image memory 130 for additional processing before being displayed.
  • the ISP processor 140 receives processed data from the image memory 130, and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces.
  • the image data processed by the ISP processor 140 may be output to the display 170 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit, graphics processor).
  • the output of the ISP processor 140 can also be sent to the image memory 130, and the display 170 can read image data from the image memory 130.
  • the image memory 130 may be configured to implement one or more frame buffers.
  • the output of the ISP processor 140 may be sent to the encoder/decoder 160 in order to encode/decode image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 170 device.
  • the encoder/decoder 160 may be implemented by a CPU or GPU or a coprocessor.
  • the statistical data determined by the ISP processor 140 may be sent to the control logic 150 unit.
  • the statistical data may include image sensor 114 statistical information such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 112 shading correction.
  • the control logic 150 may include a processor and/or a microcontroller that executes one or more routines (such as firmware). The one or more routines can determine the control parameters of the imaging device 110 and the control parameters of the ISP processor 140 based on the received statistical data.
  • For example, the control parameters of the imaging device 110 may include sensor 120 control parameters (such as gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 112 control parameters (such as focal length for focusing or zooming), or a combination of these parameters.
  • the control logic 150 can output the control parameters of the lens 112 to the actuator 116, and the actuator 116 drives the lens 112 to move according to the control parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 112 shading correction parameters.
  • Fig. 2 is a flowchart of a focus tracking method in one of the embodiments.
  • a focus tracking method, described here as applied to the above-mentioned terminal device by way of example, specifically includes the following steps:
  • Step 202: Acquire a first subject area obtained by subject detection in the first captured image.
  • the subject is a target subject
  • the first captured image is a historical captured image
  • the first subject area can be obtained by identifying the location of the subject through a subject detection algorithm.
  • the subject detection algorithm can be customized, such as background subtraction algorithm, deep learning algorithm, face detection algorithm, etc., to obtain the first subject area in the first captured image, where the first subject area can be a regular or irregular area.
  • In face recognition, for example, the face remaining after the background is removed is the target subject.
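  • As a hedged illustration of the background-subtraction option listed above (not part of the patent text), the sketch below uses OpenCV's MOG2 subtractor, chosen here only as a stand-in, to obtain a first subject area from a short frame sequence:

```python
import cv2

# Rough sketch: background subtraction as one way to find the first subject area.
# OpenCV 4 API is assumed; the subtractor and its parameters are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=25)

def first_subject_area(frames):
    """Feed a short sequence of BGR frames and return the bounding box
    (x, y, w, h) of the largest moving region in the last frame."""
    mask = None
    for frame in frames:
        mask = subtractor.apply(frame)        # foreground mask for this frame
    if mask is None:
        return None
    mask = cv2.medianBlur(mask, 5)            # suppress isolated noise pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)          # regular rectangular subject area
```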
  • Subject recognition in the first captured image can also be performed through a deep-learning neural network algorithm.
  • During training, the parameters of the neural network are adjusted according to the detected prediction area, so as to obtain a subject detection network that can accurately identify the subject area.
  • If the training images further include the target category, a subject detection network that can accurately identify both the subject area and its category is obtained by training.
  • the subject detection network can be implemented by deep learning algorithms such as CNN (Convolutional Neural Network, Convolutional Neural Network), DNN (Deep Neural Network, Deep Neural Network), or RNN (Recurrent Neural Network, Recurrent Neural Network), etc.
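  • As a hedged sketch of the deep-learning variant, the code below uses a generic pretrained detector (torchvision's Faster R-CNN, a stand-in since the application does not name a specific network) and takes the highest-scoring box as the first subject area:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained generic detector used as a stand-in for the "subject detection network".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_first_subject_area(image_path, score_threshold=0.5):
    """Return the highest-scoring box (x1, y1, x2, y2) in the first captured image,
    or None if no subject is detected above the threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]        # dict with 'boxes', 'labels', 'scores'
    scores, boxes = prediction["scores"], prediction["boxes"]
    keep = scores >= score_threshold
    if not keep.any():
        return None
    best = scores[keep].argmax()
    return boxes[keep][best].tolist()
```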
  • Step 204: Obtain the position movement data of the current shooting terminal, and perform a matching movement on the position of the first subject area according to the position movement data to determine the subject prediction area in the currently browsed image.
  • the position movement data is the position movement data generated by changes in the position and direction of the current photographing terminal, and the position movement data of the current photographing terminal can be collected in real time by a detection device such as a gyroscope.
  • In the process of image or video shooting, the subject is often imaged at the center of the image. When the subject moves, the camera keeps facing the subject, so there is a matching relationship between the direction and speed of the camera's movement and the direction and speed of the subject's movement. The position of the first subject area can therefore be moved in a direction and by a distance matching the position movement data to determine the subject prediction area in the current browse image.
  • the subject prediction area is used to predict the approximate location range of the subject after moving.
  • Step 206: Perform subject detection on the current browse image according to the subject prediction area to obtain a second subject area.
  • the subject prediction area predicts the approximate range of the subject in the current browse image, so subject detection can be performed on the current browse image within the subject prediction area, or within a preset distance of the subject prediction area, to obtain the second subject area. Since there is no need to detect over the entire area of the currently browsed image, and detection is performed only in a small range based on the subject prediction area, the efficiency of subject area recognition is greatly improved.
  • the subject detection method can be customized and can be the same as or different from that used in step 202. FIG. 3 is a schematic diagram of detecting the second subject area in an embodiment; during focusing, only the subject area is focused on.
  • Step 208: When it is determined that the current photographing terminal is moving in the optical axis direction according to the position movement data, drive the lens to move in the same direction according to the moving direction of the current photographing terminal in the optical axis direction to reach the first position.
  • the optical axis direction refers to the direction perpendicular to the shooting plane. If the shooting plane is taken as the xy plane, the optical axis direction is the z axis. If movement occurs in the object plane, that is, within the xy plane, the focal plane remains the same, and focus value statistics can still be performed on the second subject area. If the current shooting terminal moves along the optical axis, however, the camera needs to focus first to find the in-focus position of the lens before shooting.
  • the lens refers to the optical element used to change the optical path in the camera, and is generally divided into a convex lens and a concave lens.
  • In focus refers to the state in which the subject is clearly imaged when the photo is taken, and the in-focus position is the position of the lens when it is in focus. If the current shooting terminal has moved closer to the object, the lens is driven to move forward; if it has moved farther away, the lens is driven to move backward. Because the lens is moved in advance, the remaining distance to the in-focus position is shortened, which reduces the focusing time.
  • the moving distance can be customized and is generally less than a preset threshold.
  • the lens is driven to the first position by a motor. For example, if the current lens position is 200 and the target focus position is 300, the motor first moves the lens from 200 to 210. Since position 210 is closer to the target focus position, focusing time is reduced.
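  • A minimal sketch of this pre-move step, assuming lens positions are integer motor codes and that the pre-move is capped by a preset threshold (function and parameter names are illustrative, not from the patent):

```python
def pre_move_lens(current_position, axis_direction, first_focus_distance=10, preset_threshold=20):
    """Return the 'first position' reached by pre-moving the lens.

    axis_direction is +1 when the terminal moved closer to the subject along the
    optical axis (lens driven forward) and -1 when it moved away (lens driven
    backward). The step is capped by preset_threshold so the pre-move cannot
    overshoot the still-unknown target focus position.
    """
    step = min(abs(first_focus_distance), preset_threshold)
    return current_position + axis_direction * step

# With the numbers from the example above: lens at 200, target focus position near 300.
first_position = pre_move_lens(200, axis_direction=+1, first_focus_distance=10)
assert first_position == 210   # focusing now starts 10 codes closer to the target
```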
  • Step 210: Focus on the second subject area with the first position as a starting point, and take an image at the target focus position.
  • The focus tracking method in this embodiment acquires the first subject area obtained by subject detection in the first captured image, obtains the position movement data of the current photographing terminal, moves the position of the first subject area to match the position movement data to determine the subject prediction area in the currently browsed image, and performs subject detection on the currently browsed image according to the subject prediction area to obtain the second subject area. When the position movement data indicates that the current photographing terminal is moving along the optical axis, the lens is driven in the same direction to the first position, focusing on the second subject area starts from the first position, and the image is taken at the target focus position. Because the subject area in the image is accurately identified from the position movement data of the shooting terminal, the subject can be found accurately and quickly when focusing, and the focusing action can be completed quickly according to the position movement data, which improves the accuracy of focus tracking.
  • In one embodiment, step 204 includes: acquiring the gyroscope data of the current shooting terminal; determining the movement direction of the current shooting terminal according to the gyroscope data; acquiring the accelerometer data of the current shooting terminal; determining the current moving speed of the shooting terminal according to the accelerometer data; calculating the target moving distance of the subject according to the moving speed; moving the first subject area in the movement direction by the target moving distance; and determining the subject prediction area in the current browse image according to the moved first subject area.
  • the gyroscope is an angular motion detection device for detecting angular velocity.
  • the current camera terminal obtains the angular velocity data output by the gyroscope in the process of taking an image.
  • the electronic device can analyze the moving direction of the current shooting terminal based on the angular velocity data.
  • the angles relative to the initial state about the x, y, and z axes are the rotation angles of the current shooting terminal. For example, the initial angles are 0, 0, 0 when the phone has not moved, and after the phone moves, "+x, 40" means the phone has rotated 40 degrees about the x axis.
  • the accelerometer is used to measure acceleration.
  • The current moving speed of the camera can be determined from the initial speed and the acceleration, the target moving distance of the subject can be calculated from the moving speed and the moving time, and the first subject area can be moved in the movement direction by the target moving distance. The moved first subject area can be used directly as the subject prediction area in the current browse image, or it can first be processed graphically, for example enlarged, and then used as the subject prediction area in the current browse image.
  • In this way the target moving distance of the subject is calculated quickly, so that the area where the subject is located can be predicted quickly and accurately, as sketched below.
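  • A hedged sketch of this gyroscope-and-accelerometer prediction, assuming the mapping from terminal motion to image pixels has already been calibrated (the scale factor, function and parameter names are illustrative, not from the patent); it follows the simplified speed-times-time relation described above:

```python
import numpy as np

def predict_subject_area(first_area, move_direction, accel, v0, dt,
                         pixels_per_metre=1000.0, enlarge=1.2):
    """Shift the first subject area along the terminal's movement direction.

    first_area:       (x, y, w, h) of the subject in the first captured image.
    move_direction:   unit vector (dx, dy) in image coordinates derived from the
                      gyroscope (how angular motion maps to pixels is a calibration
                      detail the patent leaves open).
    accel, v0, dt:    acceleration, initial speed and elapsed time; the current
                      speed is estimated as v = v0 + accel * dt.
    pixels_per_metre: assumed calibration factor from metres to pixels.
    enlarge:          optional factor that grows the shifted box into the
                      subject prediction area.
    """
    v = v0 + accel * dt                             # current moving speed
    target_distance = v * dt * pixels_per_metre     # target moving distance, in pixels
    x, y, w, h = first_area
    dx, dy = np.asarray(move_direction, dtype=float)
    cx = x + w / 2 + dx * target_distance           # shifted box centre
    cy = y + h / 2 + dy * target_distance
    w, h = w * enlarge, h * enlarge                 # enlarge so small errors stay inside
    return (cx - w / 2, cy - h / 2, w, h)
```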
  • In another embodiment, step 204 includes: acquiring multiple pieces of position movement data of the current shooting terminal at different moments; determining the terminal movement trajectory of the current shooting terminal according to the multiple pieces of position movement data; determining the corresponding subject movement trajectory according to the terminal movement trajectory; and determining the subject prediction area of the first subject area in the current browse image according to the subject movement trajectory.
  • multiple position movement data of the current camera terminal at different times can be acquired through the positioning device.
  • the position data of the current camera terminal at different times can be collected, and the position coordinates corresponding to each position data can be formed according to the established coordinate system.
  • A function-fitting algorithm applied to these coordinates determines the direction corresponding to the position data and forms the terminal movement trajectory of the current shooting terminal.
  • the terminal movement trajectory can be used as the subject movement trajectory, or the matching relationship between the terminal movement and the subject movement can be obtained, and the corresponding subject movement trajectory can be determined according to the terminal movement trajectory, wherein the matching relationship can be measured by calibration.
  • the position data of the current shooting terminal at different moments are input into the trained neural network model.
  • the trained neural network model can determine the terminal movement trajectory of the current shooting terminal through the relationship between the subject position of the front and rear frames.
  • the subject movement track predicts the position of the subject in the next frame, thereby obtaining the subject prediction area.
  • the subject prediction area is determined from the movement trajectory, and the subject movement trajectory can be obtained accurately when enough historical position movement data is available, so the subject prediction area can be determined quickly, which is convenient and efficient; a minimal extrapolation sketch follows.
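  • A hedged sketch of trajectory-based prediction; a per-axis linear fit is only one of many possible function-fitting choices, and the names below are illustrative, not from the patent:

```python
import numpy as np

def predict_next_position(positions, times, t_next):
    """Fit a simple per-axis linear trajectory to past subject positions and
    extrapolate it to the next frame time.

    positions: array of shape (N, 2) with past (x, y) centres of the subject area.
    times:     array of shape (N,) with the capture times of those frames.
    Returns the predicted (x, y) centre at time t_next; the prediction area can
    then be placed around this centre.
    """
    positions = np.asarray(positions, dtype=float)
    times = np.asarray(times, dtype=float)
    kx, bx = np.polyfit(times, positions[:, 0], deg=1)   # x(t) ~ kx * t + bx
    ky, by = np.polyfit(times, positions[:, 1], deg=1)   # y(t) ~ ky * t + by
    return kx * t_next + bx, ky * t_next + by
```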
  • In one embodiment, step 206 includes: obtaining a visible light image of the currently browsed image; generating a center weight map corresponding to the visible light image, where the weight values of the center weight map gradually decrease from the center to the edges; inputting the visible light image and the center weight map into a subject detection model to obtain a subject area confidence map, where the subject detection model is obtained by training in advance on visible light images, center weight maps, and corresponding labeled subject mask images of the same scenes; and determining the second subject area in the visible light image according to the subject area confidence map and the subject prediction area.
  • the visible light image refers to a color image.
  • the visible light image corresponding to the currently viewed image can be obtained by previewing and shooting a certain scene through the color camera, and the ISP processor or the central processing unit can obtain the visible light image.
  • the central weight map refers to a map used to record the weight value of each pixel in the visible light image.
  • the weight value recorded in the center weight map gradually decreases from the center to the four sides, that is, the center weight is the largest, and the weight gradually decreases toward the four sides.
  • the center weight map indicates that the weight value gradually decreases from the center pixel of the visible light image to the edge pixels of the image.
  • the ISP processor or the central processing unit can generate a corresponding central weight map according to the size of the visible light map.
  • the weight value of the center weight map gradually decreases from the center to the four sides.
  • the center weight map can be generated using a Gaussian function, a first-order equation, or a second-order equation.
  • the Gaussian function may be a two-dimensional Gaussian function.
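  • A minimal sketch of generating such a center weight map with a two-dimensional Gaussian (the sigma_scale value is an illustrative choice, not taken from the patent):

```python
import numpy as np

def center_weight_map(height, width, sigma_scale=0.5):
    """Generate a center weight map the same size as the visible light image.

    Weights are largest at the center and fall off toward the edges following a
    two-dimensional Gaussian; sigma_scale controls how quickly they decay.
    """
    ys = np.arange(height) - (height - 1) / 2.0
    xs = np.arange(width) - (width - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    sigma_y = sigma_scale * height
    sigma_x = sigma_scale * width
    weights = np.exp(-((xx / sigma_x) ** 2 + (yy / sigma_y) ** 2) / 2.0)
    return weights / weights.max()    # peak weight 1.0 at the center, smallest at the edges
```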
  • the subject detection model is obtained by pre-collecting a large amount of training data, and inputting the training data to the subject detection model containing the initial network weights for training.
  • Each set of training data includes visible light map, center weight map and annotated subject mask map.
  • the visible light map and the center weight map are used as the input of the trained subject detection model, and the labeled subject mask map is used as the ground truth that the trained subject detection model expects to output.
  • the subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
  • the ISP processor or the central processing unit can input the visible light map and the center weight map into the subject detection model, and the subject area confidence map can be obtained by performing the detection.
  • the subject area confidence map is used to record the probability that each pixel belongs to a recognizable subject. For example, the probability that a certain pixel belongs to a person is 0.8, the probability of a flower is 0.1, and the probability of the background is 0.1.
  • the ISP processor or central processing unit can determine the subject area in the visible light map according to the positional relationship between the subject area confidence map and the subject prediction area.
  • the confidence values of the subject area confidence map in or around the subject prediction area can be weighted upward, and the subject with the highest confidence is then selected as the subject in the visible light image. If there is only one subject, that subject is used as the target subject; if there are multiple subjects, one or more of them can be selected as target subjects as required, so as to obtain one or more second subject areas. A sketch of this weighting follows.
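  • A hedged sketch of weighting the confidence map inside the prediction area and keeping the most confident pixels as the second subject area (the boost and threshold values are illustrative, not specified by the patent):

```python
import numpy as np

def select_second_subject_area(confidence_map, prediction_box, boost=1.5, threshold=0.5):
    """Weight the subject area confidence map inside the prediction area and
    return a binary mask of the second subject area.

    confidence_map: (H, W) per-pixel subject probabilities from the detection model.
    prediction_box: (x, y, w, h) subject prediction area in the current browse image.
    """
    weighted = confidence_map.copy()
    x, y, w, h = (int(round(v)) for v in prediction_box)
    weighted[y:y + h, x:x + w] *= boost       # raise confidence inside the prediction area
    weighted = np.clip(weighted, 0.0, 1.0)
    return weighted >= threshold              # pixels kept as the second subject area
```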
  • In one embodiment, step 208 includes: establishing a movement coordinate system of the current photographing terminal with the optical axis as the Z axis; when the position movement data includes movement along the Z axis, determining the current movement direction of the current photographing terminal along the Z axis according to the position movement data; and driving the lens to move by a first focusing distance in the current movement direction to reach the first position, the first focusing distance being less than a preset threshold.
  • the current movement direction is the positive or negative direction of the Z axis, and the lens is driven to move in the same direction as the current movement direction by the first focusing distance to reach the first position.
  • the preset threshold can be customized, for example set to 5, which ensures that the pre-move does not overshoot the target focus position while still improving focusing efficiency.
  • the method further includes: when the position movement data includes movement along the Z axis, determining the current movement distance of the current shooting terminal along the Z axis according to the position movement data, and driving the lens to move in the current movement direction by a second focusing distance matched to the current movement distance to reach the first position.
  • The matching relationship between the Z-axis position movement data and the focus movement distance can be calibrated in advance. For example, if the imaging is clear when the current terminal moves a distance x along the Z axis and the lens moves a distance y in the same direction, the pair (x, y) is recorded as a first matching relationship pair between the Z-axis position movement data and the focus movement distance. Multiple matching relationship pairs can be recorded in this way, and the matching relationship between the Z-axis position movement data and the focus movement distance is generated from these pairs.
  • This matching relationship is used when the current movement distance of the Z axis is known, and the focus movement distance corresponding to the lens is calculated, so that the lens can be driven to move in the current movement direction by a second focus distance matching the current movement distance to the first position.
  • In this way, the focus distance that the lens needs to move for a given terminal movement can be determined quickly and accurately; a minimal calibration sketch follows.
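  • A hedged sketch of building and using such a matching relationship; the calibration pairs and the choice of a linear fit are illustrative assumptions, since the patent only requires that a mapping be generated from multiple calibrated pairs:

```python
import numpy as np

# Hypothetical calibration pairs: terminal movement along the Z axis (mm) versus the
# lens movement (motor codes) that kept the image in focus.
calibration_pairs = [(1.0, 4.0), (2.0, 8.5), (4.0, 16.0), (8.0, 33.0)]

def fit_matching_relation(pairs):
    """Fit a simple linear matching relationship from the recorded (x, y) pairs."""
    z_moves, lens_moves = np.asarray(pairs, dtype=float).T
    slope, intercept = np.polyfit(z_moves, lens_moves, deg=1)
    return slope, intercept

def second_focus_distance(current_z_movement, relation):
    """Look up the focus distance matching the current Z-axis movement."""
    slope, intercept = relation
    return slope * current_z_movement + intercept

relation = fit_matching_relation(calibration_pairs)
print(second_focus_distance(3.0, relation))   # lens movement matching a 3.0 mm Z-axis move
```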
  • In one embodiment, step 210 includes: obtaining the corresponding phase difference value according to the second subject area; determining the first in-focus position of the lens according to the phase difference value, and driving the lens to move from the first position to the first in-focus position; and driving the lens to scan by the preset focus distance to determine the target focus position, and driving the lens to the target focus position.
  • the first in-focus position of the lens can be acquired through phase focusing.
  • phase focusing achieves autofocus by detecting the phase offset of the captured image, which specifically includes: acquiring the corresponding phase difference value according to the second subject area, and determining the first in-focus position of the lens according to the phase difference value.
  • In phase focusing, pixels dedicated to phase detection are installed in the image sensor in pairs. Two images are formed by the two pixels of each pair, and the position of the object is determined from the phase relationship between the two images, so that the in-focus position of the lens can be found quickly. Since the first in-focus position found by phase focusing only falls within a range of positions where the image is acceptably clear, it is often necessary to search further for the lens position that makes the image sharper. A rough sketch of estimating the phase difference is given below.
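  • A hedged sketch of estimating the phase difference from the left/right phase-detection signals of the second subject area; the sum-of-absolute-differences search and the gain mapping the phase difference to a lens position are assumptions, as the actual conversion is module-specific calibration:

```python
import numpy as np

def phase_difference(left_signal, right_signal, max_shift=32):
    """Return the pixel shift between the two images formed by the left/right
    phase-detection pixels (the shift with the smallest mean absolute difference)."""
    left = np.asarray(left_signal, dtype=float)
    right = np.asarray(right_signal, dtype=float)
    best_shift, best_cost = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        cost = np.abs(left - np.roll(right, shift)).mean()
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

def first_focus_position(current_position, pd_value, pd_to_lens_gain=2.0):
    """Map the phase difference to a lens position; pd_to_lens_gain stands in for
    the module-specific calibration the patent does not detail."""
    return current_position + pd_to_lens_gain * pd_value
```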
  • a more precise focus position can be determined by a fine scan.
  • the preset focus distance refers to the distance that the lens is driven to move.
  • the preset focus distance usually takes a relatively small value to ensure that the distance the lens moves does not exceed the target focus position.
  • For example, suppose the first in-focus position is 260 and the target focus position is 400. The preset focus distance may be 5, so after driving the lens to move by the preset focus distance once, the lens position is 265; the lens can be moved multiple times in this way to determine the target focus position.
  • the precise scanning process may include: driving the lens to move by the preset focus distance, and obtaining the focus value (Focus Value, FV) of the imaged image each time the lens moves by the preset focus distance; and determining the second in-focus position according to the obtained focus values.
  • the focus value refers to the value of image clarity. Generally, the larger the focus value, the clearer the image; the smaller the focus value, the more blurred the image.
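  • A hedged sketch of this fine scan; the variance-of-Laplacian sharpness measure and the capture_at callback are illustrative assumptions, since the patent only states that a larger focus value means a clearer image:

```python
import cv2

def focus_value(subject_crop_gray):
    """One common sharpness measure: variance of the Laplacian of the subject area."""
    return cv2.Laplacian(subject_crop_gray, cv2.CV_64F).var()

def fine_scan(start_position, capture_at, preset_focus_distance=5, max_steps=40):
    """Step the lens by the preset focus distance, record the focus value after each
    step, and stop once the value starts to drop.

    capture_at(position) is a hypothetical callback that drives the lens to
    `position` and returns a grayscale crop of the second subject area.
    """
    position = start_position
    best_position, best_fv = position, focus_value(capture_at(position))
    for _ in range(max_steps):
        position += preset_focus_distance
        fv = focus_value(capture_at(position))
        if fv <= best_fv:
            break                          # past the peak: the previous position was sharpest
        best_position, best_fv = position, fv
    return best_position                   # estimate of the target focus position
```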
  • a focus tracking device includes a historical image subject area acquisition module 302, a subject prediction area module 304, a second subject area determining module 306, a first distance driving module 308, and a focusing module 310, wherein:
  • the historical image subject area acquisition module 302 is configured to acquire the first subject area obtained by subject detection in the first captured image.
  • the subject prediction area module 304 is configured to obtain the position movement data of the current shooting terminal, and perform a matching movement on the position of the first subject area according to the position movement data to determine the subject prediction area in the currently browsed image.
  • the second subject area determining module 306 is configured to perform subject detection on the current browse image according to the subject prediction area to obtain the second subject area.
  • the first distance driving module 308 is configured to, when it is determined from the position movement data that the current photographing terminal is moving in the optical axis direction, drive the lens to move in the same direction as the movement of the current photographing terminal along the optical axis to the first position.
  • the focusing module 310 is configured to focus on the second subject area using the first position as a starting point, and take an image at the target focusing position.
  • the subject prediction area module 304 is also used to: obtain the gyroscope data of the current photographing terminal; determine the movement direction of the current photographing terminal according to the gyroscope data; obtain the accelerometer data of the current photographing terminal; determine the current moving speed of the photographing terminal according to the accelerometer data; calculate the target moving distance of the subject according to the moving speed; and move the first subject area in the movement direction by the target moving distance, determining the subject prediction area in the current browse image according to the moved first subject area.
  • the subject prediction area module 304 is also used to obtain multiple pieces of position movement data of the current photographing terminal at different moments, determine the terminal movement trajectory of the current photographing terminal according to the multiple pieces of position movement data, determine the corresponding subject movement trajectory according to the terminal movement trajectory, and determine the subject prediction area of the first subject area in the current browse image according to the subject movement trajectory.
  • the second subject area determining module 306 is also used to obtain the visible light image of the currently browsed image; generate a center weight map corresponding to the visible light image, where the weight values of the center weight map gradually decrease from the center to the edges; input the visible light image and the center weight map into the subject detection model to obtain a subject area confidence map, where the subject detection model is obtained by training in advance on visible light images, center weight maps, and corresponding labeled subject mask images of the same scenes; and determine the second subject area in the visible light image according to the subject area confidence map.
  • the first distance driving module 308 is also used to establish the movement coordinate system of the current shooting terminal with the optical axis direction as the Z axis; when the position movement data includes the movement of the Z axis, it is determined according to the position movement data The current movement direction of the current camera terminal on the Z axis; the lens is driven to move in the current movement direction by a first focusing distance to reach the first position, and the first focusing distance is less than a preset threshold.
  • the first distance driving module 308 is further configured to, when the position movement data includes movement along the Z axis, determine the current movement distance of the current photographing terminal along the Z axis according to the position movement data, and drive the lens to move in the current movement direction by a second focusing distance matching the current movement distance to reach the first position.
  • the focusing module 310 is further configured to obtain the corresponding phase difference value according to the second subject area; determine the first in-focus position of the lens according to the phase difference value, and drive the lens to move from the first position to the first in-focus position; and drive the lens to scan by the preset focus distance to determine the target focus position, and drive the lens to the target focus position.
  • Each module in the above focusing device can be implemented in whole or in part by software, hardware and a combination thereof.
  • the foregoing modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
  • Fig. 5 is a schematic diagram of the internal structure of a terminal device in one of the embodiments.
  • the terminal device includes a processor and a memory connected via a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire terminal device.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and computer readable instructions.
  • the computer-readable instructions can be executed by the processor to implement a focus tracking method provided in the following embodiments.
  • the internal memory provides a cache operating environment for the operating system and computer readable instructions in the non-volatile storage medium.
  • the terminal device can be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • each module in the focus tracking device may be in the form of computer readable instructions.
  • the computer-readable instructions can be run on a terminal or server.
  • the program module formed by the computer readable instructions can be stored in the memory of the terminal or the server.
  • one or more processors included in the terminal device execute the computer-readable instructions stored in the memory to implement the focus tracking methods in the foregoing embodiments.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the focus tracking method in each of the foregoing embodiments.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A focus tracking method and apparatus, a terminal device, and a computer-readable storage medium. The method comprises the steps of: obtaining a first subject area obtained by subject detection in a first captured image; obtaining position movement data of the current photographing terminal, and performing a matched movement on the position of the first subject area according to the position movement data so as to determine a subject prediction area in the current browse image; performing subject detection on the current browse image according to the subject prediction area to obtain a second subject area; when it is determined, according to the position movement data, that the current photographing terminal is moving in an optical axis direction, driving a lens to move to a first position in the same direction according to the movement direction of the current photographing terminal in the optical axis direction; and focusing on the second subject area using the first position as a starting point, and capturing an image at a target focus position.
PCT/CN2020/097616 2019-06-28 2020-06-23 Procédé et appareil de suivi de mise au point, équipement terminal, et support d'enregistrement lisible par ordinateur WO2020259474A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910571895.7A CN110248097B (zh) 2019-06-28 2019-06-28 追焦方法、装置、终端设备、计算机可读存储介质
CN201910571895.7 2019-06-28

Publications (1)

Publication Number Publication Date
WO2020259474A1 (fr)

Family

Family ID: 67890100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097616 WO2020259474A1 (fr) 2019-06-28 2020-06-23 Procédé et appareil de suivi de mise au point, équipement terminal, et support d'enregistrement lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN110248097B (fr)
WO (1) WO2020259474A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132646A (zh) * 2023-10-26 2023-11-28 湖南自兴智慧医疗科技有限公司 基于深度学习的分裂相自动对焦系统

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110650291B (zh) * 2019-10-23 2021-06-08 Oppo广东移动通信有限公司 目标追焦方法和装置、电子设备、计算机可读存储介质
CN112866546B (zh) * 2019-11-12 2022-09-27 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质
CN112866510B (zh) * 2019-11-12 2022-06-10 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质
CN112866542B (zh) * 2019-11-12 2022-08-12 Oppo广东移动通信有限公司 追焦方法和装置、电子设备、计算机可读存储介质
CN111556248B (zh) * 2020-05-09 2021-09-03 Tcl移动通信科技(宁波)有限公司 拍摄方法、装置、存储介质及移动终端
KR20220111526A (ko) * 2021-02-02 2022-08-09 자이메드 주식회사 실시간 생체 이미지 인식 방법 및 장치
CN113067981B (zh) * 2021-03-25 2022-11-29 浙江大华技术股份有限公司 相机的焦距调整方法和相机
CN113724338B (zh) * 2021-08-31 2024-05-03 上海西井科技股份有限公司 基于球台拍摄移动对象的方法、系统、设备及存储介质
CN114554086B (zh) * 2022-02-10 2024-06-25 支付宝(杭州)信息技术有限公司 一种辅助拍摄方法、装置及电子设备
CN115334240B (zh) * 2022-08-11 2024-02-20 深圳传音控股股份有限公司 图像拍摄方法、智能终端及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9742980B2 (en) * 2013-11-01 2017-08-22 Canon Kabushiki Kaisha Focus control apparatus and control method therefor
CN104573715B (zh) * 2014-12-30 2017-07-25 百度在线网络技术(北京)有限公司 图像主体区域的识别方法及装置
JP6508954B2 (ja) * 2015-01-28 2019-05-08 キヤノン株式会社 撮像装置、レンズユニット、撮像装置の制御方法、及びプログラム
CN107172352B (zh) * 2017-06-16 2020-04-24 Oppo广东移动通信有限公司 对焦控制方法、装置、计算机可存储介质和移动终端

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120182462A1 (en) * 2011-01-19 2012-07-19 Samsung Electronics Co., Ltd. Auto-focusing apparatus
CN106131415A (zh) * 2016-07-19 2016-11-16 广东欧珀移动通信有限公司 二维码图像扫描方法、装置及移动终端
CN107124557A (zh) * 2017-05-31 2017-09-01 广东欧珀移动通信有限公司 对焦方法、装置、计算机可读存储介质和终端
CN107566741A (zh) * 2017-10-26 2018-01-09 广东欧珀移动通信有限公司 对焦方法、装置、计算机可读存储介质和计算机设备
CN108259739A (zh) * 2017-12-29 2018-07-06 维沃移动通信有限公司 一种图像拍摄的方法、装置及移动终端
CN108712609A (zh) * 2018-05-17 2018-10-26 Oppo广东移动通信有限公司 对焦处理方法、装置、设备及存储介质
CN110248101A (zh) * 2019-07-19 2019-09-17 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132646A (zh) * 2023-10-26 2023-11-28 湖南自兴智慧医疗科技有限公司 基于深度学习的分裂相自动对焦系统
CN117132646B (zh) * 2023-10-26 2024-01-05 湖南自兴智慧医疗科技有限公司 基于深度学习的分裂相自动对焦系统

Also Published As

Publication number Publication date
CN110248097B (zh) 2021-02-23
CN110248097A (zh) 2019-09-17

Similar Documents

Publication Publication Date Title
WO2020259474A1 (fr) Procédé et appareil de suivi de mise au point, équipement terminal, et support d'enregistrement lisible par ordinateur
CN111147741B (zh) 基于对焦处理的防抖方法和装置、电子设备、存储介质
WO2020259179A1 (fr) Procédé de mise au point, dispositif électronique et support d'informations lisible par ordinateur
CN110428366B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN110248096B (zh) 对焦方法和装置、电子设备、计算机可读存储介质
WO2021057652A1 (fr) Procédé et appareil de focalisation, dispositif électronique et support de stockage lisible par ordinateur
CN110536057B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
EP3496383A1 (fr) Procédé, appareil et dispositif de traitement d'images
WO2020088133A1 (fr) Procédé et appareil de traitement d'image, dispositif électronique et support de stockage lisible par ordinateur
EP3480784B1 (fr) Procédé et dispositif de traitement d'images
EP3798975B1 (fr) Procédé et appareil d'identification de sujet, dispositif électronique et support d'enregistrement lisible par ordinateur
CN109712192B (zh) 摄像模组标定方法、装置、电子设备及计算机可读存储介质
CN110650291B (zh) 目标追焦方法和装置、电子设备、计算机可读存储介质
JP6577703B2 (ja) 画像処理装置及び画像処理方法、プログラム、記憶媒体
CN109963080B (zh) 图像采集方法、装置、电子设备和计算机存储介质
CN109544620A (zh) 图像处理方法和装置、计算机可读存储介质和电子设备
CN109951638A (zh) 摄像头防抖系统、方法、电子设备和计算机可读存储介质
US20220222830A1 (en) Subject detecting method and device, electronic device, and non-transitory computer-readable storage medium
CN109660718B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN108111768B (zh) 控制对焦的方法、装置、电子设备及计算机可读存储介质
CN113875219B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN111246100B (zh) 防抖参数的标定方法、装置和电子设备
CN109598764A (zh) 摄像头标定方法和装置、电子设备、计算机可读存储介质
WO2023236508A1 (fr) Procédé et système d'assemblage d'images basés sur une caméra ayant un réseau d'un milliard de pixels
CN109559352B (zh) 摄像头标定方法、装置、电子设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20833637

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20833637

Country of ref document: EP

Kind code of ref document: A1
