WO2022228259A1 - Target tracking method and related apparatus - Google Patents

Target tracking method and related apparatus

Info

Publication number: WO2022228259A1
Authority: WIPO (PCT)
Prior art keywords: electronic device, target, tracking, target subject, camera
Application number: PCT/CN2022/088093
Other languages: English (en), French (fr)
Inventors: 张俪耀, 黄雅琳, 刘蒙, 张超
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 华为技术有限公司
Priority to EP22794730.6A (published as EP4322518A4)
Publication of WO2022228259A1


Classifications

    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/633: Control of cameras or camera modules by using electronic viewfinders, for displaying additional information relating to control or operation of the camera
    • H04N23/6842: Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the scanning position, e.g. windowing
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04M1/72454: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H04M2250/22: Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • H04M2250/52: Details of telephonic subscriber devices including functional features of a camera

Definitions

  • the present application relates to the field of electronic technologies, and in particular, to a target tracking method and related devices.
  • unlike an offline program, the electronic device cannot cache all the video image information and use the motion paths of future frames to give an ideal (taking both tracking and stabilization into account) output frame position for the current frame.
  • the embodiments of the present application provide a target tracking method and a related device, which can stably track a target body.
  • the present application provides a target tracking method, the method comprising: an electronic device enables a target tracking mode; the electronic device determines a target subject to be tracked according to an original image collected by a camera of the electronic device; the electronic device displays a first tracking screen in a display area, the first tracking screen includes the target subject, and the target subject is located at a first position in the viewing area of the camera; when the target subject moves, the electronic device displays a second tracking screen in the display area, and the second tracking screen shows that the target subject is located at a second position in the viewing area of the camera.
  • the electronic device can perform target tracking on the collected original image. After the electronic device determines the target subject in the original image, when the target subject moves from the first position to the second position, the electronic device always displays the target subject in the display area.
  • the original image collected by the camera refers to all image data that can be collected by the camera.
  • the embodiments of the present application track the target subject based on the image range of the original image, thereby realizing a larger tracking range.
  • the position of the electronic device does not move, and the viewing area of the camera remains unchanged.
  • tracking the target subject in this embodiment of the present application may mean tracking a moving target subject while the electronic device stays still, which improves the stability with which the target subject is displayed in the output screen.
  • the moving target subject can also be tracked while the electronic device is in motion, with the target subject always displayed in the output screen; alternatively, a non-moving target subject can be tracked while the electronic device is moving, keeping the target subject in the output screen.
  • the original image is not subjected to anti-shake or deblurring processing, or the original image includes all objects in the viewing area of the camera.
  • the embodiments of the present application track the target subject based on the image range of the original image, thereby realizing a larger tracking range.
  • the electronic device can also track the target subject.
  • the method further includes: the electronic device displays the original image captured by the camera in a preview frame, and the preview frame occupies a part or all of the display area.
  • the preview frame may be a picture-in-picture frame, which is superimposed and displayed on the display area, so that the user can more intuitively see the comparison between the original image and the tracking image in the display area.
  • the preview frame can also occupy the entire display area; for example, the preview frame is first displayed over the whole display area, and is then superimposed on the display area.
  • the electronic device determining the target subject to be tracked according to the original image collected by the camera of the electronic device includes: the electronic device receives a first operation of the user in the preview frame, where the first operation indicates the target subject selected by the user; the electronic device determines the target subject to be tracked according to the first operation.
  • the original image captured by the camera is displayed in the preview frame, and the image displayed in the preview frame includes the target subject.
  • the electronic device may determine the target subject based on the user's first operation in the preview frame, for example a tap on the position where the target subject is displayed. That is, the electronic device can determine the target subject from a user operation.
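As a concrete illustration of the tap-to-select step, the following is a minimal sketch (not taken from the patent) that maps a user tap inside the picture-in-picture preview frame to pixel coordinates in the camera's original image, where a tracker could then be initialised; the function name and parameters are hypothetical.

```python
# Hypothetical helper: convert a tap in the on-screen preview frame to
# coordinates in the camera's raw (original) image.
def tap_to_original_coords(tap_x, tap_y, preview_rect, orig_size):
    """preview_rect = (left, top, width, height) of the preview frame on
    screen; orig_size = (orig_w, orig_h) of the raw camera image."""
    left, top, pw, ph = preview_rect
    orig_w, orig_h = orig_size
    u = (tap_x - left) / pw          # normalised position inside the preview
    v = (tap_y - top) / ph
    return u * orig_w, v * orig_h    # same point in raw-image pixels
```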
  • the electronic device determining the target subject to be tracked according to the original image collected by the camera of the electronic device includes: the electronic device performs automatic target detection on the original image collected by the camera of the electronic device based on a preset target detection algorithm, so as to determine the target subject to be tracked.
  • the preset target detection algorithm may be directed at a specific category of objects, for example a target detection algorithm for detecting people, for detecting animals, for detecting objects, or for detecting moving objects, etc.
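The patent does not name a particular detector. As one hedged example, OpenCV's stock HOG pedestrian detector can stand in for a person-category "preset target detection algorithm"; any detector with the same box-plus-score output would fit.

```python
import cv2

# Stock OpenCV HOG person detector used here as a stand-in detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_target_subject(original_image):
    """Return the highest-scoring person box (x, y, w, h) in the raw
    frame, or None if no person is detected."""
    boxes, weights = hog.detectMultiScale(original_image, winStride=(8, 8))
    if len(boxes) == 0:
        return None
    return boxes[int(weights.argmax())]
```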
  • the preview frame further includes an output frame, and the image in the output frame corresponds to the picture displayed in the display area.
  • the output box is used to indicate which area of the original image corresponds to the picture displayed in the current display area.
  • the method further includes: the electronic device determines a guide point in the output frame, where the guide point indicates the display position of the target subject; the electronic device displays the target subject at the first position in the first tracking screen according to the guide point.
  • the guide point is used to determine the display position of the target subject in the output frame. If the guide point is the center point of the output frame, then in the first tracking screen the target subject is displayed at the center of the first tracking screen, and in the second tracking screen the target subject is displayed at the center of the second tracking screen. In this way, the electronic device can stably display the target subject at the position of the guide point, achieving the effect of stable tracking.
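A minimal sketch of this relationship (an assumption, not the patent's formula): if the guide point sits at a fixed offset inside the output frame, the crop origin that keeps the target subject on the guide point follows directly from the target's position in the original image.

```python
# Hypothetical helper: where must the output crop start so that the
# target subject lands exactly on the guide point?
def crop_origin_for_guide_point(target_xy, guide_xy_in_output):
    tx, ty = target_xy               # target position in the raw image
    gx, gy = guide_xy_in_output      # guide point offset inside the crop
    return tx - gx, ty - gy          # top-left corner of the output crop
```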
  • the electronic device determining the guide point in the output frame includes: the electronic device determines the guide point in the output frame according to a default setting, or the electronic device receives a second operation of the user, where the second operation indicates the position in the output box of the guide point selected by the user.
  • that is, the electronic device can determine the guide point through a default setting or through a user operation.
  • the electronic device displaying the target subject at the first position in the first tracking screen according to the guide point, or displaying the target subject at the second position in the second tracking screen, comprises: the electronic device determines the motion path of the target subject; the electronic device determines the difference polyline between the target subject and the guide point based on the motion path of the target subject; the electronic device determines the motion path of the background in the original image; the electronic device determines a smooth path based on the background motion path and the difference polyline; the electronic device warps the original image based on the smooth path; the electronic device displays the warped image in the display area, where the picture displayed in the display area corresponds to the image in the output box.
  • the algorithm principle by which the electronic device achieves stable tracking of the target subject is described here. Based on the idea that the foreground drives the background, the electronic device drives the background movement through the movement of the target subject, so that the target subject moves into the output screen. When solving the smooth path of the background, the electronic device determines the difference polyline between the motion path of the target subject and the guide point, and uses the difference polyline as a guidance reference term for smoothing the background, obtaining the smooth path of the background. The electronic device then warps the collected original image based on the smooth path, so that the target subject is displayed stably at the position of the guide point in the output frame.
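The patent describes the steps but not the solver. Below is a minimal one-axis sketch, assuming a least-squares formulation: the smooth path is solved in a single pass, pulled toward the background's motion path (anti-shake), pulled toward the difference polyline (so the target lands on the guide point), and penalised for large second differences (smoothness). The weights and the per-axis treatment are assumptions.

```python
import numpy as np

def solve_smooth_path(background_path, target_path, guide_point,
                      w_guide=1.0, w_smooth=10.0):
    """One image axis. background_path, target_path: (T,) arrays;
    guide_point: scalar position of the guide point on this axis.
    Returns the (T,) smooth path the output crop should follow."""
    T = len(background_path)
    # Difference polyline: crop offset at which the target subject sits
    # exactly on the guide point in the output frame.
    diff_polyline = np.asarray(target_path) - guide_point
    rows = [np.eye(T)]                              # follow the background
    rhs = [np.asarray(background_path)]
    rows.append(np.sqrt(w_guide) * np.eye(T))       # guidance reference term
    rhs.append(np.sqrt(w_guide) * diff_polyline)
    D2 = np.zeros((T - 2, T))                       # smoothness: penalise
    for t in range(T - 2):                          # second differences
        D2[t, t:t + 3] = (1.0, -2.0, 1.0)
    rows.append(np.sqrt(w_smooth) * D2)
    rhs.append(np.zeros(T - 2))
    smooth, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                                 rcond=None)
    return smooth
```

Because the solve happens once over the whole horizon, tracking and anti-shake share the same cropping boundary, which is the property the next paragraphs emphasise.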
  • because the electronic device solves the smooth path of the background in a single pass, taking both anti-shake smoothing and tracking into account simultaneously, the tracking of the target subject and the path smoothing of the background can share the same widest cropping boundary (that is, the boundary of the original image).
  • tracking of the target subject is realized while the background is smoothed, and both the followability and the smoothness of the tracking result are taken into account.
  • turning on the target tracking mode by the electronic device includes: the electronic device detects a third operation of the user, where the third operation includes an operation of increasing the zoom magnification or an operation on a switch with which the user directly turns on the target tracking mode.
  • the operation of increasing the zoom magnification indicates the display magnification selected by the user; the electronic device displays the first tracking picture or the second tracking picture in the display area according to the display magnification.
  • the camera is a telephoto camera.
  • the electronic device uses a telephoto camera to collect image data.
  • the increased zoom magnification is greater than the preset magnification.
  • the preset magnification may be, for example, 15 times.
  • when the second position is an edge position of the viewing area of the camera, the second tracking picture still includes the target subject. Because the electronic device solves the smooth path of the background in a single pass, considering anti-shake smoothing and tracking simultaneously, the tracking of the target subject and the path smoothing of the background can share the same widest cropping boundary (that is, the boundary of the original image). In this way, even if the target subject is at an edge position in the original image, the electronic device can still track it, which solves the problem that a target subject whose position is at the edge of the shooting area cannot be tracked even though the electronic device has captured its image.
  • embodiments of the present application provide an electronic device, including: one or more processors and one or more memories; the one or more memories are coupled with the one or more processors; the one or more memories are used to store computer program code, the computer program code comprising computer instructions; when the computer instructions are executed on the processor, the electronic device is caused to execute the target tracking method in any possible implementation of any of the above aspects.
  • an embodiment of the present application provides a computer storage medium, including computer instructions; when the computer instructions are run on an electronic device, the electronic device is caused to execute the target tracking method in any of the possible implementations of any of the above aspects.
  • an embodiment of the present application provides a target tracking method, the method including: starting target tracking; collecting an original image; determining a target subject to be tracked according to the original image; outputting information of a first tracking screen, where the first tracking screen includes the target subject and the target subject is located at a first position in the viewing area; and outputting information of a second tracking screen, where the second tracking screen shows that the target subject is located at a second position in the viewing area.
  • an embodiment of the present application provides a camera module, characterized in that it includes an input unit, an output unit, and at least one camera;
  • the input unit is used for starting target tracking according to the instruction of the electronic device
  • At least one camera is used to collect the original image, and determine the target subject to be tracked according to the original image
  • the output unit is used for outputting information of a first tracking picture, where the first tracking picture includes a target subject and the target subject is located at a first position in the framing area; and for outputting information of a second tracking picture, where the second tracking picture shows that the target subject is located at a second position in the framing area.
  • at least one camera includes a processing unit, and the at least one camera is further configured to determine, through the processing unit, the motion path of the target subject; the at least one camera is further configured to determine, through the processing unit, the difference polyline between the target subject and the guide point based on the motion path of the target subject; the at least one camera is also used for determining, through the processing unit, the motion path of the background in the original image; the at least one camera is also used for determining, through the processing unit, a smooth path based on the motion path of the background and the difference polyline; the at least one camera is also used for warping, through the processing unit, the original image based on the smooth path; the output unit is also used for outputting the warped image in the display area, where the picture displayed in the display area corresponds to the image in the output frame.
  • an embodiment of the present application provides a computer program product, which, when the computer program product runs on a computer, enables the computer to execute the target tracking method in any possible implementation manner of any one of the foregoing aspects.
  • FIGS. 1a and 1b are schematic diagrams of a target tracking method provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIGS. 3a and 3b are schematic diagrams of a group of application interfaces provided by an embodiment of the present application.
  • FIGS. 4a to 4f are schematic diagrams of another group of application interfaces provided by an embodiment of the present application.
  • FIGS. 5a and 5b are schematic diagrams of another group of application interfaces provided by an embodiment of the present application.
  • FIGS. 6a and 6b are schematic diagrams of another group of application interfaces provided by an embodiment of the present application.
  • FIGS. 7a to 7d are schematic diagrams of another group of application interfaces provided by an embodiment of the present application.
  • FIGS. 8a to 8d are schematic diagrams of another group of application interfaces provided by an embodiment of the present application.
  • FIG. 9 is a method flowchart of a target tracking method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another target tracking method provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another target tracking method provided by an embodiment of the present application.
  • FIGS. 12a and 12b are schematic diagrams of another target tracking method provided by an embodiment of the present application.
  • FIG. 13 is a method flowchart of another target tracking method provided by an embodiment of the present application.
  • FIG. 14 is a method flowchart of yet another target tracking method provided by an embodiment of the present application.
  • "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more. The terms "middle", "left", "right", "upper", "lower", etc. indicate orientations or positional relationships.
  • anti-shake and tracking are two major technologies for realizing stable tracking of the target in the real-time collected image.
  • anti-shake means cropping the collected image based on a target point or target area so that the target point or target area always moves smoothly within a certain region of the screen, keeping the displayed picture stable; tracking means determining the target subject in the real-time captured image through algorithms such as subject detection and recognition or feature matching, and then cropping the captured image so that the target subject is always displayed in the output screen (for example, at the center position).
  • Anti-shake and tracking are separate algorithms. If tracking is implemented first and anti-shake second, even slight hand shake is magnified into a huge jump in the preview image at high magnification; therefore, executing the subject detection algorithm on the original input image without anti-shake greatly reduces the accuracy of detecting the target subject, and it is difficult to accurately identify the same subject across the previous and subsequent frames based on positions in the original input image.
  • the detection accuracy can be improved by executing the subject detection algorithm on the image that has been anti-shake.
  • the detection and movable range during tracking are limited, and it is difficult to utilize the larger boundary of the original input image.
  • the path of the target subject becomes unsmooth again due to jumps of the cropping frame, and secondary smoothing is required, resulting in more peak lag and delay.
  • the outermost image frame 1 is the original image collected by the electronic device 100 through the camera. Since, when collecting images, the electronic device 100 usually does not face the object to be photographed head-on, the electronic device 100 first needs to perform an image projection transformation (image warp) on the captured original image to make the image appear flat. For example, if the camera of the electronic device 100 captures a two-dimensional code image obliquely, the electronic device 100 needs to warp the captured two-dimensional code image first; a front-facing two-dimensional code image can then be obtained, thereby enabling identification of the QR code.
  • the dotted frame 2 is the image obtained after the electronic device 100 warps the original image.
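As a sketch of the image warp step (assuming OpenCV; the file name and corner points are hypothetical), a projective transform rectifies an obliquely captured planar target such as the two-dimensional code in the example above:

```python
import cv2
import numpy as np

img = cv2.imread("oblique_qr.jpg")                 # hypothetical input frame
# Corner points of the code as seen in the oblique frame (hypothetical).
src = np.float32([[120, 80], [520, 60], [560, 470], [90, 440]])
side = 400                                         # output square size
dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
M = cv2.getPerspectiveTransform(src, dst)          # 3x3 projective matrix
flat = cv2.warpPerspective(img, M, (side, side))   # front-facing image
```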
  • the electronic device 100 performs anti-shake processing on the warped image, and the image frame finally obtained after anti-shake cropping is image frame 3 in FIG. 1a.
  • the image range that can be used for tracking processing becomes the range shown in the image frame 3.
  • when the target subject is at position 1, even though the electronic device 100 has captured the target subject, the target subject cannot be tracked; that is, a target subject visible in the original image cannot be tracked because the usable image range is reduced after anti-shake processing. The electronic device 100 can track the target subject only when the target subject is at position 2. It can be seen that the electronic device 100 cannot utilize the larger range provided by the original image when tracking and detecting the target subject.
  • tracking the target subject by the electronic device 100 requires further cropping of the area where the target subject is located. Since the subject recognition in different image frames generally has positional deviations, the cropped area will shake. In order to prevent the cropped adjacent image frames from jumping due to different cropped regions, it is necessary to perform a second anti-shake smoothing on the cropped image.
  • path 1 is the motion path of the display image after secondary cropping relative to the original image
  • path 2 is the anti-shake smoothing path of the display image after secondary cropping under ideal conditions
  • Path 3 is the anti-shake smoothing path of the displayed image after the second cropping in the actual situation. It can be seen that such a processing method in fact causes peak lag, resulting in delayed image display and a poor user experience.
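The peak lag can be reproduced with a toy experiment (an illustration, not from the patent): in a real-time pipeline the second smoothing stage can only look at past frames, so a causal filter shifts every peak of the path later in time.

```python
import numpy as np

t = np.arange(80)
path = np.sin(t / 15.0)                 # crop path with a peak (cf. path 1)
kernel = np.ones(31) / 31.0             # 31-frame causal moving average
lagged = np.convolve(path, kernel)[:len(path)]   # cf. path 3
print("peak frame before/after smoothing:", path.argmax(), lagged.argmax())
# The smoothed peak arrives roughly 15 frames late: delayed display.
```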
  • the embodiment of the present application proposes a target tracking method based on path optimization.
  • the electronic device 100 drives the background movement through the movement of the target subject (foreground) to move the target subject into the output screen.
  • the electronic device 100 uses the moving path of the target body as a guide reference item for smoothing the background, so as to obtain the smooth path of the background.
  • the electronic device 100 warps the original image collected by the electronic device based on the smooth path, so that the target subject can be stably displayed in the output screen.
  • the electronic device 100 solves the smooth path of the background in a single pass, taking both anti-shake smoothing and tracking into account simultaneously, so that the tracking of the target subject and the path smoothing of the background can share the same widest cropping boundary (that is, the boundary of the original image).
  • tracking of the target subject is realized while the background is smoothed, and both the followability and the smoothness of the tracking result are taken into account.
  • target tracking can be performed on image data collected in real time. Tracking may mean tracking a moving object while the shooting device is not moving, with the moving object always displayed in the output screen; it may also mean tracking a moving object while the shooting device is in motion, with the moving object always displayed in the output screen; or tracking a non-moving object while the shooting device is in motion, with the non-moving object always displayed in the output screen.
  • the target subject can be tracked in real time without moving the photographing device, and a picture or video containing the target subject can be obtained.
  • the electronic device 100 includes a photographing device.
  • FIG. 2 shows a schematic structural diagram of an exemplary electronic device 100 provided by an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • SIM Subscriber identification module
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can call it directly from this memory. Repeated accesses are avoided and the waiting time of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the display screen 194 displays the interface content currently output by the system.
  • the content of the interface is a preview interface provided by the camera application.
  • the preview interface can display images captured by the camera 193 in real time, and the preview interface can also display images captured by the camera 193 in real time and processed by the GPU.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted to the camera photosensitive element through the lens; the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may include a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, and the like.
  • the electronic device 100 collects images in real time through the camera 193, and can use different cameras to collect images under different magnifications.
  • for example, at magnifications from 0.5x to 1x, an ultra-wide-angle camera can be used: the original image captured by the ultra-wide-angle camera is an image with a magnification of 0.5x, and the displayed image at a magnification of 0.5x to 1x is obtained by cropping that original image (the 0.5x image). At magnifications from 1x to 3.5x, a wide-angle camera can be used: the original image captured by the wide-angle camera is an image with a magnification of 1x, and the displayed image at a magnification of 1x to 3.5x is obtained by cropping that original image (the 1x image). At magnifications above 3.5x, a telephoto camera can be used: the original image captured by the telephoto camera is an image with a magnification of 3.5x, and displayed images at higher magnifications are obtained by cropping that original image (the 3.5x image).
  • the electronic device 100 can obtain a display image with any magnification by cropping the original image.
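A minimal sketch of this cropping rule (assumed behaviour, illustrative function): each physical camera has a base magnification, and any higher display magnification is produced by centre-cropping that camera's original image.

```python
def crop_for_magnification(orig_w, orig_h, base_mag, target_mag):
    """Centre crop (x, y, w, h) of the raw image that yields the requested
    display magnification, e.g. base_mag=1.0 (wide-angle camera) and
    target_mag=2.0 for a 2x display."""
    scale = base_mag / target_mag            # fraction of the frame kept
    w, h = int(orig_w * scale), int(orig_h * scale)
    x, y = (orig_w - w) // 2, (orig_h - h) // 2
    return x, y, w, h
```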
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal.
  • Speaker 170A, also referred to as the "horn", is used to convert audio electrical signals into sound signals.
  • the receiver 170B, also referred to as the "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 170C, also called the "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • the earphone jack 170D is used to connect wired earphones.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • the angular velocity of the electronic device 100 about three axes may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
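A small numeric sketch of that compensation (small-angle optics assumed; the patent gives no formula): the image-plane shift produced by a rotational shake is roughly the focal length times the tangent of the shake angle, which the lens then offsets by moving in the opposite direction.

```python
import math

def compensation_pixels(shake_angle_rad, focal_length_mm, pixel_pitch_um):
    """Image shift caused by a small rotational shake, in pixels."""
    shift_mm = focal_length_mm * math.tan(shake_angle_rad)
    return shift_mm * 1000.0 / pixel_pitch_um   # mm -> um -> pixels
```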
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of the electronic device, and can be used in applications such as horizontal/vertical screen switching and pedometers. In some optional embodiments of the present application, the acceleration sensor 180E may be used to capture the acceleration value generated when the user's finger touches the display screen (or the user's finger taps the rear side frame of the rear shell of the electronic device 100), and to transmit the acceleration value to the processor, so that the processor can identify through which part the user's finger entered the operation.
  • the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the temperature sensor 180J is used to detect the temperature.
  • Touch sensor 180K also called “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the keys 190 include a power-on key, a volume key, and the like.
  • Motor 191 can generate vibrating cues.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card. The SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the following describes an implementation form of the target tracking method provided in the present application on the display interface of the electronic device 100 in conjunction with application scenarios.
  • the display interface for enabling the target tracking function of the electronic device 100 is introduced.
  • Figure 3a illustrates an exemplary user interface on an electronic device for displaying a list of applications.
  • Figure 3a includes a status bar 201 and a display interface 202. The status bar 201 may include one or more of: one or more signal strength indicators 203 for mobile communication signals (also known as cellular signals), a signal strength indicator 204 for wireless fidelity (Wi-Fi) signals, a Bluetooth indicator 205, a battery status indicator 206, and a time indicator 207.
  • a Bluetooth indicator 205 is displayed on the display interface of the electronic device.
  • the display interface 202 displays a plurality of application icons.
  • the display interface 202 includes an application icon of the camera 208 .
  • when the electronic device detects a user operation acting on the application icon of the camera 208, the electronic device displays the application interface provided by the camera application.
  • the application interface of the camera 208 is shown in FIG. 3b.
  • the application interface may include: a display area 210 , a magnification adjustment area 211 , a function bar 212 , a mode selection area 213 , a gallery icon 214 , a shooting icon 215 , and a switching icon 216 .
  • the display image in the display area 210 is an image captured by the electronic device through the camera in real time. At this time, the display image in the display area 210 includes a part of a person's body, a tree, and a bell on the tree.
  • the camera currently used by the electronic device may be the default camera set by the camera application, and the camera currently used by the electronic device may also be the camera used when the camera application was closed last time.
  • the magnification adjustment area 211, which may also be referred to as a focal length adjustment area, is used to adjust the shooting focal length of the camera, thereby adjusting the display magnification of the display screen of the display area 210.
  • the magnification adjustment area 211 includes an adjustment slider 211A, the adjustment slider 211A is used to indicate the display magnification, and the adjustment slider 211A is currently 5x, indicating that the current display magnification is 5 times.
  • the user can zoom in or zoom out the display screen of the display area 210 by sliding the adjustment slider 211A in the magnification adjustment area 211 .
  • by sliding the adjustment slider 211A upward, the display magnification of the image in the display area 210 is enlarged, so that the person visible at the current 5x magnification may no longer be in the display area; by sliding the adjustment slider 211A downward, the display magnification of the image in the display area 210 is reduced, so that the person at the current 5x magnification can be completely displayed in the display area.
  • the user can also use other shortcuts to adjust the display magnification of the display image in the display area 210 .
  • spreading two fingers apart enlarges the display magnification, and pinching two fingers together reduces the display magnification.
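A minimal sketch of the gesture mapping (an assumption; the actual mapping is not specified in the text): the pinch gesture's scale factor multiplies the current display magnification, clamped to the supported zoom range.

```python
def apply_pinch(current_mag, pinch_scale, min_mag=0.5, max_mag=50.0):
    """pinch_scale > 1 when fingers spread apart, < 1 when pinched."""
    return max(min_mag, min(max_mag, current_mag * pinch_scale))
```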
  • the function bar 212 is used to provide shortcut functions of the camera application, including, for example, enabling Smart Vision (icon 212A), switching flash (icon 212B), enabling AI Master of Photography (icon 212C), switching color mode (icon 212D), and enabling camera settings interface (icon 212E), etc.
  • the mode selection area 213 is used to provide different shooting modes. Depending on the shooting mode selected by the user, the cameras and shooting parameters enabled by the electronic device differ. The modes can include night scene mode, portrait mode, photo mode, video mode, professional mode, and more. In FIG. 3b, the icon of the photographing mode is marked to prompt the user that the current mode is the photographing mode. Among them:
  • in night scene mode, the electronic device 100 can improve the ability to present details in bright and dark parts, control noise, and present more picture details.
  • in photographing mode, the electronic device 100 adapts to most photographing scenarios and can automatically adjust photographing parameters according to the current environment.
  • in video recording mode, the electronic device 100 can be used to capture a video.
  • the electronic device may display other selectable modes, such as a panorama mode (the electronic device automatically stitches multiple continuously taken photos into one photo, achieving the effect of widening the picture's viewing angle), an HDR mode (the electronic device automatically takes three photos in succession, underexposed, normally exposed, and overexposed, and combines the best parts into one photo), and so on.
  • when detecting a user operation acting on the icon of any mode in the mode selection area 213 (e.g., night scene mode, photographing mode, video recording mode, etc.), the electronic device 100 can enter the corresponding mode in response to the operation.
  • the image displayed in the display area 210 is the image processed in the current mode.
  • the mode icons in the mode selection area 213 are not limited to virtual icons; the corresponding modes can also be selected through physical buttons deployed on the photographing device/electronic device.
  • the gallery icon 214 when a user operation acting on the gallery icon 214 is detected, in response to the operation, the electronic device may enter a gallery of the electronic device 100, and the gallery may include photos and videos that have been taken.
  • the gallery icon 214 may be displayed in different forms, for example, after the electronic device saves the image currently captured by the camera, the gallery icon 214 displays a thumbnail of the image.
  • the gallery can be entered through a user operation (eg, touch operation, gesture operation, etc.) on the gallery icon 214 .
  • the switch icon 216 can be used to switch between the front camera and the rear camera.
  • the shooting direction of the front camera is the same as the display direction of the screen of the electronic device used by the user, and the shooting direction of the rear camera is opposite to the display direction of the screen of the electronic device used by the user.
  • based on the user interface provided by the camera application in FIG. 3b, when the electronic device 100 receives a user operation on that user interface, the electronic device 100 triggers the target tracking mode and exemplarily displays the interface shown in FIG. 4a.
  • the user operation may be that the user slides the adjustment slider 211A in the magnification adjustment area 211 upward, and the display magnification of the display screen in the display area 210 is enlarged.
  • when the display magnification of the electronic device 100 is enlarged to no less than the preset magnification, the electronic device 100 triggers the target tracking mode and exemplarily displays the interface shown in Figure 4a.
  • the preset magnification may be, for example, 15 times.
  • the user operation may be that the user separates two fingers in the display area 210 to enlarge the display magnification.
  • when the display magnification is enlarged accordingly, the electronic device 100 triggers the target tracking mode and exemplarily displays the interface shown in Figure 4a.
  • the embodiment of the present application may also enable the electronic device 100 to trigger the target tracking mode in other manners.
  • Fig. 3b can also include a shortcut icon that triggers the target tracking mode, and the user operation can be a click operation for the shortcut icon.
  • the electronic device 100 receives the click operation, triggers the target tracking mode, and exemplarily displays the interface shown in Figure 4a.
  • FIG. 4a exemplarily shows an application interface for triggering the electronic device 100 to turn on the target tracking mode.
  • the application interface displayed by the electronic device 100 may be the application interface shown in FIG. 4a.
  • the adjustment slider 211A in the magnification adjustment area 211 is currently 15x, indicating that the current display magnification is 15 times.
  • the application interface shown in FIG. 4a includes a display area 220, a preview frame 221 and a guide slogan 222. Since the current display magnification is 15x, the image displayed in the display area 220 is magnified compared with that of the display area 210 in FIG. 3b.
  • the picture displayed in the preview frame 221 is the original image captured by the camera in real time.
  • the preview frame 221 is a display area in the display area 220 and is generally suspended on the display screen in the display area 220 .
  • the range of the image displayed in the preview frame 221 is larger than that of the image displayed at a magnification of 5x (the image displayed in the display area 210 in Fig. 3b).
  • the image displayed when the display magnification is 5 times is not the original image captured by the camera, but the image displayed after cropping the original image captured by the camera.
  • the image displayed in the preview frame is the original image collected by the camera in real time, indicating that the tracking range in this embodiment of the present application is the range of the original image collected by the camera.
  • the picture displayed in the preview frame 221 at this time includes a character (complete), a tree and a bell on the tree.
  • the picture in the dotted box 221B in the preview box 221 is the picture displayed in the display area 220, indicating which part of the picture captured by the camera in real time is currently displayed in the display area 220. Since the current display magnification is 15x, the image displayed in the display area 220 at this time includes the tree and the bell on the tree, but no person.
  • the exit icon 221A is used to close the preview frame 221. When the electronic device 100 detects a user operation on the exit icon 221A, the electronic device 100 closes the preview frame 221. Optionally, the electronic device 100 closes the preview frame 221 and the guide slogan 222 at the same time.
  • the guide slogan 222 provides an exemplary way of turning on the target tracking mode, and displays the text "click the dashed box to track the target" to prompt the user, and the dashed box is the dashed box 221B.
  • the dotted box 221B may indicate different information through different display forms (display color, display shape). For example, when the display color of the dotted box 221B is a first color, it means that the electronic device 100 has not currently detected a target subject; when the display color of the box 221B is a second color, it means that the electronic device 100 has detected the target subject.
  • the guide slogan 222 may display the text "Click the virtual box to track the target" to prompt the user.
  • clicking on the dotted frame (dotted frame 221B) to track the target is just a way to enable the target tracking mode, and has nothing to do with the display screen in the dotted frame 221B.
  • the target subject detected by the electronic device 100 is a person
  • the electronic device 100 tracks the person.
  • when the electronic device 100 detects the target subject, the electronic device 100 may highlight the target subject in the preview box 221, for example by displaying another dotted frame around the target subject, so as to prompt the user that the electronic device 100 has detected the target subject.
  • the electronic device 100 can enable the target tracking mode.
  • Fig. 4b exemplarily shows an application interface of a target tracking mode.
• FIG. 4a may also include a shortcut icon for enabling the target tracking mode. The electronic device 100 receives a click operation for the shortcut icon, starts the target tracking mode, and exemplarily displays the application interface shown in FIG. 4b.
  • the application interface shown in FIG. 4b includes a display area 230, a preview frame 221 and a guide slogan 223, wherein,
  • the picture displayed in the preview frame 221 is the original image captured by the camera in real time.
• the electronic device 100 turns on the target tracking mode, and the electronic device 100 recognizes the target subject in the image currently captured by the camera (that is, the person in the image displayed in the preview frame 221), and displays the solid line frame 221C in the preview frame 221, wherein the picture in the solid line frame 221C is the picture displayed in the display area 230, which indicates which part of the picture captured by the camera in real time is the picture displayed in the current display area 230.
• the solid line frame may also be referred to as a screen output box.
  • the character is displayed in the left area of the display area 230 at this time.
• the solid-line frame 221C is positioned at the left edge of the preview frame 221, and the target subject is in the leftward area of the solid-line frame 221C.
• the display position of the solid line frame 221C moves with the target subject.
  • the guide slogan 223 provides a way to exit the target tracking mode, prompting the user to "click the solid box to exit the tracking", the solid box being the solid line box 221C.
• when the electronic device 100 detects a user operation on the solid line frame 221C, the electronic device 100 exits the target tracking mode.
• the electronic device 100 displays the user interface shown in FIG. 4a at this time; optionally, the electronic device 100 closes the preview frame 221 and the guide slogan 223 at the same time, and displays in the display area 230 the picture shown in the display area 220 in FIG. 4a.
• the display area 220 in FIG. 4a displays the image obtained by magnifying the image collected by the camera by 15 times, intercepting the most central part of the image collected by the camera.
  • the electronic device 100 acquires the image currently displayed in the display area 220 .
• after the target tracking mode is turned on, in the case where the images captured by the camera are the same image (the same image in the preview frame 221 in FIG. 4a and FIG. 4b), the image displayed in the display area 230 is the part of the image captured by the camera that includes the target subject.
• the electronic device 100 acquires the image currently displayed in the display area 230. In this way, the effect of obtaining the picture/video of the target subject without moving the electronic device 100 can be achieved, thereby realizing the tracking of the target subject.
• the embodiment of the present application does not limit the display interface or the user operation for activating the target tracking function.
• the electronic device 100 does not move, the target subject moves, and the electronic device 100 can track the target subject.
  • the picture displayed in the preview frame 221 is the original image captured by the camera in real time.
• the background (trees) in the preview frame 221 does not change, and it can be seen that the person in FIG. 4c has moved a distance to the right compared to FIG. 4b, but the position of the solid frame 221D relative to the preview frame 221 is the same as that of the solid frame 221C.
• the picture in the solid line frame 221D is the picture displayed in the display area 240, indicating which part of the picture captured by the camera in real time is currently displayed in the display area 240.
• the solid line frame 221D is positioned at the left edge of the preview frame 221, and the target subject is in the middle area of the solid line frame 221D.
  • the target subject continues to move, as shown in FIG. 4d.
• in FIG. 4d, compared with the image captured in FIG. 4c, since the electronic device 100 does not move, the background (trees) in the preview frame 221 does not change. It can be seen that the person in FIG. 4d has moved a distance to the right compared to FIG. 4c, and the solid line frame 221E moves with the target subject (the person); that is, the target subject is still in the middle area of the solid line frame 221E.
  • the picture in the solid line frame 221E is the picture displayed in the display area 250 . At this time, a person is displayed in the middle area of the display area 250 .
• the target subject continues to move, as shown in FIG. 4e. In FIG. 4e, compared with the image captured in FIG. 4d, since the electronic device 100 does not move, the background (trees) in the preview frame 221 does not change. It can be seen that the person in FIG. 4e has moved a distance toward the upper right compared to FIG. 4d, and the solid line frame 221F moves with the target subject (the person); that is, the target subject is still in the middle area of the solid line frame 221F.
  • the picture in the solid line frame 221F is the picture displayed in the display area 260 . At this time, the person is displayed in the middle area of the display area 260 .
  • the target subject continues to move, as shown in FIG. 4f .
• in FIG. 4f, compared with the image captured in FIG. 4e, since the electronic device 100 does not move, the background (trees) in the preview frame 221 does not change. It can be seen that the person in FIG. 4f has moved a distance to the right compared to FIG. 4e and has moved to the edge of the shooting range of the electronic device 100; the solid line frame 221G moves with the target subject (the person). Since the target subject is located at the right edge of the picture displayed in the preview frame 221, the solid line frame 221G is positioned at the right edge of the preview frame 221, and the target subject is in the rightward area of the solid line frame 221G.
  • the picture in the solid line frame 221G is the picture displayed in the display area 270 .
  • the person is displayed in the right area of the display area 270 .
  • the electronic device 100 tends to always display the target subject in the center of the display area.
• the electronic device 100 takes the center coordinate of the output frame as the guide point. After the electronic device 100 determines the guide point, it always guides the display of the target subject to the position of the guide point, thereby achieving the effect of displaying the target subject in the center of the display area.
  • the center and scale of the output frame are fixed relative to the original image captured by the electronic device 100.
• the size of the output frame is related to the display magnification of the electronic device 100: the larger the display magnification of the electronic device 100, the smaller the output frame relative to the original image; the smaller the display magnification of the electronic device 100, the larger the output frame relative to the original image.
• the guide point is a pixel coordinate point on the original image collected by the electronic device 100. If the guide point is selected in the output frame, the electronic device 100 can always display the target subject at the position of the guide point in the output frame.
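• As a rough illustration of the geometry above, the following sketch (hypothetical helper names, not from this application) computes the output-frame rectangle for a given display magnification, assuming the output frame stays centered on the original image and shrinks inversely with the magnification, and places the default guide point at the frame's center:

```python
import numpy as np

def output_frame_rect(img_w, img_h, magnification, base_magnification=1.0):
    """Return (x0, y0, w, h) of the output frame inside the original image.

    Assumes the output frame is centered and that its size shrinks inversely
    with the display magnification, as described above.
    """
    scale = base_magnification / magnification  # larger magnification -> smaller frame
    w, h = img_w * scale, img_h * scale
    return (img_w - w) / 2.0, (img_h - h) / 2.0, w, h

def default_guide_point(rect):
    """Default guide point: the center pixel coordinate of the output frame."""
    x0, y0, w, h = rect
    return np.array([x0 + w / 2.0, y0 + h / 2.0])

# At 15x on a 4000x3000 original image, the output frame covers 1/15 of each side.
rect = output_frame_rect(4000, 3000, magnification=15.0)
print(rect, default_guide_point(rect))
```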
• in FIGS. 4b to 4f, since the position of the target subject in FIG. 4b is at the left edge of the shooting range of the electronic device 100, combined with the current display magnification (15 times) and the display scale of the display area, the target subject cannot be displayed in the middle of the display area, and the target subject is displayed at the left edge of the display area.
  • the electronic device 100 tracks the target subject and displays the target subject in the center of the display area.
• when the target subject continues to move to the right and reaches the right edge of the shooting range of the electronic device 100, as shown in FIG. 4f, combined with the current display magnification (15 times) and the display scale of the display area, the target subject cannot be displayed in the middle of the display area, and the target subject is displayed at the right edge of the display area.
• the electronic device 100 tends to always display the target subject in the center of the display area. Due to factors such as slow data processing speed, the actual display will not be as idealized as shown in FIG. 4b to FIG. 4f.
• that is, the target subject may not be stably displayed in the middle of the display area; within a certain error range, the target subject may be displayed in the areas above, below, to the left of and to the right of the middle.
• when the electronic device 100 detects a user operation on the solid-line frame 221G, the electronic device 100 exits the target tracking mode.
• the electronic device 100 displays the user interface shown in FIG. 5b.
  • the picture displayed in the preview frame 221 is the original image captured by the camera in real time. Since the current display magnification is 15 times, the display image in the display area 280 is enlarged by 15 times compared to the display image in the preview frame 221 .
• the picture in the dashed box 221H in the preview box 221 is the picture displayed in the display area 280, indicating which part of the image captured by the camera in real time is the picture currently displayed in the display area 280.
  • the guide slogan 224 in Fig. 5b is "click the dashed box to track the target", and the dashed box is the dashed box 221H.
• when the electronic device 100 detects a user operation on the exit icon 221K, the electronic device 100 closes the preview frame 221; optionally, at this time, the electronic device 100 displays the user interface shown in FIG. 6b.
• the electronic device 100 closes the preview frame 221 and the guide slogan 224 at the same time, and displays in the display area 290 the picture shown in the display area 280 in FIG. 6a.
  • the electronic device 100 tends to always display the target subject in the middle area of the display area.
• the electronic device 100 can also limit the target subject to any area of the display area: the electronic device 100 can use any coordinate point of the display area as a guide point, and always guide the movement of the target subject to the guide point, so as to achieve the effect of displaying the target subject in any area of the display area.
• the electronic device 100 may determine the guide point based on the received user operation. For example, the electronic device 100 provides a guide point setting input box, and the electronic device 100 receives the pixel coordinate point input by the user as the guide point; for another example, when the electronic device 100 receives a click operation on the display area, the electronic device 100 determines the click position corresponding to the click operation as the position of the guide point.
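• As a small sketch of the click-based option (all names and the scaling are hypothetical; the application does not specify the mapping), a tap in the display area can be converted to a pixel-coordinate guide point on the original image, assuming the display area shows exactly the output frame:

```python
def guide_point_from_tap(tap_x, tap_y, out_x0, out_y0, out_w, out_h, disp_w, disp_h):
    """Map a tap position in the display area to a guide point expressed in
    original-image pixel coordinates, assuming the display area shows exactly
    the output frame (out_x0, out_y0, out_w, out_h) of the original image."""
    gx = out_x0 + tap_x / disp_w * out_w
    gy = out_y0 + tap_y / disp_h * out_h
    return gx, gy

# A tap at the left third of a 1080x2400 display, with a 15x output frame.
print(guide_point_from_tap(360, 1200, 1866.7, 1400.0, 266.7, 200.0, 1080, 2400))
```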
• FIGS. 7a to 7d are taken as examples below to illustrate the display interface in the case where the electronic device 100 limits the target subject to the left area of the display area.
• when the electronic device 100 detects a user operation on the dotted frame 221B, the electronic device 100 turns on the target tracking mode, and the electronic device 100 displays the user interface shown in FIG. 7a.
• the electronic device 100 determines a guide point in the display area 310, and always guides the movement of the target subject to the guide point, and the guide point is located in the left area of the display area.
  • the picture displayed in the preview frame 311 is an image captured by the camera in real time, and the display magnification is 5 times.
• the electronic device 100 turns on the target tracking mode, the electronic device 100 recognizes the target subject in the image currently captured by the camera (that is, the person in the image displayed in the preview frame 311), and determines the position of the solid line frame 331A in the preview frame 311, where the solid line frame 331A includes the target subject recognized by the electronic device 100. Since the electronic device 100 limits the target subject to the left region of the display area (the position of the guide point), the target subject is in the leftward area of the solid line frame 331A. The picture in the solid line frame 331A is the picture displayed in the display area 310. At this time, the person is displayed at the position of the guide point in the display area 310.
• the user interface shown in FIG. 7a is the same as that shown in FIG. 4b, so the description of the user interface in FIG. 7a may refer to the description of FIG. 4b above, which will not be repeated here.
• the electronic device 100 does not move, the target subject moves, and the electronic device 100 tracks the target subject.
• in FIG. 7b, it can be seen that the person in FIG. 7b has moved a certain distance toward the upper right compared to FIG. 7a, and the solid line frame 311B moves with the target subject (the person); that is, the target subject is still in the leftward area of the solid line frame 311B.
  • the picture in the solid line frame 311B is the picture displayed in the display area 320 , and at this time, the character is displayed at the position of the guide point in the display area 320 .
• the target subject continues to move. As shown in FIG. 7c, it can be seen that the person in FIG. 7c has moved a certain distance toward the lower right compared to FIG. 7b, and the solid line frame 311C moves with the target subject (the person); that is, the target subject is still in the leftward area of the solid line frame 311C.
  • the picture in the solid line frame 311C is the picture displayed in the display area 330 .
  • the character is displayed at the position of the guide point in the display area 330 .
• when the electronic device 100 does not move and the target subject continues to move, as shown in FIG. 7d, it can be seen that the person in FIG. 7d has moved a distance toward the upper right compared to FIG. 7c, and the person has moved to the right edge of the shooting range of the electronic device 100. The solid-line frame 311D moves with the target subject (the person), so the solid-line frame 311D is positioned at the right edge of the preview frame 311, and the target subject is displayed in the rightward area of the solid-line frame 311D. Since the position of the target subject in FIG. 7d is at the right edge of the shooting range of the electronic device 100, the target subject cannot be displayed at the position of the guide point.
  • the picture in the solid line frame 311D is the picture displayed in the display area 340 , and at this time, a person is displayed in the area on the right edge of the display area 340 .
  • the electronic device 100 can use any coordinate point in the display area as a guide point, and always guide the movement of the target subject to the guide point, so as to achieve the effect of displaying the target subject in any area of the display area.
  • the above describes how the electronic device 100 tracks the target subject when the electronic device 100 recognizes a target object and uses the target object as the target subject.
  • the following describes how to determine the target subject when the electronic device 100 recognizes that the image captured by the camera includes multiple target objects.
  • the electronic device 100 may determine a unique target subject among the multiple target objects according to a preset rule.
  • the preset rule may be the target object at the most central position in the currently collected image; the preset rule may also be the target object occupying the largest image area in the currently collected image; and so on.
• the electronic device 100 may provide selection icons for the multiple target objects; the electronic device 100 receives the user operation on the selection icons of the multiple target objects, analyzes the click position of the user operation, and determines the target subject among the multiple target objects.
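• The preset rules above could be implemented roughly as follows (a sketch; the detection boxes are assumed to come from some target detection algorithm, and the rule names are illustrative):

```python
import numpy as np

def pick_target_subject(boxes, img_w, img_h, rule="most_central"):
    """Pick one target subject among detected objects.

    boxes: list of (x, y, w, h) detections in pixel coordinates.
    rule:  "most_central" -> box whose center is nearest the image center;
           "largest_area" -> box covering the largest image area.
    """
    boxes = np.asarray(boxes, dtype=float)
    if rule == "most_central":
        centers = boxes[:, :2] + boxes[:, 2:] / 2.0
        dist = np.linalg.norm(centers - [img_w / 2.0, img_h / 2.0], axis=1)
        return int(np.argmin(dist))
    if rule == "largest_area":
        return int(np.argmax(boxes[:, 2] * boxes[:, 3]))
    raise ValueError(rule)

# Two detected persons: the first is nearer the image center, the second is larger.
print(pick_target_subject([(900, 500, 200, 400), (100, 100, 400, 800)], 2000, 1500))
```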
  • Figure 8a shows a possible user interface provided by the camera application.
  • the display image in the display area 400 is an image captured by the electronic device through the camera in real time. At this time, the display image in the display area 400 includes part of the body of person 1, person 2, a tree and a bell on the tree.
• based on the user interface provided by the camera application in FIG. 8a, when the electronic device 100 receives a user operation for the user interface, the electronic device 100 triggers the target tracking mode, exemplarily displaying the interface shown in FIG. 8b.
• the user operation may be that the user spreads two fingers apart in the display area 400 to increase the display magnification.
• when the display magnification of the electronic device 100 is enlarged to not less than the preset magnification, the electronic device 100 triggers the target tracking mode and exemplarily displays the interface shown in FIG. 4a.
  • the preset magnification may be, for example, 15 times.
  • Fig. 8a can also include a shortcut icon that triggers the target tracking mode, and the user operation can be a click operation for the shortcut icon.
• the electronic device 100 receives the click operation, triggers the target tracking mode, and exemplarily displays the interface shown in FIG. 8b.
• the picture displayed in the preview frame 411 is the image captured by the camera in real time. It can be seen that the image displayed in the preview frame 411 covers a larger range than the image displayed at a display magnification of 5 times (the image shown in the display area 410 in FIG. 8b).
• the picture displayed in the preview frame 411 includes person 1 (shown completely), person 2, a tree, and a bell on the tree.
  • the picture in the dotted box 411A in the preview box 411 is the picture displayed in the display area 410 , indicating which part of the picture captured by the camera in real time is the picture currently displayed in the display area 410 . Since the current display magnification is 15 times, the image displayed in the display area 410 at this time includes the trees and the bells on the trees, and the upper half of the body of the character 2 .
• when the electronic device 100 detects a user operation on the dotted frame 411A, the electronic device 100 enables the target tracking mode, and the electronic device 100 identifies the target subject in the image currently captured by the camera (i.e., the image displayed in the preview frame 411). As shown in FIG. 8c, at this time, the electronic device 100 recognizes two target objects, namely person 1 and person 2.
  • the electronic device 100 displays an area 411B and an area 411C, where the area 411B indicates one target object person 1 and the area 411C indicates another target object person 2 .
  • the electronic device 100 displays a guide slogan 412 for prompting the user to "click the target object to determine the tracking subject".
• if the electronic device 100 receives a user operation on the area 411B, the electronic device 100 determines person 1 as the target subject; if the electronic device 100 receives a user operation on the area 411C, the electronic device 100 determines person 2 as the target subject.
• for example, the electronic device 100 determines person 1 as the target subject, and displays the user interface shown in FIG. 8d.
  • the picture displayed in the preview frame 411 is the original image captured by the camera in real time.
  • the preview frame 411 includes a solid-line frame 411D, and the solid-line frame 411D includes the target subject (person 1) recognized by the electronic device 100. It can be considered that the solid-line frame 411D indicates the tracking of the target subject by the electronic device 100.
• when the target subject moves, the solid line frame 411D moves with the target subject.
  • the picture in the solid line frame 411D is the picture displayed in the display area 430 , which indicates which part of the picture captured by the camera in real time is the picture currently displayed in the display area 430 .
• the display area 420 in FIG. 8c displays the image obtained by magnifying the image collected by the camera by 15 times, intercepting the most central part of the image collected by the camera.
  • the electronic device 100 acquires the image currently displayed in the display area 420 .
• after the target tracking mode is turned on, the image displayed in the display area 430 is the part of the image captured by the camera that includes the target subject (person 1).
• the electronic device 100 acquires the image currently displayed in the display area 430. In this way, the effect of obtaining the picture/video of the target subject without moving the electronic device 100 can be achieved, thereby realizing the tracking of the target subject.
• when the electronic device 100 receives a user operation for the user interface, the electronic device 100 triggers the target tracking mode, and the electronic device 100 can directly display the interface shown in FIG. 8c.
  • the manner in which the electronic device 100 determines the target subject from the multiple target objects is not limited to the foregoing exemplary manner, which is not limited in this application.
• FIG. 9 is a method flowchart of a target tracking method provided by an embodiment of the present application. The steps of the target tracking method may include:
• Step S101: the electronic device 100 acquires image data, and determines a target subject in the image data.
  • the electronic device 100 obtains image data, wherein the electronic device 100 can obtain image data collected in real time by a camera.
• for example, the picture displayed in the preview frame 221 in FIG. 4a is the image data collected by the electronic device 100 in real time through the camera.
  • the electronic device 100 may also acquire image data sent by other devices, and may also be image data stored by the electronic device 100 .
• the electronic device 100 performs subject recognition on the acquired image data based on a target detection algorithm (which may also be an algorithm such as target recognition, tracking, or feature matching) and determines the target subject; alternatively, the electronic device 100 can also determine the target subject in this image data based on the received user operation.
  • the electronic device 100 displays the first interface.
  • the first interface displays the image captured by the electronic device 100 through the camera.
  • the first interface may be a preview interface of a camera application.
  • the electronic device 100 receives the first operation on the first interface, and in response to the first operation, the electronic device 100 triggers the target tracking mode to determine the target subject in the acquired image data.
  • the first operation may be a touch operation, a voice operation, a hovering gesture operation, etc., which is not limited herein.
• for example, the first interface is the preview interface of the camera application shown in FIG. 3b, and the first operation may be a user operation of increasing the display magnification. The electronic device 100 displays the interface shown in FIG. 4a and determines the target subject in the acquired image data; for example, the person in the preview frame 221 in FIG. 4a is the target subject.
  • the electronic device 100 collects image data in real time through a camera, and performs subject recognition on the collected image data.
  • the electronic device 100 performs target detection on the collected image through a target detection algorithm, and acquires one or more target objects in the image, where the one or more target objects may be people, animals, objects, buildings, and so on.
• the electronic device 100 determines a target subject among the one or more target objects. It should be noted that, when the electronic device 100 performs subject recognition on the collected image data, everything in the collected image data except the part of the target subject is considered to be the background part.
  • the electronic device 100 determines a target subject in the one or more target objects according to a preset rule.
  • the preset rule may be the target object at the most central position in the currently collected image; the preset rule may also be the target object occupying the largest image area in the currently collected image; and so on.
• when the electronic device 100 performs target detection on the acquired images through a target detection algorithm, the electronic device 100 analyzes and processes multiple frames of images, and determines the moving objects in the multiple frames of images; that is, the one or more target objects are moving objects. It should be noted that, in the collected image data, everything except the parts of the one or more target objects is considered to be the background part.
• after detecting one or more target objects in the image, the electronic device 100 provides the user with selection icons for the one or more target objects on the display interface; the electronic device 100 receives the user operation on the selection icons of the one or more target objects, analyzes the click position of the user operation, and determines the target subject.
• Step S102: the electronic device 100 determines the movement path of the target subject.
• after the electronic device 100 recognizes the subject and determines the target subject, it analyzes and tracks the movement path formed by the target subject based on the multi-frame images, and this movement path is also called the subject movement path.
  • the image frames collected by the electronic device 100 are N+2 frames
  • the picture displayed by the electronic device 100 in the display area in real time is the Nth frame image collected by the electronic device 100, where N is a positive integer.
  • the multi-frame images here refer to N+2 frames of images, and the electronic device 100 determines the movement path of the target subject based on the N+2 frames of images collected.
• the motion path of a target subject is described by the motion path in the X-axis direction and the motion path in the Y-axis direction.
• the embodiment of the present application takes the motion path in the X-axis direction as an example, as shown in FIG. 10, which shows a line graph of the motion paths of the target subject and the background.
  • the ordinate is the pixel coordinate in the x-axis direction
  • the abscissa is the number of image frames in the order of time t.
• path 1 in FIG. 10 is the movement path of the target subject in the x-axis direction, and the movement path of the target subject may be drawn based on a target point on the target subject.
• the electronic device 100 acquires the pixel coordinates of the target point on each frame of the original image, and based on the pixel coordinates of the target point in the x-axis direction on different image frames, the electronic device 100 determines the motion path of the target point in the x-axis direction across different image frames as the motion path of the target subject in the X-axis direction.
  • the motion path of the target subject may also be drawn based on an area on the original image, where the area is of a fixed size, and the image of the target subject is always within this area.
• the electronic device 100 obtains the pixel coordinates of a target point in the area on each frame of image (the center point of the area is taken as an example below), and based on the pixel coordinates of the center point of the area in the x-axis direction on different image frames, the electronic device 100 determines the motion path of the center point in the x-axis direction across different image frames as the motion path of the target subject.
• the movement path of the target subject may also be drawn based on multiple target points on the target subject.
• the electronic device 100 obtains the pixel coordinates of the multiple target points on each frame of the original image, and based on the pixel coordinates of the multiple target points in the x-axis direction on different image frames, the electronic device 100 determines the motion path of each of the multiple target points in the x-axis direction across different image frames.
  • the electronic device 100 weights the motion paths of the multiple target points, and determines at least one motion path as the motion path of the target subject.
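• A minimal sketch of building such an x-axis motion path (helper names are hypothetical; a simple weighted average stands in for the weighting described above):

```python
import numpy as np

def motion_path_x(points_per_frame, weights=None):
    """Build the x-axis motion path of the target subject.

    points_per_frame: array of shape (num_frames, num_points, 2) holding the
    pixel coordinates of the tracked target point(s) on each original frame.
    With several target points, their per-frame x coordinates are combined
    with a weighted average.
    """
    pts = np.asarray(points_per_frame, dtype=float)
    xs = pts[..., 0]                             # (num_frames, num_points)
    if weights is None:
        weights = np.full(xs.shape[1], 1.0 / xs.shape[1])
    return xs @ np.asarray(weights)              # one x value per frame

# Three frames, two target points on the subject (e.g. head and torso).
frames = [[(100, 200), (104, 400)],
          [(112, 201), (116, 401)],
          [(125, 199), (129, 402)]]
print(motion_path_x(frames))                     # the subject drifts rightward
```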
• Step S103: the electronic device 100 determines the motion path of the background.
• the electronic device 100 performs subject recognition on the collected image data, and in the collected image data, everything except the part of the target subject is regarded as the background part.
• the electronic device 100 analyzes and tracks a motion path formed by the background based on the multi-frame images, and this motion path is also referred to as the background motion path, wherein the multi-frame images here refer to N+2 frames of images.
  • a background motion path is described by the motion path in the X-axis direction and the motion path in the Y-axis direction.
• the embodiment of this application takes the motion path in the X-axis direction as an example; as shown in FIG. 10, path 2 in FIG. 10 is the motion path of the background in the x-axis direction.
  • the movement path of the background may be drawn based on a target point in the background.
• the electronic device 100 obtains the pixel coordinates of the target point on each frame of image, and based on the pixel coordinates of the target point in the x-axis direction on different image frames, the electronic device 100 determines the motion path of the target point in the x-axis direction across different image frames as the motion path of the background in the X-axis direction.
  • the motion path of the background may also be drawn based on multiple target points in the background.
• the electronic device 100 acquires the pixel coordinates of the multiple target points on each frame of image, and based on the pixel coordinates of the multiple target points in the x-axis direction on different image frames, the electronic device 100 determines the motion path of each of the multiple target points in the x-axis direction across different image frames.
  • the electronic device 100 weights the motion paths of the multiple target points, and determines at least one motion path as the motion path of the background.
  • the motion path of the background may also be drawn based on the pose difference data of the previous frame and the current frame.
• the electronic device 100 measures the pose transformation of the electronic device 100 based on six-degrees-of-freedom (6DoF) technology or 3DoF technology, and the electronic device 100 converts the spatial position change into the pixel coordinate change of the background motion path, thereby describing the motion path of the background.
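• As a heavily simplified sketch of that pose-based alternative (the pinhole-style conversion and the focal length are assumptions; the application only states that spatial position changes are converted into pixel coordinate changes):

```python
import numpy as np

def background_path_from_poses(yaw_deltas_rad, focal_px):
    """Accumulate an x-axis background motion path from per-frame yaw changes.

    Assumes a pinhole model: a small yaw rotation d_theta shifts the image of a
    distant background by roughly focal_px * tan(d_theta) pixels.
    """
    shifts = focal_px * np.tan(np.asarray(yaw_deltas_rad, dtype=float))
    return np.concatenate([[0.0], np.cumsum(shifts)])  # background x per frame

# Hand shake of roughly +/-0.1 degree per frame at a 3000 px focal length.
print(background_path_from_poses(np.deg2rad([0.1, -0.05, 0.08]), 3000.0))
```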
• it should be noted that step S102 and step S103 do not have a sequential order.
• Step S104: the electronic device 100 determines the smooth path of the background based on the motion path of the target subject and the motion path of the background.
  • the electronic device 100 determines the smooth path of the background based on the motion path of the target subject and the motion path of the background. Wherein, the electronic device 100 drives the background motion based on the smooth path, and can achieve stable tracking of the target subject.
• the electronic device 100 determines the difference polyline between the target subject and the guide point based on the movement path of the target subject.
• the difference polyline may also be called a displacement guide line, which is used to indicate the displacement of the target subject from the guide point.
• the guide point is the position where the electronic device 100 expects the target subject to be displayed in the output frame.
• by default, the electronic device 100 takes the center point of the output frame as the guide point; in some embodiments, the electronic device 100 may determine the guide point based on the received user operation.
• the center and scale of the output frame are fixed relative to the original image collected by the electronic device 100, and the electronic device 100 sets the guide point in the output frame. If the target point of the target subject is displayed on the guide point, the target subject is displayed in the area where the guide point is located.
• the target point of the target subject may be any point on the target subject.
• the preset guide point of the electronic device 100 may be any point of the output frame.
• for example, the electronic device 100 can set the guide point at the center point of the output frame; based on the difference polyline between the target point of the target subject and the center point of the output frame, the electronic device 100 guides the target point of the target subject to be displayed at the center point of the output frame, achieving the effect of always displaying the target subject in the center of the output frame.
• the electronic device 100 can use any coordinate point in the output frame as a guide point, and always display the target point of the target subject on the guide point, so as to achieve the effect of displaying the target subject at any position in the display area.
• the difference polyline between the target subject and the preset guide point is the difference between the pixel coordinates of the target point of the target subject and those of the preset guide point; as shown in FIG. 11, path 3 in FIG. 11 is the difference polyline between the target point of the target subject and the preset guide point.
  • the electronic device 100 obtains the pixel coordinates of the target point on each frame of image, and the electronic device 100 determines the path 3 based on the pixel coordinate difference in the X-axis direction of the target point and the preset guide point on different image frames.
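• Path 3 is then just a per-frame coordinate difference; a minimal sketch (helper name hypothetical):

```python
import numpy as np

def difference_polyline(subject_path_x, guide_x):
    """Path 3: per-frame difference between the target point's x pixel
    coordinate (path 1) and the guide point's x pixel coordinate."""
    return np.asarray(subject_path_x, dtype=float) - float(guide_x)

subject_x = [120.0, 132.0, 145.0, 160.0]  # path 1, from tracking the subject
print(difference_polyline(subject_x, guide_x=150.0))
```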
  • the electronic device 100 determines a smooth path of the background based on the difference polyline and the moving path of the background.
  • the path 2 is an exemplary moving path of the background
  • the electronic device 100 determines the smooth path of the background based on the difference polyline (path 3 ) and the moving path of the background (path 2 ).
  • the smooth path should not only ensure smoothness, but also tend to the midpoint.
• at most points, the point on the smooth path is the midpoint of path 3 and path 2; at the point f16, however, taking the midpoint of path 3 and path 2 as the smooth path would cause the smooth path to be unsmooth, so in order to ensure smoothness, the smooth path at the point f16 is not taken at the position of the midpoint. That is, the points on the smooth path are determined first on the basis of smoothness and then tend toward the midpoint of path 3 and path 2.
  • the electronic device 100 drives the background motion based on the smooth path, and can achieve stable tracking of the target subject.
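• One simple reading of this trade-off is an L2 path-smoothing problem whose data term pulls each point toward the per-frame midpoint of path 2 and path 3 and whose regularizer penalizes second differences; the quadratic objective and its weight are assumptions, not the application's stated solver:

```python
import numpy as np

def smooth_path(path2, path3, smooth_weight=50.0):
    """Solve a background path that stays smooth while tending toward the
    per-frame midpoint of path 2 (background) and path 3 (difference polyline).

    Minimizes sum (s - midpoint)^2 + smooth_weight * sum (2nd difference of s)^2,
    a linear least-squares problem.
    """
    mid = (np.asarray(path2, dtype=float) + np.asarray(path3, dtype=float)) / 2.0
    n = len(mid)
    D = np.zeros((n - 2, n))                    # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = (1.0, -2.0, 1.0)
    return np.linalg.solve(np.eye(n) + smooth_weight * (D.T @ D), mid)

path2 = [0.0, 5.0, 3.0, 8.0, 6.0, 11.0]         # jittery background path
path3 = [20.0, 21.0, 19.0, 22.0, 20.0, 23.0]    # displacement guide line
print(smooth_path(path2, path3))
```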
  • the smooth path of the background includes a path in the X-axis direction and a path in the Y-axis direction.
  • the above solution process is described by taking the movement in the X-axis direction as an example, and the smooth path obtained by the solution is a smooth path in the X-axis direction.
• similarly, the electronic device determines the motion path of the target subject and the motion path of the background in the Y-axis direction, and then determines the smooth path in the Y-axis direction based on them; the principle is the same as that for solving the smooth path in the X-axis direction, and will not be repeated here.
• since the smooth path here is solved in the X-axis direction, the electronic device 100 further optimizes the smooth path based on the left-boundary polyline and the right-boundary polyline of the output image. As shown in FIG. 11, path 4 is the left-boundary polyline of the output image, and path 5 is the right-boundary polyline of the output image. Path 4 and path 5 are the pixel coordinates of the left and right boundaries of the output frame on each frame of image after the electronic device 100 warps the original image data.
• when determining the smooth path of the background, the electronic device 100 always limits the smooth path of the background between the left-boundary polyline and the right-boundary polyline of the output image, so as to prevent the border of the output frame from falling outside the border of the background.
• similarly, in the Y-axis direction, the electronic device 100 further optimizes the smooth path based on the upper-boundary polyline and the lower-boundary polyline of the output image.
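• The boundary constraint can be sketched as a per-frame clamp (assuming paths 4 and 5 act as lower and upper bounds on the x coordinate):

```python
import numpy as np

def clamp_to_boundaries(smooth, left_polyline, right_polyline):
    """Keep the smooth path between the output image's left-boundary polyline
    (path 4) and right-boundary polyline (path 5) on every frame, so the
    output frame never falls outside the border of the background."""
    return np.clip(np.asarray(smooth, dtype=float),
                   np.asarray(left_polyline, dtype=float),
                   np.asarray(right_polyline, dtype=float))

print(clamp_to_boundaries([4.0, 9.0, -2.0], [0.0, 0.0, 0.0], [8.0, 8.0, 8.0]))
```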
  • the electronic device 100 drives the background to warp through the smooth path, so that anti-shake and tracking can be achieved synchronously.
• Step S105: the electronic device 100 warps the collected image based on the smooth path, and outputs the warped image.
• after determining the smooth path of the background, the electronic device 100 calculates a transformation matrix from the original motion path of the background to the smooth path, warps the original image onto the smooth path, and outputs the warped image.
  • the output image is an image that achieves stable tracking of the target subject. At this time, the electronic device 100 achieves anti-shake of the background and tracking of the target subject.
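• A minimal sketch of the warp step, using OpenCV (an assumption for illustration) and a pure per-frame translation in place of the general transformation matrix described above:

```python
import cv2
import numpy as np

def warp_frame(frame, orig_xy, smooth_xy):
    """Translate one frame so that its background lands on the smooth path.

    orig_xy / smooth_xy: this frame's background position before and after
    smoothing, in pixels. A real implementation would apply a full per-frame
    transformation matrix rather than a pure translation.
    """
    dx, dy = np.asarray(smooth_xy, dtype=float) - np.asarray(orig_xy, dtype=float)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
stabilized = warp_frame(frame, orig_xy=(5.0, 2.0), smooth_xy=(0.0, 0.0))
```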
  • FIG. 12a shows an implementation effect after an image frame (the Nth frame) collected by the electronic device 100 is moved according to a smooth path.
  • the electronic device 100 collects an original image frame (the Nth frame), and the original image frame includes a person (represented by a solid line).
• when only the background is smoothed, the smooth path is near the original path, and the person cannot be displayed in the output frame.
• after the electronic device 100 enters the target tracking mode, the electronic device 100 tracks the person based on the captured original image frame.
• when the electronic device receives an instruction to track the person, the electronic device 100 warps the original image frame.
• the dotted line indicates the image frame obtained after warp processing of the original image frame, and the dashed frame also includes a person (indicated by the dashed line), corresponding to the person in the original image frame.
  • the electronic device 100 moves the image frame indicated by the dotted frame based on the smooth path obtained by the solution in step S104, and can move the character into the output frame to achieve smooth tracking of the character.
• when the electronic device 100 collects the (N+1)th frame image, the electronic device 100 performs warp processing on the (N+1)th frame image, and the dotted frame is the image frame after warp processing of the (N+1)th frame image.
  • the electronic device 100 moves the image frame indicated by the dotted frame based on the smooth path obtained by the solution in step S104, and can move the character into the output frame to achieve smooth tracking of the character.
• the positions of the person in the output frame are the same in the Nth frame image and the (N+1)th frame image; that is, the preset guide point of the electronic device 100 is at the current position of the person in the output frame.
• the electronic device 100 drives the background movement through the motion of the target subject to move the target subject into the output picture.
  • two aspects of anti-shake smoothing and tracking are synchronously considered. Tracking of the target subject and path smoothing of the background share the same widest crop boundary (ie, the boundary of the original image data).
• in this way, the anti-shake space is not limited, and the tracking effect can also be provided at a smaller field of view, avoiding the problem that a subject visible in the original image is difficult to track due to a smaller cropping boundary.
  • the image frames collected by the electronic device 100 are N+2 frames
  • the picture displayed by the electronic device 100 in the display area in real time is the Nth frame image collected by the electronic device 100, where N is a positive integer.
• if the electronic device 100 performs anti-shake and target tracking separately, the electronic device 100 can only track the target subject based on N frames of images, that is, the tracking of the target subject by the electronic device 100 can only use the Nth frame of images, which will cause the output image to lag behind the Nth frame, resulting in a delay in the output image.
  • the smooth window of the anti-shake and the tracking can share the future frame information, and better balance the tracking performance and stability.
  • the electronic device 100 can collect the original image based on the telephoto camera, or can collect the image based on the wide-angle camera, that is, the image collected by the electronic device 100 is a wide-angle image with a larger field of view.
• the electronic device 100 performs subject recognition on the captured image through a target detection algorithm and determines the target subject. In this way, by obtaining a wide-angle image with a larger field of view, a wider tracking range can be achieved.
  • FIG. 13 is a method flowchart of another target tracking method provided by an embodiment of the present application, and the steps of the target tracking method may include:
• Step S201: the electronic device 100 performs subject identification on the acquired image data and determines N target subjects, where N is a positive integer greater than 1.
  • the electronic device 100 acquires image data, wherein the electronic device 100 may acquire image data collected in real time by a camera, or may acquire image data sent by other devices, or may be image data stored by the electronic device 100 . Then, the electronic device 100 performs subject recognition on the acquired image data based on the target detection algorithm and determines N target subjects, or the electronic device 100 may also determine N target subjects based on the received user operation.
  • the electronic device 100 collects image data in real time through a camera, and performs subject recognition on the collected image data.
  • the electronic device 100 performs target detection on the collected image through a target detection algorithm, and acquires multiple target objects in the image, where the multiple target objects may be people, animals, objects, buildings, and the like.
  • the electronic device 100 determines N target subjects among the plurality of target objects.
  • the electronic device 100 may determine N target subjects from among the plurality of target objects according to a preset rule.
• the preset rule may be the target objects closest and second closest to the center position in the currently collected image; the preset rule may also be the target objects occupying the largest and second largest image areas in the currently collected image; and so on.
• after detecting multiple target objects in the image, the electronic device 100 provides the user with selection icons for the multiple target objects on the display interface; the electronic device 100 receives the user operation on the selection icons of the multiple target objects, analyzes the click position of the user operation, and determines N target subjects. For example, when the user clicks the selection icons corresponding to two target objects respectively, the electronic device 100 determines that the two objects are two target subjects.
• when the electronic device 100 performs subject recognition on the collected image data, everything in the collected image data except the parts of the target subjects is considered to be the background part.
• Step S202: the electronic device 100 determines the motion path sets of the N target subjects.
• after the electronic device 100 performs subject identification and determines N target subjects, it determines the motion path of each of the N target subjects; for how to determine the motion path of a target subject, reference may be made to the relevant description in the above step S102, which will not be repeated here.
  • the electronic device 100 determines the motion path of each of the N target subjects, and obtains a motion path set of the target subjects, where the motion path set includes the motion path of each of the N target subjects.
• Step S203: the electronic device 100 determines the motion path of the background.
• for the related description of step S203, reference may be made to the description of the foregoing step S103, which will not be repeated here.
• Step S204: the electronic device 100 determines the smooth path of the background based on the motion path set of the N target subjects and the motion path of the background.
  • the electronic device 100 determines the smooth path of the background based on the motion path set of the N target subjects and the motion path of the background. Wherein, the electronic device 100 drives the background motion based on the smooth path, and can achieve stable tracking of N target subjects.
• the electronic device 100 determines a difference polyline between each target subject and the preset guide point based on the motion path sets of the N target subjects; that is, N difference polylines are determined. In some embodiments, the electronic device 100 performs a weighted average of the N difference polylines to obtain a difference polyline that optimizes the average distance between each target subject and the preset guide point.
• in some embodiments, the electronic device 100 scores the predicted output image for each of the N difference polylines through a composition score or an aesthetic score; for example, the closer the target subject is to the center of the output image, the higher the score. The electronic device 100 selects the difference polyline with the highest score among the N difference polylines as the determined difference polyline.
• in some embodiments, the electronic device 100 scores the N difference polylines according to the position and size of each target subject on the input image and the size of the final output image; for example, the closer the size of the target subject on the input image is to the size of the final output image, the higher the score. The electronic device 100 selects the difference polyline with the highest score among the N difference polylines as the determined difference polyline.
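• The two multi-subject strategies above might be sketched as follows (names and the scoring interface are hypothetical):

```python
import numpy as np

def combine_difference_polylines(polylines, weights=None, scores=None):
    """Reduce N per-subject difference polylines to a single guiding polyline.

    - With scores (e.g. composition/aesthetic scores), pick the best-scoring
      subject's polyline.
    - Otherwise take a weighted average, which optimizes the average distance
      between the subjects and the preset guide point.
    """
    P = np.asarray(polylines, dtype=float)       # shape (N, num_frames)
    if scores is not None:
        return P[int(np.argmax(scores))]
    if weights is None:
        weights = np.full(P.shape[0], 1.0 / P.shape[0])
    return np.asarray(weights) @ P

two_subjects = [[10.0, 12.0, 15.0], [-4.0, -2.0, 0.0]]
print(combine_difference_polylines(two_subjects))               # average
print(combine_difference_polylines(two_subjects, scores=[0.3, 0.9]))
```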
• the electronic device 100 determines the smooth path based on the difference polyline and the motion path of the background.
• Step S205: the electronic device 100 warps the collected image based on the smooth path, and outputs the warped image.
• for the related description of step S205, reference may be made to the description of the foregoing step S105, which will not be repeated here.
• in this way, joint tracking of multiple target subjects is realized by identifying multiple target subjects.
• FIG. 14 is a flowchart of the steps of a target tracking method provided by an embodiment of the present application. The steps may include:
• Step S301: the electronic device 100 enables the target tracking mode;
• Step S302: the electronic device 100 determines the target subject to be tracked according to the original image collected by the camera of the electronic device 100;
  • the electronic device 100 may determine the target subject through a target detection algorithm, or may determine the target subject through a received user operation. For the specific manner in which the electronic device 100 determines the target subject to be tracked, reference may be made to the related descriptions of FIG. 4a and FIG. 4b above.
• Step S303: the electronic device 100 displays a first tracking picture in the display area, where the first tracking picture includes the target subject, and the target subject is located at a first position in the viewing area of the camera;
• the first tracking picture can be, for example, the picture shown in FIG. 4b, where the target subject (the person) is located at the left edge (the first position) of the camera's viewing area (the picture displayed in the preview frame 221).
• Step S304: when the target subject moves, the electronic device 100 displays a second tracking picture in the display area, and in the second tracking picture the target subject is located at a second position in the viewing area of the camera.
• the second tracking picture may be, for example, the picture shown in FIG. 4c, in which the target subject (the person) has moved to the right relative to FIG. 4b, and the second position is different from the first position.
• the electronic device 100 may perform target tracking on the collected original image. After the electronic device 100 determines the target subject in the original image, when the target subject moves from the first position to the second position, the target subject is always displayed in the picture in the display area of the electronic device 100.
  • the original image collected by the camera refers to all image data that can be collected by the camera.
  • the embodiments of the present application track the target subject based on the image range of the original image, thereby realizing a larger tracking range.
  • the position of the electronic device 100 is not moved, and the viewing area of the camera remains unchanged.
• the tracking of the target subject in this embodiment of the present application may be tracking the moving target subject when the electronic device 100 is not moving, which can improve the stability with which the target subject is displayed on the output picture; in this case, the moving target subject is tracked and the target subject is always displayed on the output picture. Alternatively, when the electronic device 100 is moving, the target subject is tracked and the target subject is always displayed on the output picture.
  • the original image is not subjected to anti-shake or deblurring processing, or the original image includes all objects in the viewing area of the camera.
  • the electronic device 100 can also track the target subject.
  • the method further includes: the electronic device 100 displays the original image captured by the camera in a preview frame, and the preview frame occupies a part or all of the display area.
  • the preview frame may be a picture-in-picture frame, which is superimposed and displayed on the display area, so that the user can more intuitively see the comparison between the original image and the tracking image in the display area.
• the preview frame can also occupy the entire display area; for example, the preview frame is first displayed over the entire display area, and then the preview frame is superimposed and displayed on the display area.
  • the preview frame may be the preview frame 221 shown in FIGS. 4a-4f.
• the electronic device 100 determining the target subject to be tracked according to the original image captured by the camera of the electronic device 100 includes: the electronic device 100 receives the user's first operation in the preview frame, the first operation indicating the target subject selected by the user; the electronic device 100 determines the target subject to be tracked according to the first operation.
  • the original image captured by the camera is displayed in the preview frame, and the image displayed in the preview frame includes the target subject.
  • the electronic device 100 may determine the target subject based on the user's first operation on the preview frame.
• the first operation here may be, for example, a click operation on the display position of the target subject. That is, the electronic device 100 may determine the target subject based on the user operation.
• the electronic device 100 determining the target subject to be tracked according to the original image collected by the camera of the electronic device 100 includes: performing automatic target detection on the collected original image through a preset target detection algorithm to determine the target subject to be tracked.
• the preset target detection algorithm may be for a specific category of objects, for example, a target detection algorithm for detecting people, a target detection algorithm for detecting animals, a target detection algorithm for detecting objects, a target detection algorithm for detecting moving objects, and so on.
  • the preview frame further includes an output frame, and the image in the output frame corresponds to the picture displayed in the display area.
  • the output box is used to indicate which area in the original image the picture displayed in the current display area is.
• the output frame may be the dashed frame 221B in FIG. 4a, the solid-line frame 221C in FIG. 4b, the solid-line frame 221D in FIG. 4c, the solid-line frame 221E in FIG. 4d, the solid-line frame 221F in FIG. 4e, or the solid-line frame 221G in FIG. 4f.
• the method further includes: the electronic device 100 determines a guide point in the output frame, and the guide point indicates the display position of the target subject; the electronic device 100 displays the target subject at the first position on the first tracking picture according to the guide point, or displays the target subject at the second position on the second tracking picture.
• the guide point is used to determine the display position of the target subject in the output frame. If the guide point is at the center point of the output frame, the target subject is displayed at the center of the first tracking picture in the first tracking picture, and the target subject is displayed at the center of the second tracking picture in the second tracking picture.
• the electronic device 100 can stably display the target subject at the position where the guide point is located, so as to achieve the effect of stable tracking.
  • the guide point in FIGS. 4a to 4f is the center point of the output frame
  • the guide point in FIGS. 7a to 7d is a point to the left of the output frame.
• the electronic device 100 determining the guide point in the output frame includes: the electronic device 100 determines the guide point in the output frame according to default settings, or the electronic device 100 receives a second operation of the user, the second operation indicating the position of the user-selected guide point in the output frame. This provides the manner in which the electronic device 100 can determine the guide point through default settings or user operations.
• the electronic device 100 displaying the target subject at the first position on the first tracking picture according to the guide point, or displaying the target subject at the second position on the second tracking picture, includes: the electronic device 100 determines the motion path of the target subject; the electronic device 100 determines the difference polyline between the target subject and the guide point based on the motion path of the target subject; the electronic device 100 determines the motion path of the background in the original image; the electronic device 100 determines the smooth path based on the motion path of the background and the difference polyline; the electronic device 100 warps the original image based on the smooth path; the electronic device 100 displays the warp-processed image in the display area, and the picture displayed in the display area corresponds to the image in the output frame.
  • the algorithmic principle by which the electronic device 100 achieves stable tracking of the target subject is described here. Based on the idea of the foreground driving the background, the electronic device 100 drives the background motion through the motion of the target subject, so that the target subject moves into the output picture. When solving for the smooth path of the background, the electronic device 100 determines the difference polyline between the motion path of the target subject and the guide point, and uses the difference polyline as a guiding reference term for background smoothing to obtain the smooth path of the background. The original image captured by the electronic device 100 is warped based on the smooth path, so that the target subject can be stably displayed at the position of the guide point in the output box.
  • the electronic device 100's single solve for the smooth path of the background simultaneously accounts for the two aspects of anti-shake smoothing and tracking, so that tracking of the target subject and path smoothing of the background can share the same widest cropping boundary (that is, the boundary of the original image).
  • tracking of the target subject is achieved while the background is smoothed, balancing the followability and smoothness of the tracking result.
  • the electronic device 100 enabling the target tracking mode includes: the electronic device 100 detects a third operation of the user, the third operation including an operation of increasing the zoom magnification or the user directly turning on a switch for the target tracking mode.
  • this provides a way for the electronic device 100 to start the target tracking mode based on a user operation.
  • the operation of increasing the zoom magnification indicates the display magnification selected by the user; the electronic device 100 displays the first tracking picture or the second tracking picture in the display area according to the display magnification.
  • the camera is a telephoto camera.
  • the electronic device 100 uses a telephoto camera to collect image data.
  • the increased zoom magnification is greater than the preset magnification.
  • the preset magnification may be, for example, 15 times.
  • when the second position is an edge position of the viewfinder area of the camera, the second tracking picture includes the target subject. Because the electronic device 100's single solve for the smooth path of the background simultaneously accounts for both anti-shake smoothing and tracking, tracking of the target subject and path smoothing of the background can share the same widest cropping boundary (that is, the boundary of the original image). In this way, even if the target subject is at an edge position in the original image, the electronic device 100 can still track it, which solves the problem that, when the target subject is located at the edge of the shooting area of the electronic device 100, the electronic device 100 cannot track the target subject even though it has captured an image of the subject.
  • in the above-mentioned embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, it can be realized in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line) or wirelessly (for example, infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state drives), and the like.
  • all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium.
  • when the program is executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium includes: ROM, random access memory (RAM), magnetic disk, optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed is a target tracking method, the method including: an electronic device enables a target tracking mode; the electronic device determines, from an original image captured by a camera of the electronic device, a target subject to be tracked; the electronic device displays a first tracking picture in a display area, where the first tracking picture includes the target subject and the target subject is located at a first position in a viewfinder area of the camera; and when the target subject moves, the electronic device displays a second tracking picture in the display area, where the second tracking picture shows the target subject located at a second position in the viewfinder area of the camera. Here, the original image captured by the camera refers to all the image data the camera can capture. Embodiments of this application track the target subject over the image range of the original image, achieving a larger tracking range.

Description

Target tracking method and related apparatus
This application claims priority to Chinese Patent Application No. 202110486512.3, filed with the Chinese Patent Office on April 30, 2021 and entitled "Target tracking method and related apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of electronic technologies, and in particular to a target tracking method and a related apparatus.
Background
When a mobile phone shoots a moving object with a telephoto lens, it is difficult in handheld shooting to keep the moving subject in frame. If the user moves the phone to follow the motion of the moving subject, it is not only easy to lose the subject, but the picture also shakes. Especially in a preview scenario, the electronic device cannot, like an offline program, buffer all the video image information and use the image-frame motion paths of future frames to give the perfect output box position (balancing tracking and stability) for the current frame.
Therefore, how to ensure that the crop box output in real time both frames the moving subject well and keeps the output image stream continuous and sufficiently stable is a problem being studied by those skilled in the art.
Summary
Embodiments of this application provide a target tracking method and a related apparatus, capable of stably tracking a target subject.
According to a first aspect, this application provides a target tracking method, the method including: an electronic device enables a target tracking mode; the electronic device determines, from an original image captured by a camera of the electronic device, a target subject to be tracked; the electronic device displays a first tracking picture in a display area, where the first tracking picture includes the target subject and the target subject is located at a first position in a viewfinder area of the camera; and when the target subject moves, the electronic device displays a second tracking picture in the display area, where the second tracking picture shows the target subject located at a second position in the viewfinder area of the camera.
In embodiments of this application, the electronic device can perform target tracking on the captured original image. After the electronic device determines the target subject in the original image, when the target subject moves from the first position to the second position, the display area of the electronic device keeps the target subject in the displayed picture. Here, the original image captured by the camera refers to all the image data the camera can capture. Embodiments of this application track the target subject over the image range of the original image, achieving a larger tracking range.
In a possible implementation, the electronic device does not move, and the viewfinder area of the camera is unchanged. The tracking of the target subject in embodiments of this application may be tracking a moving target subject while the electronic device is stationary, which improves the stability with which the target subject is displayed in the output picture. Embodiments of this application may also track a moving target subject while the electronic device is moving, keeping the target subject in the output picture; or track a stationary target subject while the electronic device is moving, keeping the target subject in the output picture.
In a possible implementation, the original image has not undergone anti-shake or de-blurring processing, or the original image includes all objects within the viewfinder area of the camera. Embodiments of this application track the target subject over the image range of the original image, achieving a larger tracking range. Even when the target subject is located in an edge area of the original image, the electronic device can still track the target subject.
In a possible implementation, the method further includes: the electronic device displays the original image captured by the camera in a preview box, the preview box occupying part or all of the display area. The preview box may be a picture-in-picture box superimposed on the display area, so that the user can more intuitively compare the original image with the tracking picture in the display area. The preview box may also occupy the entire display area, for example first displaying the preview box over the entire display area and then superimposing the preview box on the display area.
In a possible implementation, the electronic device determining, from the original image captured by the camera of the electronic device, the target subject to be tracked includes: the electronic device receives a first operation of the user in the preview box, the first operation indicating the target subject selected by the user; and the electronic device determines, according to the first operation, the target subject to be tracked. The preview box displays the original image captured by the camera, so the image displayed in the preview box includes the target subject, and the electronic device can determine the target subject based on the user's first operation on the preview box; the first operation may be, for example, a tap on the display position of the target subject. That is, the electronic device can determine the target subject based on a user operation.
In a possible implementation, the electronic device determining, from the original image captured by the camera of the electronic device, the target subject to be tracked includes: the electronic device performs automatic target detection on the original image captured by the camera of the electronic device based on a preset target detection algorithm to determine the target subject to be tracked. Here, the preset target detection algorithm may be directed at a specific category of objects, for example a target detection algorithm for detecting people, one for detecting animals, one for detecting objects, or one for detecting moving objects, and so on.
In a possible implementation, the preview box further includes an output box, and the image in the output box corresponds to the picture displayed in the display area. The output box indicates which area of the original image the picture currently displayed in the display area comes from.
In a possible implementation, the method further includes: the electronic device determines a guide point in the output box, the guide point indicating the display position of the target subject; and the electronic device, according to the guide point, displays the target subject located at the first position in the first tracking picture, or displays the target subject located at the second position in the second tracking picture. Here, the guide point is used to determine the display position of the target subject in the output box. If the guide point is the center point of the output box, the target subject is displayed at the center of the first tracking picture and at the center of the second tracking picture. In this way, the electronic device can stably display the target subject at the position of the guide point, achieving stable tracking.
In a possible implementation, the electronic device determining the guide point in the output box includes: the electronic device determines the guide point in the output box according to a default setting, or the electronic device receives a second operation of the user, the second operation indicating the position, in the output box, of the guide point selected by the user. This provides ways for the electronic device to determine the guide point through default settings or user operations.
In a possible implementation, the electronic device, according to the guide point, displaying the target subject located at the first position in the first tracking picture, or displaying the target subject located at the second position in the second tracking picture includes: the electronic device determines the motion path of the target subject; the electronic device determines, based on the motion path of the target subject, the difference polyline between the target subject and the guide point; the electronic device determines the motion path of the background in the original image; the electronic device determines the smooth path based on the motion path of the background and the difference polyline; the electronic device warps the original image based on the smooth path; and the electronic device displays the warped image in the display area, the picture displayed in the display area corresponding to the image in the output box. This describes the algorithmic principle by which the electronic device stably tracks the target subject. Based on the idea of the foreground driving the background, the electronic device drives the background motion through the motion of the target subject so that the target subject moves into the output picture. When solving for the smooth path of the background, the electronic device determines the difference polyline between the motion path of the target subject and the guide point, and uses this difference polyline as a guiding reference term for background smoothing to obtain the smooth path of the background. Warping the original image captured by the electronic device based on this smooth path enables the target subject to be stably displayed at the position of the guide point in the output box. The electronic device's single solve for the smooth path of the background simultaneously accounts for both anti-shake smoothing and tracking, so that tracking of the target subject and path smoothing of the background share the same widest cropping boundary (that is, the boundary of the original image). Tracking of the target subject is thus achieved at the same time as background smoothing, balancing the followability and smoothness of the tracking result.
In a possible implementation, the electronic device enabling the target tracking mode includes: the electronic device detects a third operation of the user, the third operation including an operation of increasing the zoom magnification or the user directly turning on a switch for the target tracking mode. This provides a way for the electronic device to enable the target tracking mode based on a user operation.
In a possible implementation, the operation of increasing the zoom magnification indicates a display magnification selected by the user; the electronic device displays the first tracking picture or the second tracking picture in the display area according to the display magnification.
In a possible implementation, the camera is a telephoto camera. Generally, at high display magnifications, the electronic device uses a telephoto camera to capture image data.
In a possible implementation, the increased zoom magnification is greater than a preset magnification. The preset magnification may be, for example, 15x.
In a possible implementation, when the second position is an edge position of the viewfinder area of the camera, the second tracking picture includes the target subject. Because the electronic device's single solve for the smooth path of the background simultaneously accounts for both anti-shake smoothing and tracking, tracking of the target subject and path smoothing of the background share the same widest cropping boundary (that is, the boundary of the original image). In this way, even if the target subject is at an edge position in the original image, the electronic device can still track it, solving the problem that, when the target subject is located at the edge of the shooting area of the electronic device, the electronic device cannot track the target subject even though it has captured an image of the subject.
According to a second aspect, an embodiment of this application provides an electronic device, including: one or more processors and one or more memories; the one or more memories are coupled to the one or more processors; the one or more memories are configured to store computer program code, the computer program code including computer instructions; and when the computer instructions run on the processor, the electronic device is caused to perform the target tracking method in any possible implementation of any of the foregoing aspects.
According to a third aspect, an embodiment of this application provides a computer storage medium, including computer instructions, where when the computer instructions run on an electronic device, a communication apparatus is caused to perform the target tracking method in any possible implementation of any of the foregoing aspects.
According to a fourth aspect, an embodiment of this application provides a target tracking method, the method including: starting target tracking; capturing an original image; determining, from the original image, a target subject to be tracked; outputting information of a first tracking picture, the first tracking picture including the target subject, the target subject being located at a first position in a viewfinder area; and outputting information of a second tracking picture, the second tracking picture showing the target subject located at a second position in the viewfinder area.
According to a fifth aspect, an embodiment of this application provides a camera module, including an input unit, an output unit, and at least one camera;
the input unit is configured to start target tracking according to an instruction of an electronic device;
the at least one camera is configured to capture an original image and determine, from the original image, a target subject to be tracked;
the output unit is configured to output information of a first tracking picture, the first tracking picture including the target subject, the target subject being located at a first position in a viewfinder area; and output information of a second tracking picture, the second tracking picture showing the target subject located at a second position in the viewfinder area.
In some implementations, the at least one camera includes a processing unit; the at least one camera is further configured to determine, through the processing unit, the motion path of the target subject; the at least one camera is further configured to determine, through the processing unit, the difference polyline between the target subject and a guide point based on the motion path of the target subject; the at least one camera is further configured to determine, through the processing unit, the motion path of the background in the original image; the at least one camera is further configured to determine, through the processing unit, the smooth path based on the motion path of the background and the difference polyline; the at least one camera is further configured to warp, through the processing unit, the original image based on the smooth path; and the output unit is further configured to output the warped image in a display area, the picture displayed in the display area corresponding to the image in the output box.
According to a sixth aspect, an embodiment of this application provides a computer program product, where when the computer program product runs on a computer, the computer is caused to perform the target tracking method in any possible implementation of any of the foregoing aspects.
Brief Description of the Drawings
FIG. 1a and FIG. 1b are schematic diagrams of the principle of a target tracking method according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 3a and FIG. 3b are schematic diagrams of a set of application interfaces according to an embodiment of this application;
FIG. 4a to FIG. 4f are schematic diagrams of another set of application interfaces according to an embodiment of this application;
FIG. 5a and FIG. 5b are schematic diagrams of another set of application interfaces according to an embodiment of this application;
FIG. 6a and FIG. 6b are schematic diagrams of another set of application interfaces according to an embodiment of this application;
FIG. 7a to FIG. 7d are schematic diagrams of another set of application interfaces according to an embodiment of this application;
FIG. 8a to FIG. 8d are schematic diagrams of another set of application interfaces according to an embodiment of this application;
FIG. 9 is a method flowchart of a target tracking method according to an embodiment of this application;
FIG. 10 is a schematic diagram of the principle of another target tracking method according to an embodiment of this application;
FIG. 11 is a schematic diagram of the principle of another target tracking method according to an embodiment of this application;
FIG. 12a and FIG. 12b are schematic diagrams of the principle of another target tracking method according to an embodiment of this application;
FIG. 13 is a method flowchart of another target tracking method according to an embodiment of this application;
FIG. 14 is a method flowchart of another target tracking method according to an embodiment of this application.
Detailed Description of Embodiments
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.
The terms "first" and "second" below are used for descriptive purposes only and shall not be understood as implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more. Orientations or positional relationships indicated by terms such as "middle", "left", "right", "upper", and "lower" are based on the orientations or positional relationships shown in the drawings, are merely intended to facilitate describing this application and simplify the description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore they shall not be construed as limiting this application.
In embodiments of this application, anti-shake and tracking are the two key techniques for achieving stable tracking of a target in images captured in real time. Anti-shake crops the captured image based on a target point or target area so that the target point or target area always moves smoothly within a certain region of the picture, thereby stabilizing the displayed picture. Tracking determines the target subject in images captured in real time through algorithms such as subject detection and recognition or feature matching; for convenient presentation, the captured image can be cropped so that the target subject is always displayed in the output picture (for example, at the center).
Anti-shake and tracking are separate algorithms. If tracking is implemented first and then anti-shake, then because slight hand shake is amplified into a large jump of the preview picture at high magnification, running the subject detection algorithm on the original, non-stabilized input image greatly reduces the accuracy of detecting the target subject, making it difficult to accurately identify the same subject across consecutive frames based on its position in the original input image.
If anti-shake is implemented first and then tracking, running the subject detection algorithm on the stabilized image improves detection accuracy. However, because the original input image is cropped during anti-shake, both detection and the range of movement during tracking are restricted, and the larger boundary of the original input image cannot be exploited. Moreover, after the secondary crop for tracking, the target subject's path becomes unsmooth again due to jumps of the crop box and requires a second smoothing pass, which makes the peaks lag further and causes delay.
As shown in FIG. 1a, the outermost image box ① is the original image captured by the electronic device 100 through the camera. Since the electronic device 100 mostly does not shoot the photographed object straight on when capturing images, the electronic device 100 first needs to perform an image projection transformation (image warp) on the captured original image so that the image appears rectilinear. For example, if the camera of the electronic device 100 captures a QR code image from a side direction, the electronic device 100 first needs to warp the captured QR code image to obtain a front-facing QR code image, thereby enabling recognition of the QR code.
In FIG. 1a, the dashed box ② is the image obtained by the electronic device 100 warping the original image. The electronic device 100 performs anti-shake processing on the warped image, and the image box finally obtained after anti-shake cropping is image box ③ in FIG. 1a. It can be seen that after the electronic device 100 performs anti-shake processing on the original image, the image range available for tracking becomes the range shown by image box ③. When the target subject is at position 1, even though the electronic device 100 has captured an image of the target subject, it cannot track it; that is, because the image range becomes smaller after anti-shake processing, a target subject visible in the original image cannot be tracked. Only when the target subject is at position 2 can the electronic device 100 track it. It can be seen that when performing tracking detection of the target subject, the electronic device 100 cannot exploit the larger range provided by the original image.
Moreover, tracking the target subject requires further cropping of the area where the target subject is located, and because subject recognition across different image frames generally has positional deviations, the cropped area jitters. To prevent adjacent cropped frames from jumping due to differing crop areas, a second anti-shake smoothing of the cropped image is needed. As shown in FIG. 1b, path 1 is the motion path of the twice-cropped display image relative to the original image, path 2 is the ideal anti-shake smooth path of the twice-cropped display image, and path 3 is the actual anti-shake smooth path of the twice-cropped display image. It can be seen that in practice this processing causes peak lag, resulting in image display delay and a poor user experience.
Embodiments of this application propose a target tracking method based on path optimization. Based on the idea of the foreground driving the background, the electronic device 100 drives the background motion through the motion of the target subject (the foreground) so that the target subject moves into the output picture. When solving for the smooth path of the background, the electronic device 100 uses the motion path of the target subject as a guiding reference term for background smoothing to obtain the smooth path of the background. The electronic device 100 warps the original image captured by the electronic device based on this smooth path, so that the target subject can be stably displayed in the output picture. The electronic device 100's single solve for the smooth path of the background simultaneously accounts for anti-shake smoothing and tracking, so that tracking of the target subject and path smoothing of the background share the same widest cropping boundary (that is, the boundary of the original image). Tracking of the target subject is achieved at the same time as background smoothing, balancing the followability and smoothness of the tracking result.
Embodiments of this application can perform target tracking on image data captured in real time. Tracking may mean tracking a moving object while the shooting device is stationary, keeping the moving object in the output picture; tracking a moving object while the shooting device is moving, keeping it in the output picture; or tracking a stationary object while the shooting device is moving, keeping it in the output picture. For example, in the preview interface of a camera application, the target subject can be tracked in real time without moving the shooting device, and a picture or video containing the target subject can be obtained.
The following first introduces the electronic device 100 involved in embodiments of this application; the electronic device 100 includes a shooting device.
Referring to FIG. 2, FIG. 2 shows a schematic structural diagram of an exemplary electronic device 100 according to an embodiment of this application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction opcodes and timing signals to complete the control of instruction fetching and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
It can be understood that the interface connection relationships among the modules illustrated in this embodiment are merely illustrative and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may adopt interface connection manners different from those in the foregoing embodiment, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and so on.
The wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In some embodiments of this application, the display screen 194 displays the interface content currently output by the system. For example, the interface content is a preview interface provided by a camera application; the preview interface may display images captured in real time by the camera 193, or images captured in real time by the camera 193 and processed by the GPU.
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted to the camera's photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization of the image's noise, brightness, and skin tone, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
In some embodiments, the camera 193 may include types such as a telephoto camera, a wide-angle camera, and an ultra-wide-angle camera. The electronic device 100 captures images in real time through the camera 193 and may use different cameras at different magnifications. For example, at magnifications from 0.5x to 1x, the ultra-wide-angle camera may be used; the original image captured by the ultra-wide-angle camera is a 0.5x image, and display images at 0.5x–1x magnifications are obtained by cropping the original image (the 0.5x image). At magnifications from 1x to 3.5x, the wide-angle camera may be used; the original image captured by the wide-angle camera is a 1x image, and display images at 1x–3.5x magnifications are obtained by cropping the original image (the 1x image). At magnifications above 3.5x, the telephoto camera may be used; the original image captured by the telephoto camera is a 3.5x image, and display images at magnifications above 3.5x are obtained by cropping the original image (the 3.5x image).
Optionally, whether the telephoto camera, the wide-angle camera, or the ultra-wide-angle camera is used, the electronic device 100 can obtain a display image at any magnification by cropping the original image.
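To make the magnification-to-camera mapping concrete, the following is a minimal Python sketch, not the device's actual pipeline code; the function names are illustrative and the 0.5x/1x/3.5x breakpoints are taken from the example above. It selects a camera for a requested magnification and center-crops that camera's raw frame:

```python
import numpy as np

def pick_camera(zoom: float) -> tuple[str, float]:
    """Map a requested zoom to (camera name, native zoom of its raw frame),
    following the 0.5x / 1x / 3.5x breakpoints described above."""
    if zoom < 1.0:
        return "ultra_wide", 0.5
    if zoom < 3.5:
        return "wide", 1.0
    return "tele", 3.5

def crop_for_zoom(raw: np.ndarray, native_zoom: float, zoom: float) -> np.ndarray:
    """Center-crop the raw frame so the crop, once upscaled, shows `zoom`x."""
    h, w = raw.shape[:2]
    scale = native_zoom / zoom          # fraction of the raw frame that is kept
    ch, cw = int(h * scale), int(w * scale)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return raw[y0:y0 + ch, x0:x0 + cw]
```

At 15x, for instance, the telephoto camera's 3.5x raw frame would be cropped to 3.5/15 ≈ 23% of its width and height before being upscaled for display.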
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency-point energy, and so on.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer pattern between neurons in the human brain, it rapidly processes input information and can also continuously learn. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and so on.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and applications required for at least one function (such as a sound playback function or an image playback function). The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, universal flash storage (UFS), and so on.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert analog audio input into a digital audio signal. The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. The microphone 170C, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. The headset jack 170D is used to connect wired headsets.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals.
The gyroscope sensor 180B can be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (the X, Y, and Z axes of the electronic device) can be determined through the gyroscope sensor 180B. The gyroscope sensor 180B can be used for shooting anti-shake. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate based on that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion to achieve anti-shake. The gyroscope sensor 180B can also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 180C is used to measure air pressure.
The magnetic sensor 180D includes a Hall sensor.
The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the attitude of the electronic device and is applied to landscape/portrait switching, pedometers, and other applications. In some optional embodiments of this application, the acceleration sensor 180E can be used to capture the acceleration value generated when the user's finger touches the display screen (or the user's finger taps the side frame of the rear housing of the electronic device 100) and transmit that acceleration value to the processor, so that the processor identifies through which finger part the user entered the user operation.
The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode.
The ambient light sensor 180L is used to sense the brightness of ambient light.
The fingerprint sensor 180H is used to collect fingerprints.
The temperature sensor 180J is used to detect temperature.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be provided on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen".
The bone conduction sensor 180M can acquire vibration signals.
The keys 190 include a power key, volume keys, and the like. The motor 191 can generate vibration prompts. The indicator 192 may be an indicator light and can be used to indicate charging status and battery level changes, and also to indicate messages, missed calls, notifications, and so on. The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or pulled out of the SIM card interface 195 to make contact with or separate from the electronic device 100.
The following describes, with reference to application scenarios, how a target tracking method provided in this application is presented on the display interface of the electronic device 100.
First, the display interface through which the electronic device 100 starts the target tracking function is introduced.
As shown in FIG. 3a, FIG. 3a shows an exemplary user interface for displaying a list of applications on the electronic device. FIG. 3a includes a status bar 201 and a display interface 202. The status bar 201 may include one or more signal strength indicators 203 for mobile communication signals (also called cellular signals), one or more signal strength indicators 204 for wireless fidelity (Wi-Fi) signals, a Bluetooth indicator 205, a battery status indicator 206, and a time indicator 207. When the Bluetooth module of the electronic device is on (that is, the electronic device is supplying power to the Bluetooth module), the Bluetooth indicator 205 is displayed on the display interface of the electronic device.
The display interface 202 displays multiple application icons, including an application icon of a camera 208. When the electronic device detects a user operation acting on the application icon of the camera 208, the electronic device displays the application interface provided by the camera application.
Referring to FIG. 3b, FIG. 3b shows a possible user interface provided by the camera application. The application interface of the camera 208 is shown in FIG. 3b and may include: a display area 210, a magnification adjustment area 211, a function bar 212, a mode selection area 213, a gallery icon 214, a shoot icon 215, and a switch icon 216.
The picture displayed in the display area 210 is the image captured in real time by the electronic device through the camera; at this point the picture in the display area 210 includes part of a person's body, a tree, and a bell on the tree. The camera currently used by the electronic device may be the default camera set by the camera application, or the camera used when the camera application was last closed.
The magnification adjustment area 211, which may also be called a focal length adjustment area, is used to adjust the shooting focal length of the camera and thereby the display magnification of the picture in the display area 210. The magnification adjustment area 211 includes an adjustment slider 211A that indicates the display magnification; the adjustment slider 211A currently reads 5x, indicating that the current display magnification is 5x. The user can slide the adjustment slider 211A in the magnification adjustment area 211 to zoom the picture in the display area 210 in or out. For example, sliding the adjustment slider 211A up increases the display magnification of the picture in the display area 210, which can push the person visible at the current 5x out of the display area; sliding the adjustment slider 211A down decreases the display magnification, which can bring the person at the current 5x fully into the display area.
In some embodiments, adjusting the display magnification of the picture in the display area 210 is not limited to the magnification adjustment area 211; the user can also use other shortcuts. For example, in the display area 210, spreading two fingers apart increases the display magnification and pinching two fingers together decreases it.
The function bar 212 provides shortcut functions of the camera application, including, for example, enabling smart vision (icon 212A), toggling the flash (icon 212B), enabling the AI photography master (icon 212C), switching the color mode (icon 212D), and opening the camera settings interface (icon 212E).
The mode selection area 213 provides different shooting modes; depending on the shooting mode selected by the user, the camera enabled by the electronic device and the shooting parameters differ. It may include night mode, portrait mode, photo mode, video mode, professional mode, and more. In FIG. 3b the icon of the photo mode is marked, prompting the user that the current mode is photo mode.
In night mode, the electronic device 100 can improve detail rendering in highlights and shadows, control noise, and present more picture detail. In photo mode, the electronic device 100 suits most shooting scenarios and can automatically adjust shooting parameters according to the current environment. In video mode, the electronic device 100 can be used to shoot a video. When a user operation acting on "More" is detected, in response to that operation the electronic device can display other selection modes, such as a panorama mode (automatic stitching: the electronic device stitches multiple consecutively shot photos into one photo, widening the field of view) and an HDR mode (automatically shoots three photos in a burst—underexposed, normally exposed, and overexposed—and selects the best parts to synthesize one photo), and so on.
When a user operation acting on the application icon of any mode in the mode selection area 213 (for example night mode, photo mode, video mode, and so on) is detected, in response to that operation the electronic device 100 can enter the corresponding mode. Correspondingly, the image displayed in the display area 210 is the image processed in the current mode.
The mode icons in the mode selection area 213 are not limited to virtual icons; modes may also be selected through physical keys deployed on the shooting device/electronic device, so that the shooting device enters the corresponding mode.
Gallery icon 214: when a user operation acting on the gallery icon 214 is detected, in response to that operation the electronic device can enter the gallery of the electronic device 100, which may include captured photos and videos. The gallery icon 214 can be displayed in different forms; for example, after the electronic device saves an image currently captured by the camera, a thumbnail of that image is displayed in the gallery icon 214.
Shoot icon 215: when a user operation acting on the shoot icon 215 (for example a touch operation, a voice operation, or a gesture operation) is detected, in response to that operation the electronic device 100 obtains the image currently displayed in the display area 210 and saves it in the gallery. The gallery can be entered through a user operation on the gallery icon 214 (for example a touch operation or a gesture operation).
Switch icon 216: can be used to switch between the front camera and the rear camera. The shooting direction of the front camera is the same as the display direction of the screen of the electronic device the user is using, and the shooting direction of the rear camera is opposite to the display direction of the screen of the electronic device the user is using.
Based on the user interface provided by the camera application in FIG. 3b above, when the electronic device 100 receives a user operation on this user interface, the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 4a.
The user operation may be the user sliding the adjustment slider 211A in the magnification adjustment area 211 upward so that the display magnification of the picture in the display area 210 increases; when the display magnification of the electronic device 100 is increased to not less than a preset magnification, the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 4a. The preset magnification may be, for example, 15x.
Optionally, the user operation may be the user spreading two fingers apart in the display area 210 to increase the display magnification; when the display magnification of the electronic device 100 is increased to not less than the preset magnification, the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 4a.
The manner of triggering the target tracking mode is not limited to the above; embodiments of this application may also cause the electronic device 100 to trigger the target tracking mode in other ways.
Optionally, FIG. 3b may also include a shortcut icon for triggering the target tracking mode; the user operation may be a tap on this shortcut icon, and upon receiving the tap the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 4a.
As shown in FIG. 4a, FIG. 4a exemplarily shows an application interface for triggering the electronic device 100 to enable the target tracking mode. When the display magnification of the electronic device 100 is not less than the preset magnification, the application interface displayed by the electronic device 100 may be that shown in FIG. 4a. In this application interface, the adjustment slider 211A in the magnification adjustment area 211 currently reads 15x, indicating that the current display magnification is 15x.
The application interface shown in FIG. 4a includes a display area 220, a preview box 221, and a guide caption 222. Since the current display magnification is 15x, the picture in the display area 220 is magnified 15x compared to the picture in the display area 210 in FIG. 3b.
The picture displayed in the preview box 221 is the original image captured in real time by the camera; the preview box 221 is a display region within the display area 220, generally floating above the picture displayed in the display area 220. It can be seen that the image displayed in the preview box 221 covers a larger range than the image displayed at 5x (such as the display image of the display area 210 in FIG. 3b). Generally speaking, the image displayed at a 5x display magnification is not the original image captured by the camera but an image obtained by cropping the original image captured by the camera. Here, the image displayed in the preview box is the original image captured in real time by the camera, showing that the tracking range in embodiments of this application is the range of the original image captured by the camera.
In FIG. 4a, the picture displayed in the preview box 221 at this point includes a person (complete), a tree, and a bell on the tree. The picture in the dashed box 221B within the preview box 221 is the picture displayed in the display area 220, indicating which part of the picture captured in real time by the camera is currently displayed in the display area 220. Since the current display magnification is 15x, the image displayed in the display area 220 at this point includes the tree and the bell on the tree, but no person. The exit icon 221A is used to close the preview box 221; when the electronic device 100 detects a user operation on the exit icon 221A, the electronic device 100 closes the preview box 221 and, optionally, closes the preview box 221 and the guide caption 222 together.
The guide caption 222 provides an exemplary way of enabling the target tracking mode, displaying the text "Tap the dashed box to track the target" to prompt the user; the dashed box is the dashed box 221B.
In some embodiments, the dashed box 221B can indicate different information through different display forms (display color, display shape). For example, when the display color of the dashed box 221B is a first color, it indicates that the electronic device 100 has not currently detected a target subject; optionally, the guide caption 222 may display the text "No target detected" to prompt the user. When the display color of the dashed box 221B is a second color, it indicates that the electronic device 100 has detected a target subject; optionally, the guide caption 222 may display the text "Tap the dashed box to track the target" to prompt the user.
Tapping the dashed box (dashed box 221B) to track the target is merely one way of enabling the target tracking mode and has no relation to the picture displayed within the dashed box 221B. In FIG. 4a, when the target subject detected by the electronic device 100 is the person, the picture of that person is not within the dashed box 221B; when the user taps the dashed box 221B, the electronic device 100 tracks the person.
In some embodiments, when the electronic device 100 has detected the target subject, the electronic device 100 can highlight the target subject in the preview box 221, for example displaying another dashed box framing the target subject, to indicate to the user the target subject detected by the electronic device 100.
Based on FIG. 4a above, the electronic device 100 can enable the target tracking mode. Referring to FIG. 4b, FIG. 4b exemplarily shows an application interface in the target tracking mode.
In some embodiments, when the electronic device 100 detects a user operation on the dashed box 221B, the electronic device 100 enables the target tracking mode. In some embodiments, when the electronic device 100 detects a user operation on the target subject in the preview box 221, the electronic device 100 enables the target tracking mode. In some embodiments, FIG. 4a may also include a shortcut icon for enabling the target tracking mode; upon receiving a tap on this shortcut icon, the electronic device 100 enables the target tracking mode and, for example, displays the application interface shown in FIG. 4b.
The application interface shown in FIG. 4b includes a display area 230, the preview box 221, and a guide caption 223.
The picture displayed in the preview box 221 is the original image captured in real time by the camera. The electronic device 100 enables the target tracking mode, recognizes the target subject in the image currently captured by the camera (that is, the person in the image displayed in the preview box 221), and displays a solid box 221C in the preview box 221. The picture in the solid box 221C is the picture displayed in the display area 230, indicating which part of the picture captured in real time by the camera is currently displayed in the display area 230; in embodiments of this application, the solid box may also be called the picture output box. In FIG. 4b the person is displayed in the left-of-center region of the display area 230. Here, since the target subject is at the left edge of the picture displayed in the preview box 221, the solid box 221C is selected at the left edge of the preview box 221, and within the solid box 221C the target subject is in the left-of-center region. When the target subject moves, the display position of the solid box 221C moves to follow the target subject.
The guide caption 223 provides a way to exit the target tracking mode, prompting the user "Tap the solid box to exit tracking"; the solid box is the solid box 221C. When the electronic device 100 detects a user operation on the solid box 221C, the electronic device 100 exits the target tracking mode; optionally, the electronic device 100 then displays the user interface shown in FIG. 4a; optionally, the electronic device 100 closes the preview box 221 and the guide caption 223 together and displays in the display area 230 the picture of the display area 220 in FIG. 4a.
It can be seen that the display area 220 in FIG. 4a displays the image captured by the camera magnified 15x, cropping the most central part of the captured image. At this point, when a tap on the shoot icon 215 is detected, the electronic device 100 obtains the image currently displayed in the display area 220. In FIG. 4b the target tracking mode is on; with the camera capturing the same image (the preview box 221 shows the same image in FIG. 4a and FIG. 4b), the image displayed in the display area 230 is the part of the captured image that includes the target subject. At this point, when a tap on the shoot icon 215 is detected, the electronic device 100 obtains the image currently displayed in the display area 230. In this way, a picture/video of the target subject can be obtained without moving the electronic device 100, achieving tracking of the target subject.
The display interface through which the electronic device 100 starts the target tracking function has been introduced above. Embodiments of this application do not limit the display interface or the user operation for starting the target tracking function.
In the target tracking mode, with the electronic device 100 not moving and the target subject moving, the electronic device 100 can track the target subject. As shown in FIG. 4c, in FIG. 4c the picture displayed in the preview box 221 is the original image captured in real time by the camera. Compared with the image captured in FIG. 4b, since the electronic device 100 has not moved, the background (the tree) in the preview box 221 is unchanged; it can be seen that the person in FIG. 4c has moved a distance to the right compared with FIG. 4b, but the solid box 221D and the solid box 221C are at the same position relative to the preview box 221. The picture in the solid box 221D is the picture displayed in the display area 240, indicating which part of the picture captured in real time by the camera is currently displayed in the display area 240; at this point the person is displayed in the central region of the display area 240. Here, the solid box 221D is selected at the left edge of the preview box 221, and the target subject is in the central region of the solid box 221D.
With the electronic device 100 not moving, the target subject keeps moving, as shown in FIG. 4d. In FIG. 4d, compared with the image captured in FIG. 4c, since the electronic device 100 has not moved, the background (the tree) in the preview box 221 is unchanged. It can be seen that the person in FIG. 4d has moved a distance to the right compared with FIG. 4c, and the solid box 221E moves following the target subject (the person); that is, the target subject remains in the central region of the solid box 221E. The picture in the solid box 221E is the picture displayed in the display area 250; at this point the person is displayed in the central region of the display area 250.
With the electronic device 100 not moving, the target subject keeps moving, as shown in FIG. 4e. In FIG. 4e, compared with the image captured in FIG. 4d, since the electronic device 100 has not moved, the background (the tree) in the preview box 221 is unchanged. It can be seen that the person in FIG. 4e has moved a distance toward the upper right compared with FIG. 4d, and the solid box 221F moves following the target subject (the person); that is, the target subject remains in the central region of the solid box 221F. The picture in the solid box 221F is the picture displayed in the display area 260; at this point the person is displayed in the central region of the display area 260.
With the electronic device 100 not moving, the target subject keeps moving, as shown in FIG. 4f. In FIG. 4f, compared with the image captured in FIG. 4e, since the electronic device 100 has not moved, the background (the tree) in the preview box 221 is unchanged. It can be seen that the person in FIG. 4f has moved a distance to the right compared with FIG. 4e; at this point the person has moved to the edge of the shooting range of the electronic device 100, and the solid box 221G moves following the target subject (the person). Since the target subject is at the right edge of the picture displayed in the preview box 221, the solid box 221G is selected at the right edge of the preview box 221, and within the solid box 221G the target subject is in the right-of-center region. The picture in the solid box 221G is the picture displayed in the display area 270; at this point the person is displayed in the right-of-center region of the display area 270.
Based on FIG. 4b to FIG. 4f above, it can be seen that in the target tracking mode, even if the electronic device 100 does not move (the position of the tree in the preview box never changes), the electronic device 100 can automatically track the target subject and keep displaying image data containing the target subject in the display area.
It can be seen that for FIG. 4b to FIG. 4f, the electronic device 100 tends to keep displaying the target subject in the central region of the display area. Here, the electronic device 100 takes the center coordinates of the output box as the guide point; after the electronic device 100 determines the guide point, it always guides the target subject to the position of the guide point, achieving the effect of displaying the target subject in the central region of the display area. The center and aspect ratio of the output box are fixed relative to the original image captured by the electronic device 100; the size of the output box is related to the display magnification of the electronic device 100. The larger the display magnification of the electronic device 100, the smaller the output box relative to the original image; the smaller the display magnification, the larger the output box relative to the original image.
In some embodiments, the guide point is a pixel coordinate point with respect to the original image captured by the electronic device 100; with the guide point selected in the output box, the electronic device 100 can keep displaying the target subject at the position of the guide point in the output box.
In FIG. 4b to FIG. 4f, because in FIG. 4b the target subject is at the left edge of the shooting range of the electronic device 100, given the current display magnification (15x) and the display aspect of the display area, the target subject cannot be displayed at the very center of the display area; at this point the target subject is displayed at the left edge of the display area. When the target subject keeps moving right into the shooting range of the electronic device 100, for example in FIG. 4c to FIG. 4e, the electronic device 100 tracks the target subject and displays it in the central region of the display area. When the target subject continues moving right to the right edge of the shooting range of the electronic device 100, as in FIG. 4f, given the current display magnification (15x) and the display aspect of the display area, the target subject cannot be displayed at the very center; at this point the target subject is displayed at the right edge of the display area.
It can be understood that the electronic device 100 tends to keep displaying the target subject in the central region of the display area. In practice, factors such as fast motion of the target subject, strong shaking of the electronic device 100, and slow data processing by the electronic device 100 mean the actual display is not as idealized as shown in FIG. 4b to FIG. 4f; for example, even in the cases of FIG. 4c to FIG. 4e, the target subject is not necessarily displayed stably at the very center and may, within a certain error range, be displayed in the regions around the center, above, below, to the left, or to the right of it.
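The edge behavior described in the last two paragraphs follows from simple clamping of the output box. The following is a minimal sketch under two assumptions made for illustration: the guide point is expressed as a fraction of the output box, the target point as a pixel coordinate on the original image, and the box size is taken as the original image size divided by the magnification relative to the raw frame's own magnification:

```python
import numpy as np

def place_output_box(img_w, img_h, rel_zoom, target_xy, guide_frac=(0.5, 0.5)):
    """Position the output box on the original image so the target point lands
    on the guide point, then clamp the box inside the image bounds.
    rel_zoom: display magnification relative to the raw frame's magnification.
    guide_frac: guide point as a fraction of the box size (0.5, 0.5 = center)."""
    bw, bh = img_w / rel_zoom, img_h / rel_zoom   # box shrinks as zoom grows
    gx, gy = guide_frac[0] * bw, guide_frac[1] * bh
    x0 = np.clip(target_xy[0] - gx, 0, img_w - bw)
    y0 = np.clip(target_xy[1] - gy, 0, img_h - bh)
    return x0, y0, bw, bh
```

When the subject reaches the edge of the original image, the clamp prevents the box from leaving the image, so the subject drifts from the guide point toward the display edge—exactly the behavior shown in FIG. 4b and FIG. 4f.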
In some embodiments, as shown in FIG. 5a, when the electronic device 100 detects a user operation on the solid box 221G, the electronic device 100 exits the target tracking mode; optionally, the electronic device 100 displays the user interface shown in FIG. 5b. In FIG. 5b, the electronic device 100 has exited the target tracking mode, and the picture displayed in the preview box 221 is the original image captured in real time by the camera. Since the current display magnification is 15x, the picture in the display area 280 is magnified 15x compared with the picture displayed in the preview box 221. The picture in the dashed box 221H within the preview box 221 is the picture displayed in the display area 280, indicating which part of the image captured in real time by the camera is currently displayed in the display area 280. The guide caption 224 in FIG. 5b reads "Tap the dashed box to track the target"; the dashed box is the dashed box 221H.
In some embodiments, as shown in FIG. 6a, when the electronic device 100 detects a user operation on the exit icon 221K, the electronic device 100 closes the preview box 221; optionally, the electronic device 100 then displays the user interface shown in FIG. 6b. In FIG. 6b, compared with FIG. 6a, the electronic device 100 has closed both the preview box 221 and the guide caption 224 and displays in the display area 290 the picture of the display area 280 in FIG. 6a.
For FIG. 4b to FIG. 4f above, the electronic device 100 tends to keep displaying the target subject in the central region of the display area. In embodiments of this application, the region is not limited to the very center; the electronic device 100 can also confine the target subject to any region of the display area. The electronic device 100 can take any coordinate point of the display area as the guide point and always guide the motion of the target subject to that guide point, achieving the effect of displaying the target subject in any region of the display area.
In some embodiments, the electronic device 100 can determine the guide point based on a received user operation. For example, the electronic device 100 provides an input box for setting the guide point, and the electronic device 100 receives a pixel coordinate point entered by the user as the guide point; or the electronic device 100 receives a tap on the display area and determines that the position of the tap is the position of the guide point.
Taking FIG. 7a to FIG. 7d as an example, the following exemplarily describes the display interface when the electronic device 100 confines the target subject to a left-of-center region of the display area.
Exemplarily, in FIG. 4a, when the electronic device 100 detects a user operation on the dashed box 221B, the electronic device 100 enables the target tracking mode and displays the user interface shown in FIG. 7a. In FIG. 7a, the electronic device 100 determines a guide point in the display area 310 and always guides the motion of the target subject to that guide point; the guide point is located in a left-of-center region of the display area. The picture displayed in the preview box 311 is the image captured in real time by the camera, at a display magnification of 5x. The electronic device 100 enables the target tracking mode, recognizes the target subject in the image currently captured by the camera (that is, the person in the image displayed in the preview box 311), and determines the position of the solid box 331A in the preview box 311; the solid box 331A includes the target subject recognized by the electronic device 100. Since the electronic device 100 confines the target subject to the left-of-center region of the display area (the position of the guide point), the target subject is in the left-of-center region of the solid box 331A. The picture in the solid box 331A is the picture displayed in the display area 310; at this point the person is displayed at the position of the guide point in the display area 310. The user interface shown in FIG. 7a is the same as FIG. 4b, so for the description within the user interface of FIG. 7a, refer to the description of FIG. 4b above; details are not repeated here.
In the target tracking mode, the electronic device 100 does not move, the target subject moves, and the electronic device 100 tracks the target subject. As shown in FIG. 7b, it can be seen that the person in FIG. 7b has moved a distance toward the upper right compared with FIG. 7a, and the solid box 311B moves following the target subject (the person); that is, the target subject remains in the left-of-center region of the solid box 311B. The picture in the solid box 311B is the picture displayed in the display area 320; at this point the person is displayed at the position of the guide point in the display area 320.
With the electronic device 100 not moving, the target subject keeps moving, as shown in FIG. 7c. It can be seen that the person in FIG. 7c has moved a distance toward the lower right compared with FIG. 7b, and the solid box 311C moves following the target subject (the person); that is, the target subject remains in the left-of-center region of the solid box 311C. The picture in the solid box 311C is the picture displayed in the display area 330; at this point the person is displayed at the position of the guide point in the display area 330.
With the electronic device 100 not moving, the target subject keeps moving, as shown in FIG. 7d. It can be seen that the person in FIG. 7d has moved a distance toward the upper right compared with FIG. 7c; at this point the person has moved to the right edge of the shooting range of the electronic device 100, and the solid box 311D moves following the target subject (the person), so the solid box 311D is selected at the right edge of the preview box 311, and within the solid box 311D the target subject is displayed in the right-of-center region. Since in FIG. 7d the target subject is at the right edge of the shooting range of the electronic device 100, given the current display magnification (15x) and the display aspect of the display area, the target subject cannot be displayed in the left-of-center region of the display area (the position of the guide point); at this point the target subject is displayed at the right edge of the display area. The picture in the solid box 311D is the picture displayed in the display area 340; at this point the person is displayed in the right-edge region of the display area 340.
In summary, the electronic device 100 can take any coordinate point of the display area as the guide point and always guide the motion of the target subject to that guide point, achieving the effect of displaying the target subject in any region of the display area.
The above describes how, when the electronic device 100 recognizes one target object, it takes that target object as the target subject and tracks it. The following describes how the target subject is determined when the electronic device 100 recognizes that the image captured by the camera includes multiple target objects.
Optionally, the electronic device 100 can determine a single target subject among the multiple target objects according to a preset rule. The preset rule may be the target object at the most central position in the currently captured image; the preset rule may also be the target object occupying the largest image area in the currently captured image; and so on.
Optionally, when the electronic device 100 recognizes that the image captured by the camera includes multiple target objects, the electronic device 100 can provide selection icons for the multiple target objects; the electronic device 100 receives a user operation on the selection icons of the multiple target objects, parses the tapped position of the user operation, and determines the target subject among the multiple target objects.
As shown in FIG. 8a, FIG. 8a shows a possible user interface provided by the camera application. The picture in the display area 400 is the image captured in real time by the electronic device through the camera; at this point the picture in the display area 400 includes part of the body of person 1, person 2, a tree, and a bell on the tree.
Based on the user interface provided by the camera application in FIG. 8a above, when the electronic device 100 receives a user operation on this user interface, the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 8b.
Optionally, the user operation may be the user spreading two fingers apart in the display area 400 to increase the display magnification; when the display magnification of the electronic device 100 is increased to not less than the preset magnification, the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 8b. The preset magnification may be, for example, 15x.
Optionally, FIG. 8a may also include a shortcut icon for triggering the target tracking mode; the user operation may be a tap on this shortcut icon, and upon receiving the tap the electronic device 100 triggers the target tracking mode and, for example, displays the interface shown in FIG. 8b.
As shown in FIG. 8b, the picture displayed in the preview box 411 is the image captured in real time by the camera. It can be seen that the image displayed in the preview box 411 covers a larger range than the image displayed at 5x (such as the display image of the display area 410 in FIG. 8b). At this point the picture displayed in the preview box 411 includes person 1 (complete), person 2, a tree, and a bell on the tree. The picture in the dashed box 411A within the preview box 411 is the picture displayed in the display area 410, indicating which part of the picture captured in real time by the camera is currently displayed in the display area 410. Since the current display magnification is 15x, the image displayed in the display area 410 at this point includes the tree, the bell on the tree, and the upper half of the body of person 2.
When the electronic device 100 detects a user operation on the dashed box 411A, the electronic device 100 enables the target tracking mode, and the electronic device 100 recognizes the target subject in the image currently captured by the camera (that is, the image displayed in the preview box 411). As shown in FIG. 8c, at this point the electronic device 100 recognizes two target objects, person 1 and person 2. The electronic device 100 displays a region 411B and a region 411C, where the region 411B indicates one target object, person 1, and the region 411C indicates the other target object, person 2. The electronic device 100 displays a guide caption 412 to prompt the user: "Tap a target object to choose the subject to track". When the user taps the region 411B, the electronic device 100 determines person 1 as the target subject; when the user taps the region 411C, the electronic device 100 determines person 2 as the target subject.
Exemplarily, when the user taps the region 411B, the electronic device 100 determines person 1 as the target subject and displays the user interface shown in FIG. 8d. The picture displayed in the preview box 411 is the original image captured in real time by the camera. The preview box 411 includes a solid box 411D, and the solid box 411D includes the target subject (person 1) recognized by the electronic device 100; the solid box 411D can be regarded as indicating the electronic device 100's tracking of the target subject, and when the target subject moves, the solid box 411D moves to follow the target subject. The picture in the solid box 411D is the picture displayed in the display area 430, indicating which part of the picture captured in real time by the camera is currently displayed in the display area 430.
It can be seen that the display area 420 in FIG. 8c displays the image captured by the camera magnified 15x, cropping the most central part of the captured image. At this point, when a tap on the shoot icon 215 is detected, the electronic device 100 obtains the image currently displayed in the display area 420. In FIG. 8d the target tracking mode is on; with the camera capturing the same image (the preview box 411 shows the same image in FIG. 8c and FIG. 8d), the image displayed in the display area 430 is the part of the captured image that includes the target subject (person 1). At this point, when a tap on the shoot icon 215 is detected, the electronic device 100 obtains the image currently displayed in the display area 430. In this way, a picture/video of the target subject can be obtained without moving the electronic device 100, achieving tracking of the target subject.
In some embodiments, based on the user interface provided by the camera application in FIG. 8a above, when the electronic device 100 receives a user operation on that user interface, the electronic device 100 triggers the target tracking mode, and the electronic device 100 may directly display the interface shown in FIG. 8c.
It can be understood that the manner in which the electronic device 100 determines the target subject among multiple target objects is not limited to the above exemplary manners; this application places no restriction on this.
The display interfaces involved in a target tracking method according to embodiments of this application have been introduced above. The following introduces the implementation principle of a target tracking method provided by this application. As shown in FIG. 9, FIG. 9 is a method flowchart of a target tracking method according to an embodiment of this application; the steps of the target tracking method may include:
S101. The electronic device 100 obtains image data and determines the target subject in the image data.
The electronic device 100 obtains image data. The electronic device 100 may obtain image data captured in real time through the camera; for example, the picture displayed in the preview box 221 in FIG. 4a is the image data captured in real time by the electronic device 100 through the camera. The electronic device 100 may also obtain image data sent by another device, or image data stored by the electronic device 100. Then, the electronic device 100 performs subject recognition on the obtained image data based on a target detection algorithm (which may also be an algorithm such as target recognition, tracking, or feature matching) and determines the target subject, or the electronic device 100 may determine the target subject in the image data based on a received user operation.
In some embodiments, the electronic device 100 displays a first interface on which the image captured by the camera of the electronic device 100 is displayed. Optionally, the first interface may be the preview interface of the camera application. The electronic device 100 receives a first operation on the first interface and, in response to the first operation, triggers the target tracking mode and determines the target subject in the obtained image data. The first operation may be a touch operation, a voice operation, a hover gesture operation, and so on, which is not limited here. For example, if the first interface is the preview interface of the camera application shown in FIG. 3b, the first operation may be a user operation of increasing the display magnification; the electronic device 100 then displays the interface shown in FIG. 4a and determines the target subject in the obtained image data, for example the person in the preview box 221 in FIG. 4a.
In some embodiments, the electronic device 100 captures image data in real time through the camera and performs subject recognition on the captured image data. The electronic device 100 performs target detection on the captured image through a target detection algorithm and obtains one or more target objects in the image; the one or more target objects may be people, animals, items, buildings, and so on. The electronic device 100 determines the target subject among the one or more target objects. It should be noted that when the electronic device 100 performs subject recognition on the captured image data, the parts of the captured image data other than the target subject are regarded as the background.
In some embodiments, after detecting the one or more target objects in the image, the electronic device 100 determines the target subject among the one or more target objects according to a preset rule, as sketched below. The preset rule may be the target object at the most central position in the currently captured image; the preset rule may also be the target object occupying the largest image area in the currently captured image; and so on.
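A minimal sketch of two such preset rules, assuming (for illustration only) that detections arrive as (x0, y0, x1, y1) pixel boxes:

```python
def pick_subject(detections, img_w, img_h, rule="most_central"):
    """Choose one subject among detected boxes (x0, y0, x1, y1).
    Sketches the two preset rules described above."""
    if rule == "most_central":
        cx, cy = img_w / 2, img_h / 2
        def dist_sq(b):
            bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
            return (bx - cx) ** 2 + (by - cy) ** 2
        return min(detections, key=dist_sq)   # box whose center is nearest the image center
    if rule == "largest":
        return max(detections, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    raise ValueError(f"unknown rule: {rule}")
```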
In some embodiments, when the electronic device 100 performs target detection on the obtained image through the target detection algorithm, the electronic device 100 analyzes multiple frames of images and determines the moving objects in the multiple frames; that is, the one or more target objects are moving objects. It should be noted that the parts of the captured image data other than the one or more target objects are regarded as the background.
In some embodiments, after detecting the one or more target objects in the image, the electronic device 100 provides the user with selection icons for the one or more target objects on the display interface; the electronic device 100 receives a user operation on the selection icons of the one or more target objects, parses the tapped position of the user operation, and determines the target subject among the one or more target objects. For example, as shown in FIG. 8c, the electronic device 100 detects two target objects in the image (person 1 and person 2); the electronic device 100 receives a user operation on the display region of person 1, parses the tapped position of the user operation, and the electronic device 100 determines that the target subject is person 1.
S102. The electronic device 100 determines the motion path of the target subject.
After performing subject recognition and determining the target subject, the electronic device 100 parses, based on multiple frames of images, the motion path formed by tracking the target subject; this motion path is also called the subject motion path.
Generally speaking, if the image frames captured by the electronic device 100 number N+2, the picture displayed by the electronic device 100 in the display area in real time is the N-th frame captured by the electronic device 100, where N is a positive integer. The multiple frames here refer to the N+2 frames; the electronic device 100 determines the motion path of the target subject based on the N+2 captured frames.
A motion path of a target subject is jointly described by a motion path in the X-axis direction and a motion path in the Y-axis direction; embodiments of this application take the motion path in the X-axis direction as an example. As shown in FIG. 10, FIG. 10 shows a polyline chart of the motion paths of the target subject and the background. In FIG. 10, the vertical axis is the pixel coordinate in the x-axis direction, and the horizontal axis is the number of image frames ordered by time t. Path ① in FIG. 10 is the motion path of the target subject in the x-axis direction; this motion path may be drawn based on a target point on the target subject. The electronic device 100 obtains the pixel coordinates of the target point on each frame of the original image and, based on the x-axis pixel coordinates of the target point on different image frames, determines the motion path of the target point in the x-axis direction across image frames as the motion path of the target subject in the X-axis direction.
Optionally, the motion path of the target subject may also be drawn based on a region on the original image, where the region is of fixed size and the image of the target subject is always within the region. The electronic device 100 obtains, on each frame, the pixel coordinates of a target point in this region (the center point of the region is taken as an example below) and, based on the x-axis pixel coordinates of the region's center point on different image frames, determines the motion path of the center point in the x-axis direction across image frames as the motion path of the target subject.
Optionally, the motion path of the target subject may also be drawn based on multiple target points on the target subject. The electronic device 100 obtains the pixel coordinates of the multiple target points on each frame of the original image and, based on their x-axis pixel coordinates on different image frames, determines the motion path of each of the multiple target points in the x-axis direction across image frames. The electronic device 100 weights the motion paths of the multiple target points and determines at least one motion path as the motion path of the target subject.
It should be noted that the above describes the motion path of the target subject in the x-axis direction; the principle for the motion path of the target subject in the y-axis direction is the same as that described for the x-axis direction and can be obtained in the same way, so details are not repeated here.
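A compact sketch of this per-axis path construction, assuming the pixel coordinates of one or more target points on the subject have already been extracted per frame; the weighting here is a plain weighted mean, one possible reading of the weighting described above:

```python
import numpy as np

def subject_path_x(points_per_frame, weights=None):
    """Build the subject's x-axis motion path from per-frame pixel coordinates.
    points_per_frame: array of shape (F, K, 2) - F frames, K target points, (x, y).
    Returns F x-coordinates; the y-axis path is built the same way on [..., 1]."""
    pts = np.asarray(points_per_frame, dtype=float)
    xs = pts[..., 0]                               # (F, K) x-coordinates
    if weights is None:                            # default: equal weights
        weights = np.full(xs.shape[1], 1.0 / xs.shape[1])
    return xs @ np.asarray(weights)                # weighted path, shape (F,)
```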
S103. The electronic device 100 determines the motion path of the background.
The electronic device 100 performs subject recognition on the captured image data; the parts of the captured image data other than the target subject are regarded as the background.
The electronic device 100 parses, based on multiple frames of images, the motion path formed by tracking the background; this motion path is also called the background motion path, where the multiple frames here refer to the N+2 frames.
A background motion path is jointly described by a motion path in the X-axis direction and a motion path in the Y-axis direction; embodiments of this application take the motion path in the X-axis direction as an example. As shown in FIG. 10, path ② in FIG. 10 is the motion path of the background in the x-axis direction; the background motion path may be drawn based on a target point in the background. The electronic device 100 obtains the pixel coordinates of the target point on each frame and, based on the x-axis pixel coordinates of the target point on different image frames, determines the motion path of the target point in the x-axis direction across image frames as the motion path of the background in the X-axis direction.
Optionally, the background motion path may also be drawn based on multiple target points in the background. The electronic device 100 obtains the pixel coordinates of the multiple target points on each frame and, based on their x-axis pixel coordinates on different image frames, determines the motion path of each of the multiple target points in the x-axis direction across image frames. The electronic device 100 weights the motion paths of the multiple target points and determines at least one motion path as the motion path of the background.
Optionally, the background motion path may also be drawn based on pose difference data between the previous frame and the current frame. The electronic device 100 measures the pose change of the electronic device 100 based on 6-degrees-of-freedom (6DoF) or 3DoF technology and converts the change in spatial position into a change in the pixel coordinates of the background motion path, thereby describing the motion path of the background.
The above describes the background motion path in the x-axis direction; the principle for the background motion path in the y-axis direction is the same as that described for the x-axis direction and can be obtained in the same way, so details are not repeated here.
It should be noted that steps S102 and S103 are not required to be performed in a particular order.
S104. The electronic device 100 determines the smooth path of the background based on the motion path of the target subject and the motion path of the background.
The electronic device 100 determines the smooth path of the background based on the motion path of the target subject and the motion path of the background. By driving the background motion based on this smooth path, the electronic device 100 can achieve stable tracking of the target subject.
First, the electronic device 100 determines, based on the motion path of the target subject, the difference polyline between the target subject and the guide point. The difference polyline may also be called a displacement guide line, used to indicate the displacement of the target subject from the guide point. Here, the guide point is the position where the electronic device 100 wants the target subject to be displayed in the output box. Generally, the electronic device 100 defaults the guide point to the center point of the output box; in some embodiments, the electronic device 100 can determine the guide point based on a received user operation.
The center and aspect ratio of the output box are fixed relative to the original image captured by the electronic device 100. The electronic device 100 sets the guide point in the output box; based on the difference polyline between the target subject and the guide point, the electronic device 100 can display the target point of the target subject at the guide point, so that the target subject is displayed in the region where the guide point is located.
The target point of the target subject may be any point on the target subject. The preset guide point of the electronic device 100 may be any point of the output box.
For example, the electronic device 100 can set the guide point at the center point of the output box; based on the difference polyline between the target point of the target subject and the center point of the output box, the electronic device 100 guides the target point of the target subject to be displayed at the center point of the output box, achieving the effect of always displaying the target subject in the central region of the output box. In embodiments of this application, the electronic device 100 can take any coordinate point in the output box as the guide point and always display the target point of the target subject at the guide point, achieving the effect of displaying the target subject at any position in the display area.
The difference polyline between the target subject and the preset guide point is the difference between the pixel coordinates of the target point of the target subject and the preset guide point. As shown in FIG. 11, path ③ in FIG. 11 is the pixel difference polyline in the X-axis direction between the target point of the target subject and the preset guide point. The electronic device 100 obtains the pixel coordinates of the target point on each frame and, based on the X-axis pixel coordinate difference between the target point and the preset guide point on different image frames, determines the difference polyline between the target subject and the preset guide point as shown by path ③.
Then, the electronic device 100 determines the smooth path of the background based on the difference polyline and the motion path of the background. As shown in FIG. 11, path ② is an exemplary background motion path; the electronic device 100 determines the smooth path of the background based on the difference polyline (path ③) and the background motion path (path ②). The smooth path must both remain smooth and tend toward the midpoint. It can be seen that at the two points f0 and f9, the point on the smooth path is the midpoint of path ③ and path ②; at the point f16, however, taking the midpoint of path ③ and path ② as the point on the smooth path would make the smooth path unsmooth, so to preserve smoothness, the smooth path at the point f16 is not taken at the midpoint. That is, the points on the smooth path are determined by tending toward the midpoint of path ③ and path ② on the premise of smoothness.
By driving the background motion based on this smooth path, the electronic device 100 can achieve stable tracking of the target subject.
It should be noted that the smooth path of the background includes a path in the X-axis direction and a path in the Y-axis direction. The above solving process is described taking motion in the X-axis direction as an example, and the smooth path obtained by the solve is the smooth path in the X-axis direction. In the Y-axis direction, the electronic device determines the motion path of the target subject and the motion path of the background and then determines the smooth path in the Y-axis direction based on the motion paths of the target subject and the background in the Y-axis direction; the principle is the same as solving the smooth path in the X-axis direction and is not repeated here.
In some embodiments, since the solve is for the smooth path in the X-axis direction, the electronic device 100 further optimizes the smooth path based on the output-image left boundary polyline and the output-image right boundary polyline. As shown in FIG. 11, path ④ in FIG. 11 is the output-image left boundary polyline and path ⑤ is the output-image right boundary polyline. Paths ④ and ⑤ are the pixel coordinates, on each image frame, of the left and right boundaries of the output box determined after the electronic device 100 warps the original image data. When determining the smooth path of the background, the electronic device 100 always restricts the smooth path of the background between the output-image left boundary polyline and the output-image right boundary polyline, avoiding the boundary of the output box going outside the boundary of the background.
Similarly, for solving the smooth path in the Y-axis direction, the electronic device 100 further optimizes the smooth path based on the output-image upper boundary polyline and the output-image lower boundary polyline.
In this implementation, based on the smooth path determined by the above method, the electronic device 100 drives the background to warp along this smooth path, achieving anti-shake and tracking simultaneously.
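The per-axis solve can be sketched as a small least-squares problem: a data term pulls each frame's value toward the midpoint of the background path and the difference polyline (the behavior at f0 and f9 in FIG. 11), a second-difference penalty keeps the path smooth (the behavior at f16), and the result is clamped between the boundary polylines. This is one plausible formulation for illustration, not necessarily the solver used on the device; weights and names are assumptions:

```python
import numpy as np

def solve_smooth_path(bg_path, diff_polyline, left_bound, right_bound,
                      w_mid=1.0, w_smooth=50.0):
    """Solve one per-axis smooth path for the background (all inputs length F)."""
    bg = np.asarray(bg_path, dtype=float)
    diff = np.asarray(diff_polyline, dtype=float)
    mid = (bg + diff) / 2.0                        # midpoint guide, as in FIG. 11
    F = len(mid)

    # Data term: p ~= mid (tracking); smoothness term: second differences ~= 0.
    I = np.eye(F)
    D2 = np.zeros((F - 2, F))
    for t in range(F - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]

    A = np.vstack([np.sqrt(w_mid) * I, np.sqrt(w_smooth) * D2])
    b = np.concatenate([np.sqrt(w_mid) * mid, np.zeros(F - 2)])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Keep the output box inside the original image (boundary polylines ④/⑤).
    return np.clip(p, left_bound, right_bound)
```

Raising `w_smooth` trades followability for smoothness, which is exactly the balance between the tracking and anti-shake aspects that this section describes.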
S105. The electronic device 100 warps the captured image based on the smooth path and outputs the warped image.
After determining the smooth path of the background, the electronic device 100, based on this smooth path, computes the transformation matrix from the background's original motion path to the solved smooth path, warps the original image onto the smooth path, and outputs the warped image. The output image is the image that achieves stable tracking of the target subject; at this point, the electronic device 100 has achieved anti-shake of the background and tracking of the target subject.
As shown in FIG. 12a, FIG. 12a shows the effect achieved after one image frame (the N-th frame) captured by the electronic device 100 is moved along the smooth path. The electronic device 100 captures an original image frame (the N-th frame) that includes a person (shown with solid lines); at this point, when no displacement guide line (difference polyline) participates in background smoothing, the background smooth path stays near the original path and the person cannot be displayed in the output box. When the electronic device 100 enters the target tracking mode, the electronic device 100 tracks the person based on the captured original image frames. When the electronic device receives an instruction to track the person, the electronic device 100 warps the original image frame; the dashed box is the image frame after warping the original image frame, and the dashed box also includes a person (shown with dashed lines) corresponding to the person in the original image frame. The electronic device 100 moves the image frame indicated by the dashed box along the smooth path solved in step S104, which can move the person into the output box, achieving smooth tracking of the person.
Similarly, as shown in FIG. 12b, the electronic device 100 captures the (N+1)-th frame and warps the (N+1)-th frame; the dashed box is the image frame after warping the (N+1)-th frame. The electronic device 100 moves the image frame indicated by the dashed box along the smooth path solved in step S104, which can move the person into the output box, achieving smooth tracking of the person. In FIG. 12a and FIG. 12b, it can be seen that the person is at the same position in the output box in the N-th frame and the (N+1)-th frame; that is, the preset guide point of the electronic device 100 is at the person's current position in the output box.
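A simplified per-frame rendering step, reduced to a pure-translation warp for readability (a real implementation would likely use a full projective warp; OpenCV and the fixed integer output-box coordinates are assumptions of this sketch):

```python
import cv2
import numpy as np

def render_frame(raw, bg_x, bg_y, smooth_x, smooth_y,
                 out_w, out_h, box_x0, box_y0):
    """Shift one raw frame from its original background path onto the solved
    smooth path, then cut out the fixed output box for display."""
    dx, dy = smooth_x - bg_x, smooth_y - bg_y
    M = np.float32([[1, 0, dx], [0, 1, dy]])       # 2x3 translation matrix
    warped = cv2.warpAffine(raw, M, (raw.shape[1], raw.shape[0]))
    return warped[box_y0:box_y0 + out_h, box_x0:box_x0 + out_w]
```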
In embodiments of this application, the electronic device 100 drives the background movement through the motion of the target subject to move the target subject into the output picture. When solving for the smooth path of the background, both anti-shake smoothing and tracking are accounted for simultaneously, so that tracking of the target subject and path smoothing of the background share the same widest cropping boundary (that is, the boundary of the original image data). In this way, the anti-shake space is not limited, a tracking effect is also provided at lower fields of view, and the problem that a subject visible in the original image becomes hard to track because the cropping boundary shrinks is avoided.
Moreover, as a method that needs to run on a mobile phone in real time, we cannot obtain much future-frame information and combine the image-stream motion paths of future frames to give the best path for the current frame. Especially in the photo preview mode of the camera application, only a limited number (for example, two) of future frames' image information can be obtained.
Generally speaking, if the image frames captured by the electronic device 100 number N+2, the picture displayed by the electronic device 100 in the display area in real time is the N-th frame captured by the electronic device 100, where N is a positive integer. Currently, because anti-shake and target tracking are two separate algorithms, if the electronic device 100 wants to perform both anti-shake and target tracking, the electronic device 100 first implements anti-shake processing of the N frames based on the N+2 frames, and can then only track the target subject based on the N frames; that is, the electronic device 100's tracking of the target subject can only use up to the N-th frame, which prevents the output picture from showing the N-th frame and causes delay in the output picture.
Embodiments of this application solve the target subject tracking and the background path anti-shake in parallel directly, so that the smoothing windows of anti-shake and tracking can share future-frame information, better balancing tracking performance and stability.
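One way to read this sharing of future frames is as a sliding-window re-solve whose window always ends at the two available future frames; the sketch below is an assumption about the scheduling, not a disclosed implementation, and reuses the least-squares formulation from the S104 sketch on the midpoint-guide values:

```python
import numpy as np

def smooth_online(history, lookahead, w_mid=1.0, w_smooth=50.0):
    """Causal variant of the per-axis solve: re-solve over past midpoint-guide
    values plus the limited lookahead (e.g. two future frames in photo preview)
    and emit the smoothed value for the newest displayable frame N."""
    window = np.concatenate([np.asarray(history, float),
                             np.asarray(lookahead, float)])
    F = len(window)
    I = np.eye(F)
    D2 = np.zeros((F - 2, F))
    for t in range(F - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    A = np.vstack([np.sqrt(w_mid) * I, np.sqrt(w_smooth) * D2])
    b = np.concatenate([np.sqrt(w_mid) * window, np.zeros(F - 2)])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p[len(history) - 1]                     # smoothed value for frame N
```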
In some embodiments, the electronic device 100 can capture the original image based on the telephoto camera, or capture the image based on the wide-angle camera; that is, the image captured by the electronic device 100 is a wide-angle image with a larger field of view. Based on the wide-angle image, the electronic device 100 performs subject recognition on the captured image through the target detection algorithm and determines the target subject. In this way, by obtaining a wide-angle image with a larger field of view, tracking over a larger range can be achieved.
The above embodiments describe how the electronic device 100 tracks a target subject when it recognizes one target subject. The following embodiments of this application describe how multiple target subjects are tracked when the electronic device 100 recognizes that the image captured by the camera includes multiple target subjects. As shown in FIG. 13, FIG. 13 is a method flowchart of another target tracking method according to an embodiment of this application; the steps of the target tracking method may include:
S201. The electronic device 100 performs subject recognition on the obtained image data and determines N target subjects, where N is a positive integer greater than 1.
The electronic device 100 obtains image data; the electronic device 100 may obtain image data captured in real time through the camera, image data sent by another device, or image data stored by the electronic device 100. Then, the electronic device 100 performs subject recognition on the obtained image data based on a target detection algorithm and determines N target subjects, or the electronic device 100 may determine the N target subjects based on a received user operation.
In some embodiments, the electronic device 100 captures image data in real time through the camera and performs subject recognition on the captured image data. The electronic device 100 performs target detection on the captured image through a target detection algorithm and obtains multiple target objects in the image; the multiple target objects may be people, animals, items, buildings, and so on. The electronic device 100 determines the N target subjects among the multiple target objects. The electronic device 100 may determine the N target subjects among the multiple target objects according to a preset rule. For example, the preset rule may be the target objects closest and second closest to the center in the currently captured image; the preset rule may also be the target objects occupying the largest and second largest image areas in the currently captured image; and so on.
In some embodiments, after detecting multiple target objects in the image, the electronic device 100 provides the user with selection icons for the multiple target objects on the display interface; the electronic device 100 receives a user operation on the selection icons of the multiple target objects, parses the tapped position of the user operation, and determines the N target subjects. For example, when the user taps the selection icons corresponding to two target objects, the electronic device 100 determines those two objects as two target subjects.
It should be noted that when the electronic device 100 performs subject recognition on the captured image data, the parts of the captured image data other than the target subjects are regarded as the background.
S202. The electronic device 100 determines the motion path set of the N target subjects.
After performing subject recognition and determining the N target subjects, the electronic device 100 determines the motion path of each of the N target subjects; for how the motion path of a target subject is determined, refer to the relevant description in step S102 above, which is not repeated here.
The electronic device 100 determines the motion path of each of the N target subjects and obtains the motion path set of the target subjects; the motion path set includes the motion path of each of the N target subjects.
S203. The electronic device 100 determines the motion path of the background.
Here, for the relevant description of step S203, refer to the description of step S103 above; it is not repeated here.
S204. The electronic device 100 determines the smooth path of the background based on the motion path set of the N target subjects and the motion path of the background.
The electronic device 100 determines the smooth path of the background based on the motion path set of the N target subjects and the motion path of the background. By driving the background motion based on this smooth path, the electronic device 100 can achieve stable tracking of the N target subjects.
First, the electronic device 100 determines, based on the motion path set of the N target subjects, the difference polyline between each target subject and the preset guide point, that is, N difference polylines. In some embodiments, the electronic device 100 takes a weighted average of the N difference polylines to obtain one difference polyline that optimizes the average distance between each target subject and the preset guide point.
In some embodiments, the electronic device 100 scores the predicted output images of the N difference polylines through composition scoring or aesthetic scoring, for example the closer the target subject's position in the output image is to the center, the higher the score; the electronic device 100 selects the highest-scoring polyline among the N difference polylines as the determined difference polyline.
In some embodiments, the electronic device 100 scores the N difference polylines according to the position and size of each target subject on the input image and the size of the final output image, for example the closer the target subject's size on the input image is to the size of the output image, the higher the score; the electronic device 100 selects the highest-scoring polyline among the N difference polylines as the determined difference polyline.
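The three strategies above (weighted averaging, composition/aesthetic scoring, size-match scoring) all reduce N difference polylines to one. A minimal sketch, with the scoring itself left to the caller since the scoring functions are not specified in detail:

```python
import numpy as np

def fuse_difference_polylines(polylines, weights=None, scores=None):
    """Reduce N per-subject difference polylines (each of length F) to one.
    With `scores` given (composition, aesthetics, or size match), keep the
    highest-scoring polyline; otherwise weighted-average them, which optimizes
    the average subject-to-guide-point distance."""
    P = np.asarray(polylines, dtype=float)         # shape (N, F)
    if scores is not None:
        return P[int(np.argmax(scores))]
    if weights is None:
        weights = np.full(P.shape[0], 1.0 / P.shape[0])
    return np.average(P, axis=0, weights=weights)
```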
Then the electronic device 100 determines the smooth path based on this difference polyline and the motion path of the background. Here, for how the electronic device 100 determines the smooth path based on the difference polyline and the background motion path, refer to the relevant description in step S104 above, which is not repeated here.
S205. The electronic device 100 warps the captured image based on the smooth path and outputs the warped image.
Here, for the relevant description of step S205, refer to the description of step S105 above; it is not repeated here.
In embodiments of this application, joint tracking of multiple target subjects is achieved through recognition of the multiple target subjects.
FIG. 14 is a step flow of a target tracking method according to an embodiment of this application.
Step S301. The electronic device 100 enables the target tracking mode.
For the way the electronic device 100 enables the target tracking mode, refer to the relevant descriptions of FIG. 3b and FIG. 4a above.
Step S302. The electronic device 100 determines, from the original image captured by the camera of the electronic device 100, the target subject to be tracked.
The electronic device 100 can determine the target subject through a target detection algorithm or through a received user operation. For the specific way the electronic device 100 determines the target subject to be tracked, refer to the relevant descriptions of FIG. 4a and FIG. 4b above.
Step S303. The electronic device 100 displays a first tracking picture in the display area, the first tracking picture including the target subject, the target subject being located at a first position in the viewfinder area of the camera.
The first tracking picture may be, for example, the picture shown in FIG. 4b, in which the target subject (the person) is located at the left-edge position (the first position) of the viewfinder area of the camera (the picture displayed in the preview box 221).
Step S304. When the target subject moves, the electronic device 100 displays a second tracking picture in the display area, the second tracking picture showing the target subject located at a second position in the viewfinder area of the camera.
The second tracking picture may be, for example, the picture shown in FIG. 4c, in which the target subject (the person) has moved to the right relative to FIG. 4b, and the second position is different from the first position.
In embodiments of this application, the electronic device 100 can perform target tracking on the captured original image. After the electronic device 100 determines the target subject in the original image, when the target subject moves from the first position to the second position, the display area of the electronic device 100 keeps the target subject in the displayed picture. Here, the original image captured by the camera refers to all the image data the camera can capture. Embodiments of this application track the target subject over the image range of the original image, achieving a larger tracking range.
In a possible implementation, the electronic device 100 does not move, and the viewfinder area of the camera is unchanged. The tracking of the target subject in embodiments of this application may be tracking a moving target subject while the electronic device 100 is stationary, which improves the stability with which the target subject is displayed in the output picture. Embodiments of this application may also track a moving target subject while the electronic device 100 is moving, keeping the target subject in the output picture; or track a stationary target subject while the electronic device 100 is moving, keeping the target subject in the output picture.
In a possible implementation, the original image has not undergone anti-shake or de-blurring processing, or the original image includes all objects within the viewfinder area of the camera. Embodiments of this application track the target subject over the image range of the original image, achieving a larger tracking range. Even when the target subject is located in an edge area of the original image, the electronic device 100 can still track the target subject.
In a possible implementation, the method further includes: the electronic device 100 displays the original image captured by the camera in a preview box, the preview box occupying part or all of the display area. The preview box may be a picture-in-picture box superimposed on the display area, so that the user can more intuitively compare the original image with the tracking picture in the display area. The preview box may also occupy the entire display area, for example first displaying the preview box over the entire display area and then superimposing the preview box on the display area. For example, the preview box may be the preview box 221 shown in FIG. 4a to FIG. 4f.
In a possible implementation, the electronic device 100 determining, from the original image captured by the camera of the electronic device 100, the target subject to be tracked includes: the electronic device 100 receives a first operation of the user in the preview box, the first operation indicating the target subject selected by the user; and the electronic device 100 determines, according to the first operation, the target subject to be tracked. The preview box displays the original image captured by the camera, so the image displayed in the preview box includes the target subject, and the electronic device 100 can determine the target subject based on the user's first operation on the preview box; the first operation may be, for example, a tap on the display position of the target subject. That is, the electronic device 100 can determine the target subject based on a user operation.
In a possible implementation, the electronic device 100 determining, from the original image captured by the camera of the electronic device 100, the target subject to be tracked includes: the electronic device 100 performs automatic target detection on the original image captured by the camera of the electronic device 100 based on a preset target detection algorithm to determine the target subject to be tracked. Here, the preset target detection algorithm may be directed at a specific category of objects, for example a target detection algorithm for detecting people, one for detecting animals, one for detecting objects, or one for detecting moving objects, and so on.
In a possible implementation, the preview box further includes an output box, and the image in the output box corresponds to the picture displayed in the display area. The output box indicates which area of the original image the picture currently displayed in the display area comes from. For example, the output box may be the dashed box 221B in FIG. 4a, the solid box 221C in FIG. 4b, the solid box 221D in FIG. 4c, the solid box 221E in FIG. 4d, the solid box 221F in FIG. 4e, or the solid box 221G in FIG. 4f.
In a possible implementation, the method further includes: the electronic device 100 determines a guide point in the output box, the guide point indicating the display position of the target subject; and the electronic device 100, according to the guide point, displays the target subject located at the first position in the first tracking picture, or displays the target subject located at the second position in the second tracking picture. Here, the guide point is used to determine the display position of the target subject in the output box. If the guide point is the center point of the output box, the target subject is displayed at the center of the first tracking picture and at the center of the second tracking picture. In this way, the electronic device 100 can stably display the target subject at the position of the guide point, achieving stable tracking. For example, the guide point in FIG. 4a to FIG. 4f is the center point of the output box, and the guide point in FIG. 7a to FIG. 7d is a point to the left of center in the output box.
In a possible implementation, the electronic device 100 determining the guide point in the output box includes: the electronic device 100 determines the guide point in the output box according to a default setting, or the electronic device 100 receives a second operation of the user, the second operation indicating the position, in the output box, of the guide point selected by the user. This provides ways for the electronic device 100 to determine the guide point through default settings or user operations.
In a possible implementation, the electronic device 100, according to the guide point, displaying the target subject located at the first position in the first tracking picture, or displaying the target subject located at the second position in the second tracking picture includes: the electronic device 100 determines the motion path of the target subject; the electronic device 100 determines, based on the motion path of the target subject, the difference polyline between the target subject and the guide point; the electronic device 100 determines the motion path of the background in the original image; the electronic device 100 determines the smooth path based on the motion path of the background and the difference polyline; the electronic device 100 warps the original image based on the smooth path; and the electronic device 100 displays the warped image in the display area, the picture displayed in the display area corresponding to the image in the output box. This describes the algorithmic principle by which the electronic device 100 stably tracks the target subject. Based on the idea of the foreground driving the background, the electronic device 100 drives the background motion through the motion of the target subject so that the target subject moves into the output picture. When solving for the smooth path of the background, the electronic device 100 determines the difference polyline between the motion path of the target subject and the guide point, and uses this difference polyline as a guiding reference term for background smoothing to obtain the smooth path of the background. Warping the original image captured by the electronic device 100 based on this smooth path enables the target subject to be stably displayed at the position of the guide point in the output box. The electronic device 100's single solve for the smooth path of the background simultaneously accounts for both anti-shake smoothing and tracking, so that tracking of the target subject and path smoothing of the background share the same widest cropping boundary (that is, the boundary of the original image). Tracking of the target subject is thus achieved at the same time as background smoothing, balancing the followability and smoothness of the tracking result.
In a possible implementation, the electronic device 100 enabling the target tracking mode includes: the electronic device 100 detects a third operation of the user, the third operation including an operation of increasing the zoom magnification or the user directly turning on a switch for the target tracking mode. This provides a way for the electronic device 100 to enable the target tracking mode based on a user operation.
In a possible implementation, the operation of increasing the zoom magnification indicates a display magnification selected by the user; the electronic device 100 displays the first tracking picture or the second tracking picture in the display area according to the display magnification.
In a possible implementation, the camera is a telephoto camera. Generally, at high display magnifications, the electronic device 100 uses a telephoto camera to capture image data.
In a possible implementation, the increased zoom magnification is greater than a preset magnification. The preset magnification may be, for example, 15x.
In a possible implementation, when the second position is an edge position of the viewfinder area of the camera, the second tracking picture includes the target subject. Because the electronic device 100's single solve for the smooth path of the background simultaneously accounts for both anti-shake smoothing and tracking, tracking of the target subject and path smoothing of the background share the same widest cropping boundary (that is, the boundary of the original image). In this way, even if the target subject is at an edge position in the original image, the electronic device 100 can still track it, solving the problem that, when the target subject is located at the edge of the shooting area of the electronic device 100, the electronic device 100 cannot track the target subject even though it has captured an image of the subject.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, it can be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of this application are generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, magnetic tapes), optical media (for example, DVDs), semiconductor media (for example, solid state drives), and the like.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and when the program is executed, it may include the processes of the foregoing method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, random access memory (RAM), magnetic disk, or optical disk.

Claims (19)

  1. A target tracking method, wherein the method comprises:
    an electronic device enables a target tracking mode;
    the electronic device determines, from an original image captured by a camera of the electronic device, a target subject to be tracked;
    the electronic device displays a first tracking picture in a display area, wherein the first tracking picture comprises the target subject, and the target subject is located at a first position in a viewfinder area of the camera;
    when the target subject moves, the electronic device displays a second tracking picture in the display area, wherein the second tracking picture shows the target subject located at a second position in the viewfinder area of the camera.
  2. The method according to claim 1, wherein the electronic device does not move, and the viewfinder area of the camera is unchanged.
  3. The method according to claim 1, wherein the original image has not undergone anti-shake or de-blurring processing, or the original image comprises all objects within the viewfinder area of the camera.
  4. The method according to any one of claims 1 to 3, wherein the method further comprises:
    the electronic device displays the original image captured by the camera in a preview box, the preview box occupying part or all of the display area.
  5. The method according to claim 4, wherein the electronic device determining, from the original image captured by the camera of the electronic device, the target subject to be tracked comprises:
    the electronic device receives a first operation of a user in the preview box, the first operation indicating the target subject selected by the user; and the electronic device determines, according to the first operation, the target subject to be tracked.
  6. The method according to claim 4, wherein the electronic device determining, from the original image captured by the camera of the electronic device, the target subject to be tracked comprises:
    the electronic device performs, based on a preset target detection algorithm, automatic target detection on the original image captured by the camera of the electronic device to determine the target subject to be tracked.
  7. The method according to claim 4, wherein the preview box further comprises an output box, and an image in the output box corresponds to the picture displayed in the display area.
  8. The method according to claim 7, wherein the method further comprises:
    the electronic device determines a guide point in the output box, the guide point indicating a display position of the target subject;
    the electronic device, according to the guide point, displays the target subject located at the first position in the first tracking picture, or displays the target subject located at the second position in the second tracking picture.
  9. The method according to claim 8, wherein the electronic device determining the guide point in the output box comprises:
    the electronic device determines the guide point in the output box according to a default setting, or the electronic device receives a second operation of the user, the second operation of the user indicating a position, in the output box, of the guide point selected by the user.
  10. The method according to claim 8, wherein the electronic device, according to the guide point, displaying the target subject located at the first position in the first tracking picture, or displaying the target subject located at the second position in the second tracking picture comprises:
    the electronic device determines a motion path of the target subject;
    the electronic device determines, based on the motion path of the target subject, a difference polyline between the target subject and the guide point;
    the electronic device determines a motion path of a background in the original image;
    the electronic device determines a smooth path based on the motion path of the background and the difference polyline;
    the electronic device warps the original image based on the smooth path;
    the electronic device displays the warped image in the display area, the picture displayed in the display area corresponding to the image in the output box.
  11. The method according to any one of claims 1 to 10, wherein the electronic device enabling the target tracking mode comprises:
    the electronic device detects a third operation of the user, the third operation comprising an operation of increasing a zoom magnification or the user directly turning on a switch for the target tracking mode.
  12. The method according to claim 11, wherein the operation of increasing the zoom magnification indicates a display magnification selected by the user; and the electronic device displays the first tracking picture or the second tracking picture in the display area according to the display magnification.
  13. The method according to claim 12, wherein the camera is a telephoto camera.
  14. The method according to claim 11 or 12, wherein the increased zoom magnification is greater than a preset magnification.
  15. The method according to any one of claims 1 to 14, wherein when the second position is an edge position of the viewfinder area of the camera, the second tracking picture comprises the target subject.
  16. An electronic device, comprising: one or more processors and one or more memories; the one or more memories are respectively coupled to the one or more processors; the one or more memories are configured to store computer program code, the computer program code comprising computer instructions; and when the computer instructions run on the processor, the electronic device is caused to perform the method according to claims 1 to 15.
  17. A computer-readable medium for storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs comprising instructions, and the instructions are used to perform the method according to claims 1 to 15.
  18. A target tracking method, wherein the method comprises:
    starting target tracking;
    capturing an original image;
    determining, from the original image, a target subject to be tracked;
    outputting information of a first tracking picture, wherein the first tracking picture comprises the target subject, and the target subject is located at a first position in a viewfinder area;
    outputting information of a second tracking picture, wherein the second tracking picture shows the target subject located at a second position in the viewfinder area.
  19. A camera module, comprising an input unit, an output unit, and at least one camera;
    the input unit is configured to start target tracking according to an instruction of an electronic device;
    the at least one camera is configured to capture an original image and determine, from the original image, a target subject to be tracked;
    the output unit is configured to output information of a first tracking picture, wherein the first tracking picture comprises the target subject, and the target subject is located at a first position in a viewfinder area; and output information of a second tracking picture, wherein the second tracking picture shows the target subject located at a second position in the viewfinder area.
PCT/CN2022/088093 2021-04-30 2022-04-21 Target tracking method and related apparatus WO2022228259A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22794730.6A EP4322518A4 (en) 2021-04-30 2022-04-21 TARGET TRACKING METHOD AND ASSOCIATED DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110486512.3A 2021-04-30 2021-04-30 Target tracking method and related apparatus
CN202110486512.3 2021-04-30

Publications (1)

Publication Number Publication Date
WO2022228259A1 true WO2022228259A1 (zh) 2022-11-03

Family

ID=83745817

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088093 WO2022228259A1 (zh) 2021-04-30 2022-04-21 一种目标追踪方法及相关装置

Country Status (3)

Country Link
EP (1) EP4322518A4 (zh)
CN (1) CN115278043B (zh)
WO (1) WO2022228259A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117714848A (zh) * 2023-08-11 2024-03-15 Honor Device Co., Ltd. Focus tracking method, electronic device, and readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867725A (zh) * 2009-01-23 2010-10-20 Casio Computer Co., Ltd. Imaging apparatus and subject tracking method
US20120268641A1 (en) * 2011-04-21 2012-10-25 Yasuhiro Kazama Image apparatus
JP2015119400A (ja) * 2013-12-19 2015-06-25 Canon Inc. Imaging apparatus, control method therefor, and control program
CN111010506A (zh) * 2019-11-15 2020-04-14 Huawei Technologies Co., Ltd. Shooting method and electronic device
CN113709354A (zh) * 2020-05-20 2021-11-26 Huawei Technologies Co., Ltd. Shooting method and electronic device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9661232B2 (en) * 2010-08-12 2017-05-23 John G. Posa Apparatus and method providing auto zoom in response to relative movement of target subject matter
CN106575027B (zh) * 2014-07-31 2020-03-06 Maxell, Ltd. Imaging device and subject tracking method therefor
US10659676B2 (en) * 2015-12-08 2020-05-19 Canon Kabushiki Kaisha Method and apparatus for tracking a moving subject image based on reliability of the tracking state
JP6979799B2 (ja) * 2017-06-06 2021-12-15 Rohm Co., Ltd. Camera and moving-image shooting method
CN112333380B (zh) * 2019-06-24 2021-10-15 Huawei Technologies Co., Ltd. Shooting method and device
CN114157804B (zh) * 2020-01-23 2022-09-09 Huawei Technologies Co., Ltd. Telephoto shooting method and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867725A (zh) * 2009-01-23 2010-10-20 Casio Computer Co., Ltd. Imaging apparatus and subject tracking method
US20120268641A1 (en) * 2011-04-21 2012-10-25 Yasuhiro Kazama Image apparatus
JP2015119400A (ja) * 2013-12-19 2015-06-25 Canon Inc. Imaging apparatus, control method therefor, and control program
CN111010506A (zh) * 2019-11-15 2020-04-14 Huawei Technologies Co., Ltd. Shooting method and electronic device
CN113709354A (zh) * 2020-05-20 2021-11-26 Huawei Technologies Co., Ltd. Shooting method and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4322518A4

Also Published As

Publication number Publication date
EP4322518A4 (en) 2024-10-09
EP4322518A1 (en) 2024-02-14
CN115278043B (zh) 2024-09-20
CN115278043A (zh) 2022-11-01

Similar Documents

Publication Publication Date Title
WO2021147482A1 (zh) Telephoto shooting method and electronic device
WO2021093793A1 (zh) Shooting method and electronic device
WO2022068537A1 (zh) Image processing method and related apparatus
KR102709021B1 (ko) Video shooting method and electronic device
KR102114377B1 (ko) Method for previewing images captured by an electronic device and electronic device therefor
CN113747085B (zh) Method and apparatus for shooting video
WO2021129198A1 (zh) Shooting method in a telephoto scenario and terminal
WO2021185250A1 (zh) Image processing method and apparatus
JP7495517B2 (ja) Image shooting method and electronic device
WO2022057723A1 (zh) Video anti-shake processing method and electronic device
WO2021219141A1 (zh) Photographing method, graphical user interface, and electronic device
JP2006303651A (ja) Electronic device
WO2022252660A1 (zh) Video shooting method and electronic device
WO2021185374A1 (zh) Image shooting method and electronic device
CN114845059B (zh) Shooting method and related device
CN115484380A (zh) Shooting method, graphical user interface, and electronic device
WO2024087804A1 (zh) Method for switching cameras and electronic device
WO2022142388A1 (zh) Special effect display method and electronic device
WO2022206589A1 (zh) Image processing method and related device
CN118435615A (zh) Image shooting method, device, storage medium, and program product
US20210243354A1 (en) Voice input apparatus, control method thereof, and storage medium for executing processing corresponding to voice instruction
CN116711316A (zh) Electronic device and operating method thereof
WO2022228259A1 (zh) Target tracking method and related apparatus
WO2022057384A1 (zh) Shooting method and apparatus
WO2023231697A1 (zh) Shooting method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794730

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022794730

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022794730

Country of ref document: EP

Effective date: 20231106

NENP Non-entry into the national phase

Ref country code: DE