WO2022082440A1 - Method, apparatus, system, device and storage medium for determining a target following strategy - Google Patents

Method, apparatus, system, device and storage medium for determining a target following strategy

Info

Publication number
WO2022082440A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
followed
determining
image
following strategy
Prior art date
Application number
PCT/CN2020/122234
Other languages
English (en)
French (fr)
Inventor
施泽浩
聂谷洪
王帅
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/122234 priority Critical patent/WO2022082440A1/zh
Priority to CN202080035340.1A priority patent/CN113841380A/zh
Publication of WO2022082440A1 publication Critical patent/WO2022082440A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present application relates to the technical field of target following, and in particular, to a method, apparatus, system, device and storage medium for determining a target following strategy.
  • In the related art, the target to be followed needs to be determined first.
  • The image is recognized by a target detection algorithm, and the target to be followed that is identified in the image is then followed.
  • However, the detection accuracy of the target detection algorithm is limited: it cannot quickly and accurately determine the target to be followed, and the target is easily lost while being followed.
  • When the recognition effect of the target detection algorithm is poor, following is unstable and the user experience suffers.
  • the embodiments of the present application provide a method, apparatus, system, device and storage medium for determining a target following strategy, aiming to quickly and accurately determine the target to be followed, and reliably follow the target to be followed to improve user experience.
  • An embodiment of the present application provides a method for determining a target following strategy, the method including:
  • acquiring an image collected by a photographing device; determining a target to be followed by detecting the image collected by the photographing device; determining, according to characteristics of the target to be followed, a following strategy corresponding to the target to be followed; and controlling, according to the following strategy, the photographing device to follow the target to be followed.
  • an embodiment of the present application further provides an apparatus for determining a target following strategy.
  • The apparatus for determining a target following strategy is communicatively connected to the photographing device.
  • The apparatus for determining a target following strategy includes a memory and a processor;
  • the memory is used to store a computer program;
  • the processor is configured to execute the computer program and, when executing it, implement the following steps:
  • controlling, according to the determined following strategy, the photographing device to follow the target to be followed.
  • an embodiment of the present application further provides a system for determining a target following strategy.
  • The system for determining a target following strategy includes a gimbal, a photographing device mounted on the gimbal, and the above-mentioned apparatus for determining a target following strategy.
  • An embodiment of the present application further provides a handheld gimbal. The handheld gimbal includes a handle portion, a gimbal connected to the handle portion, and the above-mentioned apparatus for determining a target following strategy; the gimbal is used to carry a photographing device, and the apparatus for determining the target following strategy is arranged on the handle portion.
  • An embodiment of the present application further provides a movable platform. The movable platform includes a platform body, a gimbal mounted on the platform body, and the above-mentioned apparatus for determining a target following strategy; the gimbal is used to carry a photographing device, and the apparatus for determining the target following strategy is arranged on the platform body.
  • An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the steps of the above-mentioned method for determining a target following strategy.
  • the embodiments of the present application provide a method, device, system, device and storage medium for determining a target following strategy.
  • a recognition result of the target to be followed is obtained.
  • The characteristics of the target to be followed and the following strategy corresponding to the target to be followed are determined, and the target is then followed according to that following strategy, greatly improving the user experience.
  • FIG. 1 is a schematic diagram of a scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of another scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of the steps of a method for determining a target following strategy provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of steps for determining a target object feature library provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an image displayed in a model image in an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of a sub-step of the method for determining the target following strategy in FIG. 3;
  • FIG. 7 is a schematic flowchart of a sub-step of the method for determining a target following strategy of FIG. 6;
  • FIG. 8 is a schematic flowchart of a sub-step of the method for determining a target following strategy in FIG. 3;
  • FIG. 9 is a schematic diagram of selecting a target to be followed from the image collected by the photographing device in FIG. 8;
  • FIG. 10 is a schematic flowchart of a sub-step of selecting a target to be followed from the images collected by the photographing device in FIG. 8;
  • FIG. 11 is a schematic flowchart of a sub-step of the method for determining a target following strategy in FIG. 3;
  • FIG. 12 is a schematic flowchart of a sub-step of the method for determining a target following strategy in FIG. 3;
  • FIG. 13 is a schematic flowchart of a sub-step of the method for determining a target following strategy in FIG. 3;
  • FIG. 14 is a schematic diagram of an image captured by a photographing device and displayed by a display device in an embodiment of the present application;
  • FIG. 15 is a schematic flowchart of a sub-step of the method for determining a target following strategy in FIG. 3;
  • FIG. 16 is a schematic structural block diagram of an apparatus for determining a target following strategy provided by an embodiment of the present application;
  • FIG. 17 is a schematic structural block diagram of a system for determining a target following strategy provided by an embodiment of the present application;
  • FIG. 18 is a schematic structural block diagram of a handheld gimbal provided by an embodiment of the present application;
  • FIG. 19 is a schematic structural block diagram of a movable platform provided by an embodiment of the present application.
  • In the related art, the target to be followed needs to be determined first.
  • The image is recognized by a target detection algorithm, and the target to be followed that is identified in the image is then followed.
  • However, the detection accuracy of the target detection algorithm is limited: it cannot quickly and accurately determine the target to be followed, and the target is easily lost while being followed.
  • When the recognition effect of the target detection algorithm is poor, following is unstable and the user experience suffers.
  • The embodiments of the present application provide a method, apparatus, system, device and storage medium for determining a target following strategy.
  • By detecting the image collected by the photographing device, a recognition result of the target to be followed is obtained.
  • According to the recognition result, the characteristics of the target to be followed and the corresponding following strategy are determined, and the target is followed according to that strategy; the whole process can quickly and accurately determine the target to be followed and follow it reliably, which greatly improves the user experience.
  • FIG. 1 is a schematic diagram of a scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application.
  • the scene includes a handheld gimbal 100 and a photographing device 200 mounted on the handheld gimbal 100 .
  • the handheld gimbal 100 includes a handle portion 101 and a gimbal 102 disposed on the handle portion 101 .
  • The gimbal 102 is used to mount the photographing device 200; the photographing device 200 may be integrated with the gimbal 102, or may be externally connected to it.
  • The photographing device 200 may be a smartphone or a camera, such as a single-lens reflex camera or a video camera.
  • The handheld gimbal 100 can carry the photographing device 200 to fix it and change its height, inclination and/or direction, or to stably keep it in a certain posture and control it to shoot.
  • The handheld gimbal 100 is communicatively connected to the photographing device 200, for example through a control line such as a shutter release cable.
  • The type of the shutter release cable is not limited here; for example, it may be a Universal Serial Bus (USB) cable.
  • The handheld gimbal 100 can also be communicatively connected to the photographing device 200 wirelessly.
  • the pan/tilt head 102 includes three-axis motors, and the three-axis motors are a pitch (pitch) axis motor 1021, a yaw (yaw) axis motor 1022, and a roll (roll) axis motor (not shown in FIG. 1 ), respectively.
  • the three-axis motor is used to adjust the balance posture of the photographing device 200 mounted on the gimbal 102 so as to photograph a stable and smooth picture.
  • The gimbal 102 is also provided with an inertial measurement unit (IMU), for example at least one of an accelerometer or a gyroscope, which can be used to measure the attitude and acceleration of the gimbal 102 so as to adjust its posture.
  • The handle portion 101 is also provided with an inertial measurement unit (IMU), for example including at least one of an accelerometer or a gyroscope, which can be used to measure the attitude and acceleration of the handle portion 101, so that the posture of the gimbal 102 can be adjusted according to the posture of the handle portion 101 and the posture of the gimbal 102.
  • the handheld pan-tilt 100 includes a processor, and the processor is configured to process input control instructions, or send and receive signals.
  • the processor may be provided inside the handle portion 101 .
  • The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the handheld gimbal 100 includes multiple working modes, including, for example, a follow mode, a target tracking mode, a lock mode, a motion mode, and/or a sleep mode, and the like.
  • the handheld gimbal 100 performs different actions in different working modes.
  • The follow mode is used to control the photographing device 200 to follow and shoot, and may refer to the shooting mode in which the gimbal 102 follows the movement of the handle portion 101. If the handheld gimbal 100 is in the target tracking mode, after the target to be followed is determined, the gimbal 102 starts to rotate automatically so that the angle of the photographing device 200 always follows the target to be followed and keeps it in the captured image.
  • the lock mode means that the three axes of the gimbal 102 are locked, and the three axes of the gimbal do not follow;
  • the motion mode means that the gimbal 102 follows at a preset speed, such as the maximum speed of the three axes of the gimbal;
  • The sleep mode means controlling the handheld gimbal to enter a sleep state.
  • the following object of the gimbal may be the handle part 101, the target to be followed, or others, which can be set as required, which is not specifically limited here.
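The working modes above can be summarized as a small mode table. The following Python sketch is purely illustrative: the enum names and the judgment of which modes involve the axes actively following are inferred from the description, not taken from the patent.

```python
from enum import Enum, auto

class GimbalMode(Enum):
    """Working modes described above (names are illustrative)."""
    FOLLOW = auto()           # gimbal follows the motion of the handle portion
    TARGET_TRACKING = auto()  # gimbal rotates automatically to keep the target in frame
    LOCK = auto()             # all three axes are locked and do not follow
    MOTION = auto()           # gimbal follows at a preset (e.g. maximum) speed
    SLEEP = auto()            # gimbal enters the sleep state

def axes_follow(mode: GimbalMode) -> bool:
    """Whether the three gimbal axes actively follow in the given mode."""
    return mode in (GimbalMode.FOLLOW, GimbalMode.TARGET_TRACKING, GimbalMode.MOTION)

print(axes_follow(GimbalMode.LOCK))    # False
print(axes_follow(GimbalMode.FOLLOW))  # True
```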
  • The method for determining the target to be followed may be: the processor of the handheld gimbal 100 obtains the image collected by the photographing device 200, detects the image, and determines the target to be followed from it; after the characteristics of the target to be followed and the following strategy corresponding to it are identified, the photographing device 200 follows the target according to that following strategy.
  • The target to be followed may also be determined by the user through an operation such as frame selection or click selection on the handheld gimbal 100, or according to a specific posture in the image captured by the photographing device 200, or by using the position of the target in the image captured by the photographing device 200. There is no specific limitation here.
  • the handle portion 101 is further provided with a control key, so that the user can operate the control key to control the pan/tilt head 102 or the photographing device 200 .
  • the control key may be, for example, a key, a trigger, a knob or a joystick, etc., of course, other forms of physical keys or virtual keys are also included.
  • the virtual keys may be virtual buttons provided on the touch screen for interacting with the user.
  • the joystick can be used to control the movement of at least one rotating shaft, and then control the movement of the photographing device 200 . It will be appreciated that the joystick can also be used for other functions. It can be understood that the number of control keys may be one or more.
  • When the number of control keys is one, different operation modes of the control key can be used to generate different control instructions, for example different numbers of presses; when there are multiple control keys, for example a first control key, a second control key, a third control key, and so on, different control keys generate different control instructions.
  • the control key includes a follow control key, and the follow control key is used to control the handheld gimbal 100 to start or exit the target tracking mode.
  • In response to a first pressing operation of the follow control key, the handheld gimbal 100 is controlled to be in the target tracking mode, and the image collected by the photographing device 200 is acquired.
  • A target to be followed is determined in the collected image, the characteristics of the target and the following strategy corresponding to it are identified, and the photographing device 200 follows the target according to that strategy, so that by pressing the follow control key the user can quickly put the handheld gimbal 100 into the target tracking mode and track the target to be followed.
  • The handheld gimbal 100 further includes a display device, and the display device is used to display the image captured by the photographing device 200.
  • The processor controls the display device to display the image collected by the photographing device 200 and identifies candidate targets in the displayed image; the following priority of each candidate target in the image is determined according to the recognition result; in response to a second pressing operation of the follow control key, the target to be followed is re-determined according to the following priorities of the candidate targets, and the re-determined target is marked in the image. This makes it convenient for the user to switch the target to be followed via the follow control key.
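The press-to-switch behaviour above can be sketched as a priority-ordered cycle through the recognized candidates. This is a minimal illustration assuming candidates arrive as hypothetical (id, priority) pairs; the patent does not specify the data shapes.

```python
def next_target(candidates, current_id=None):
    """Pick the target to follow from recognized candidates.

    `candidates` is a list of (target_id, priority) pairs produced by the
    recognition step; a higher priority is followed first. A repeated press
    of the follow control key re-determines the target by cycling to the
    next candidate in priority order. (Hypothetical shapes, not from the patent.)
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    ids = [cid for cid, _ in ranked]
    if current_id not in ids:
        return ids[0]                                    # first press: top priority
    return ids[(ids.index(current_id) + 1) % len(ids)]   # next press: cycle onward

cands = [("person_a", 0.9), ("person_b", 0.6), ("dog", 0.3)]
t1 = next_target(cands)       # "person_a"
t2 = next_target(cands, t1)   # "person_b"
```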
  • FIG. 2 is a schematic diagram of another scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application.
  • the scene includes a control terminal 300 and a movable platform 400 , the control terminal 300 is connected in communication with the movable platform 400 , the control terminal 300 includes a display device 310 , and the display device 310 is used to display an image sent by the movable platform 400 .
  • The display device 310 includes a display screen disposed on the control terminal 300 or a display independent of the control terminal 300; the independent display may be a mobile phone, a tablet computer, a personal computer, or other electronic equipment with a display screen.
  • the display screen includes an LED display screen, an OLED display screen, an LCD display screen, and the like.
  • the movable platform 400 includes a platform body 410, a pan/tilt 420 mounted on the platform body, and a power system 430.
  • the pan/tilt 420 is used to carry the photographing device 500
  • The power system 430 includes a motor 431 and a propeller 432; the motor 431 is used to drive the propeller 432 to rotate so as to provide moving power for the movable platform.
  • The gimbal 420 includes three-axis motors, namely a translation axis motor 421, a pitch axis motor 422, and a roll axis motor 423, which are used to adjust the balance posture of the photographing device 500 mounted on the gimbal 420 so as to capture high-precision, stable pictures anytime, anywhere.
  • the movable platform 400 further includes a processor, and the processor is configured to process the input control instructions, or to send and receive signals.
  • the processor may be located inside the movable platform 400 .
  • The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • control terminal 300 includes a follow-up control button, and the follow-up control button is used to control the movable platform 400 to start or exit the target tracking mode.
  • When the movable platform 400 is in the target tracking mode, the movable platform 400 can control the gimbal 420 to move so that the angle of the photographing device 500 always follows the target to be followed and keeps it in the captured picture.
  • The control terminal 300 generates a target-following start instruction in response to the user's first pressing operation on the follow control key and sends it to the movable platform 400; the movable platform 400 receives the instruction and transmits it to the processor, and the processor controls the movable platform 400 to be in the target tracking mode according to the instruction, acquires the image collected by the photographing device 500, detects the image, and determines the target to be followed from it.
  • the movable platform includes movable robots, unmanned aerial vehicles and unmanned vehicles, etc.
  • The movable platform 400 is an unmanned aerial vehicle, and the power system 430 enables the drone to take off vertically from the ground or land vertically on the ground without any horizontal movement (e.g., no taxiing on a runway).
  • The power system 430 may allow the drone to hover in the air at a preset position and/or orientation.
  • One or more of the power systems 430 may be controlled independently of the other power systems 430 .
  • one or more power systems 430 may be controlled simultaneously.
  • The drone may have multiple horizontally oriented power systems 430 to provide lift and/or thrust while tracking the target.
  • the horizontally oriented power system 430 can be actuated to provide the drone with the ability to take off vertically, land vertically, and hover.
  • one or more of the horizontally oriented power systems 430 may rotate in a clockwise direction, while one or more of the other horizontally oriented power systems may rotate in a counter-clockwise direction.
  • The rotational rate of each horizontally oriented power system 430 can be varied independently to control the lift and/or thrust produced by each power system, so as to adjust the spatial orientation, speed and/or acceleration of the drone (e.g., rotation and translation with respect to up to three degrees of freedom).
  • The drone may also include a sensing system, which may include one or more sensors to sense the spatial orientation, velocity and/or acceleration of the drone (e.g., rotation and translation with respect to up to three degrees of freedom), angular acceleration, attitude, position (absolute or relative), etc.
  • the one or more sensors include GPS sensors, motion sensors, inertial sensors, proximity sensors, or image sensors.
  • the sensing system can also be used to collect data on the environment in which the UAV is located, such as climatic conditions, potential obstacles to be approached, locations of geographic features, locations of man-made structures, and the like.
  • The drone may include landing gear.
  • The landing gear is the part of the drone that contacts the ground when the drone lands.
  • The landing gear may be retracted while the drone is in flight (for example, when the drone is cruising) and lowered only for landing; it may also be fixedly mounted on the drone and remain lowered at all times.
  • The movable platform 400 can communicate with the control terminal 300 to realize data interaction between them, such as movement control of the movable platform 400 and control of its payload (when the payload is the photographing device 500, the control terminal 300 can control the photographing device 500). The control terminal 300 can communicate with the movable platform 400 and/or the payload; the communication between them can be wireless, and direct communication can be provided between the movable platform 400 and the control terminal 300. Such direct communication does not require any intermediary device or network.
  • indirect communication may be provided between the movable platform 400 and the control terminal 300 .
  • Such indirect communication may take place by means of one or more intermediaries or networks.
  • indirect communication may utilize a telecommunications network.
  • Indirect communication may take place by means of one or more routers, communication towers, satellites, or any other intermediary device or network.
  • Examples of types of communication may include, but are not limited to, communication via the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), Bluetooth, Near Field Communication (NFC), mobile data protocols such as General Packet Radio Service (GPRS), Enhanced Data rates for GSM Evolution (EDGE), 3G, 4G, or Long Term Evolution (LTE), infrared (IR) communication, and/or Wi-Fi; the communication may be wireless, wired, or a combination thereof.
  • GPRS: General Packet Radio Service
  • EDGE: Enhanced Data rates for GSM Evolution
  • 3G: third-generation mobile communications
  • 4G: fourth-generation mobile communications
  • LTE: Long Term Evolution
  • control terminal 300 may include but not limited to: smart phone/mobile phone, tablet computer, personal digital assistant (PDA), desktop computer, media content player, video game station/system, virtual reality system, augmented reality system, wearable devices (eg, watches, glasses, gloves, headwear (eg, hats, helmets, virtual reality headsets, augmented reality headsets, head mounted devices (HMDs), headbands), pendants, armbands, leg loops, shoes, vest), gesture recognition device, microphone, any electronic device capable of providing or rendering image data, or any other type of device.
  • the control terminal 300 may be a handheld terminal, and the control terminal 300 may be portable.
  • the control terminal 300 may be carried by a human user. In some cases, the control terminal 300 may be remote from the human user, and the user may control the control terminal 300 using wireless and/or wired communication.
  • FIG. 1 or FIG. 2 are only used to explain the method for determining the target following strategy provided by the embodiment of the present application, but do not constitute a limitation on the application scenarios of the method for determining the target following strategy provided by the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of steps of a method for determining a target following strategy provided by an embodiment of the present application.
  • the method for determining a target following strategy includes steps S301 to S307 .
  • Step S301: Obtain an image collected by a photographing device.
  • Step S303: Detect the image collected by the photographing device, and determine the target to be followed from the image.
  • Step S305: Determine, according to the characteristics of the target to be followed, a following strategy corresponding to the target to be followed.
  • Step S307: Control, according to the following strategy, the photographing device to follow the target to be followed.
  • the image collected by the photographing device is acquired, the target to be followed is determined in the image collected by the photographing device, the characteristics of the target to be followed are identified, and the following strategy corresponding to the target to be followed is determined , and according to the following strategy of the target to be followed, the target to be followed is followed, thereby improving the accuracy and reliability of following for different targets to be followed.
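Steps S301 to S307 can be sketched as a small pipeline. The detector and classifier below are stand-in callables, since the patent leaves those components abstract; the data shapes and strategy table are invented for illustration.

```python
def determine_following_strategy(image, detect, classify, strategies):
    """Sketch of steps S301-S307 (arguments are placeholders for the
    detection and classification components the patent leaves abstract).

    S301: an image collected by the photographing device is passed in.
    S303: the target to be followed is determined by detecting the image.
    S305: a following strategy is chosen from the target's characteristics.
    S307: the chosen strategy is returned so the device can follow the target.
    """
    target = detect(image)               # S303: e.g. a bounding box + label
    if target is None:
        return None, None                # nothing to follow in this frame
    features = classify(target)          # S305: e.g. "infant" or "adult"
    return target, strategies[features]  # S307: strategy drives the following

# Toy stand-ins for the detector and classifier:
strategies = {"infant": {"speed": "slow"}, "adult": {"speed": "fast"}}
target, strategy = determine_following_strategy(
    image="frame-0",
    detect=lambda img: {"box": (10, 10, 50, 80), "label": "infant"},
    classify=lambda t: t["label"],
    strategies=strategies,
)
```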
  • the target determination device is used for communication and connection with a photographing device
  • the photographing device may be a smartphone, or a camera, such as a single-lens reflex camera, or a camera.
  • the image collected by the photographing device may be an original image or a processed image.
  • the image can be processed by denoising the image or by adding a filter to the image.
  • the target to follow can be an infant or an adult.
  • the following strategy may be an infant following strategy or an adult following strategy, wherein the following speed of the infant following strategy is slower than that of the adult following strategy.
  • the target to be followed and the following strategy can be set based on the actual situation.
  • The image collected by the photographing device is detected, and the target to be followed is determined from it; when it is determined that the image contains a first target object to be followed, the following strategy corresponding to the first target object is determined according to the features of the first target object, and the target to be followed is followed accordingly.
  • the feature of the target to be followed may be the facial feature of the target to be followed; and/or the contour of the target to be followed; and/or the motion attribute of the target to be followed.
  • the feature of the target to be followed may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • The feature of the target to be followed may be determined according to its facial features. For example, when the distance between the eyes of the target in the collected image is small, it can be determined that the target to be followed is an infant; when that distance is large, it can be determined that the target to be followed is an adult.
  • the facial feature of the target to be followed may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • The feature of the target to be followed may also be determined according to the body contour of the target.
  • For example, when the body contour of the target to be followed in the captured image shows the limbs stretched out, it can be determined that the target to be followed is an adult.
  • The body contour of the target to be followed may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • The characteristics of the target to be followed may also be determined according to its motion attributes. For example, when the target to be followed moves slowly, it is relatively likely to be an infant; when the target in the collected images moves vigorously, it is relatively likely to be an adult.
  • the motion attribute of the target to be followed may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
• for example, the image collected by the photographing device includes a target infant to be followed; the image is detected, the target infant to be followed is determined from it, and, according to the characteristics of the target infant to be followed, the following strategy corresponding to the target infant is determined as the infant following strategy.
  • the infant following strategy may be to follow the target infant at a relatively slow following speed according to the image currently collected by the photographing device, the image previously collected by the photographing device, and the image subsequently collected by the photographing device.
• the infant following strategy may also be, according to the image currently collected by the photographing device, the image previously collected, and the image subsequently collected, to precisely adjust the photographing device to the position of the target to be followed once the current image of the target is obtained.
  • the image collected by the photographing device includes the target adult to be followed.
• the target adult to be followed is determined from the image collected by the photographing device, and, according to the characteristics of the target adult to be followed, the following strategy corresponding to the target adult is determined as the adult following strategy.
  • the adult following strategy may be to follow the target adult at a relatively fast following speed according to the image currently collected by the shooting device, the image previously collected by the shooting device, and the image subsequently collected by the shooting device.
• the adult following strategy may also be, based on the images currently, previously, and subsequently collected by the photographing device, to calculate the motion state of the target to be followed from the currently and previously collected images before the target moves, calculate the subsequent position of the target according to that motion state, fine-tune the subsequent position, and precisely adjust the photographing device to the position of the target once the currently collected image is obtained.
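As an illustration only, the category-dependent strategy selection described above can be sketched as follows; the `classify_target` helper, the eye-distance threshold, and the strategy fields are hypothetical assumptions for this sketch, not details disclosed by the application.

```python
# Hypothetical sketch: classify the target to be followed from a simple facial
# feature (eye distance in the collected image) and map the resulting category
# to a following strategy. The threshold and field names are illustrative only.

def classify_target(eye_distance_px: float, threshold_px: float = 30.0) -> str:
    """A small eye distance suggests an infant; a large one suggests an adult."""
    return "infant" if eye_distance_px < threshold_px else "adult"

def following_strategy(category: str) -> dict:
    """Infants are followed at a relatively slow speed; adults at a relatively
    fast speed, with predictive adjustment of the photographing device."""
    strategies = {
        "infant": {"speed": "slow", "predict_motion": False},
        "adult": {"speed": "fast", "predict_motion": True},
    }
    return strategies[category]

print(following_strategy(classify_target(20.0)))  # infant: slow following
```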
  • FIG. 4 is a schematic flowchart of steps for determining a target object feature library provided by an embodiment of the present application.
  • the determining of the target object feature library includes steps S401 to S405.
  • Step S401 in the model image, determine a first target object.
  • the model image is an image used to determine the target object. That is, the image containing the first target object may be a model image.
  • the first target object may be an infant or an adult.
  • determining the first target object includes determining an attribute of the first target object and/or an image area corresponding to the first target object.
  • the attribute of the first target object is used to represent the category of the first target object.
  • the attribute of the first target object may be an infant.
  • the attribute of the first target object may also be an adult.
• the present application is not limited thereto, and the category of the first target object may also include other categories.
  • the attributes of the first target object may further include: middle-aged, elderly, children, and the like.
  • the image area corresponding to the first target object may be an area where the first target object appears in the image.
  • the position identifier may be used to indicate the image area corresponding to the first target object in the model image.
  • the above-mentioned position identification indicating the image area corresponding to the first target object in the model image includes displaying a rectangular frame and/or an identification icon in the area where the first target object is located.
  • the model image includes a first target object 501 , a second target object 503 and a background 505 .
  • a rectangular frame 507 is displayed in the area where the first target object 501 is located.
  • the manner of identifying the first target object may be designed based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • the attribute of the first target object is determined to be an infant.
  • the attribute of the first target object may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • Step S403 extract the features of the first target object.
• the feature of the first target object may be the facial feature of the first target object; and/or the body contour of the first target object; and/or the motion attribute of the first target object.
  • the feature of the first target object may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
• for example, a feature of the first target object is extracted, such as the movement attributes of an infant.
  • the feature of the first target object may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
• for example, a feature of the first target object is extracted, such as the movement attributes of an adult.
  • the feature of the first target object may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • Step S405 establishing a feature library about the first target object according to the extracted features.
• the feature library of the first target object may include the facial features of the first target object; and/or the body contour of the first target object; and/or the motion attributes of the first target object.
• for example, a feature of the first target object is extracted, and a feature library of the infant first target object is established; the feature library includes: the facial features of the infant; and/or the body contour of the infant; and/or the motion attributes of the infant.
  • the content included in the feature library may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
• the feature of the first target object is extracted, and a feature library of the adult first target object is established; the feature library includes: the facial features of the adult; and/or the body contour of the adult; and/or the motion attributes of the adult.
  • the content included in the feature library may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
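A minimal sketch of such a feature library, assuming a simple per-category dictionary; the field names and values are hypothetical stand-ins for whatever representation the extracted features actually use.

```python
# Hypothetical sketch of steps S401-S405: for each first target object, the
# extracted facial features, body contour, and motion attributes are stored
# under its category. Field names are illustrative assumptions.

feature_library = {}

def add_to_feature_library(category, facial_features=None,
                           body_contour=None, motion_attributes=None):
    """Establish (or extend) the feature library entry for one category,
    keeping only the feature kinds that were actually extracted."""
    entry = {
        "facial_features": facial_features,
        "body_contour": body_contour,
        "motion_attributes": motion_attributes,
    }
    feature_library[category] = {k: v for k, v in entry.items() if v is not None}

add_to_feature_library("infant",
                       facial_features=[0.1, 0.4],  # e.g. normalized eye distance
                       motion_attributes="slow")
print(feature_library["infant"])
```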
  • the above target object feature library needs to be determined.
  • FIG. 6 is a schematic flowchart of a sub-step of the method for determining a target following strategy in FIG. 3 .
  • the method for determining a target following strategy may include sub-steps S601 to S605 after determining the target object feature library.
  • Sub-step S601 determine the characteristics of the first target object.
  • the characteristics of the first target object need to be determined.
• the features may include the infant's facial features; and/or the infant's body contour; and/or the infant's movement attributes.
  • the first target object may be set based on an actual situation, which is not specifically limited in this embodiment of the present application.
• Sub-step S603 according to the characteristics of the first target object, determine whether there is a first candidate target object in the model image.
  • the first candidate target object has the characteristics of the first target object.
  • Sub-step S605 If the model image includes the first candidate target object, determine the probability that the first candidate target object is the first target object, so as to optimize the method for determining the target following strategy.
• for example, the features of the first target object may be the infant's facial features; and/or the infant's body contour; and/or the infant's movement attributes. According to the above features, it is determined whether there is a first candidate target object having the characteristics of the first target object in the model image. When the first candidate target object exists in the model image, the probability that the first candidate target object is an infant is determined.
  • determining the probability that the first candidate target object is the first target object to optimize the method for determining target following includes sub-steps S701 to S705 .
  • Sub-step S701 Determine a first probability that the first candidate target object and the first target object are of the same category.
  • Sub-step S703 Determine a second probability that the first candidate target object is at the first position.
• Determining the probability that the first candidate target object is the first target object may be: determining a first probability that the first candidate target object and the first target object are of the same category, wherein the first probability may be based on the degree of consistency between the features of the first candidate target object and the features of the first target object; and determining a second probability that the first candidate target object is at a first position, wherein the first position may be the position of the first target object in the image, and the second probability may be based on the distance between the position of the first candidate target object in the image and the first position.
• the first probability of the first candidate target object may be determined according to a first preset mapping relationship and the degree of consistency between the features of the first candidate target object and the features of the first target object; the first preset mapping relationship includes the first probabilities corresponding to different degrees of consistency. For example, degrees of consistency of 60%, 70%, 90% and 95% correspond to first probabilities of 60, 70, 90 and 95 points respectively; therefore, when the degree of consistency between the features of the first candidate target object and the features of the first target object is 95%, the first probability that the first candidate target object and the first target object are of the same category is 95 points.
• the second probability that the first candidate target object is at the first position may be determined according to a second preset mapping relationship and the distance between the position of the first candidate target object in the image and the first position, i.e., the position of the first target object in the image; the second preset mapping relationship includes the second probabilities corresponding to different distances. For example, distances of 0.5 cm, 1 cm and 1.5 cm correspond to second probabilities of 95, 90 and 85 points respectively; therefore, if the distance between the position of the first candidate target object in the image and the first position is 0.5 cm, the second probability that the first candidate target object is at the first position is 95 points.
  • Sub-step S705 Obtain a prediction result of whether the first candidate target object is the first target object according to the first probability and the second probability.
• for example, the method of determining the prediction result of whether the first candidate target object is the first target object may be: if only the first probability that the first candidate target object and the first target object are of the same category is considered, the first probability is determined as the prediction result; if only the second probability that the first candidate target object is at the first position is considered, the second probability is determined as the prediction result; if both the first probability and the second probability are considered, the sum of the first probability and the second probability is determined as the prediction result of whether the first candidate target object is the first target object.
• alternatively, the product of a first preset weight and the first probability is calculated, and the product of a second preset weight and the second probability is calculated; the sum of the two products is determined as the prediction result of whether the first candidate target object is the first target object, wherein the sum of the first preset weight and the second preset weight is 1.
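The weighted combination described above can be sketched as follows; the example weights 0.6 and 0.4 are assumptions chosen only to satisfy the stated constraint that the two preset weights sum to 1.

```python
# Hypothetical sketch of the weighted prediction result: a weighted sum of the
# first probability (same category) and the second probability (at the first
# position). The weights must sum to 1 per the description.

def prediction_result(first_prob: float, second_prob: float,
                      w1: float = 0.6, w2: float = 0.4) -> float:
    """Return the prediction result as a weighted sum of the two probabilities."""
    assert abs(w1 + w2 - 1.0) < 1e-9  # the two preset weights must sum to 1
    return w1 * first_prob + w2 * second_prob

# With the example scores from the text (95 and 90 points), the weighted
# prediction result is about 93 points.
print(prediction_result(95, 90))
```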
  • the method for determining the target following strategy can be optimized.
  • the objective function of the method for determining the target following strategy may be obtained according to the first probability, the second probability and the prediction result, and the above method for determining the target following strategy is updated according to the objective function.
  • the above objective function may be a function representing the reliability of the method for determining the target following strategy
  • the above objective function may also be a function for revising the method for determining the target following strategy
• the above objective function may also be used to optimize the method for determining the target to be followed.
• the above-mentioned method of updating the method for determining the target following strategy may be adding the objective function to the method for determining the target following strategy, so that the method for determining the target following strategy is more reliable.
• the manner of determining the second target object in the image as the target to be followed may be: if there are multiple second target objects in the image collected by the photographing device, determining the category of each second target object; and determining the target to be followed from the multiple second target objects according to the current shooting mode of the photographing device and the category of each second target object.
  • the shooting modes of the shooting device include a family shooting mode, a portrait shooting mode, a pet shooting mode, a plant shooting mode, a vehicle shooting mode, and a panoramic shooting mode.
• the method of determining the target to be followed from the plurality of second target objects may be: determining whether there is a second target object that conforms to the current shooting mode of the photographing device; if there is only one such second target object, determining it as the target to be followed; if there are at least two such second target objects, determining the second target object with the highest following priority in the current shooting mode as the target to be followed.
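The mode-based selection above can be sketched as follows; the mode names and the per-mode priority table are illustrative assumptions, not values disclosed by the application.

```python
# Hypothetical sketch: keep only the second target objects whose category fits
# the current shooting mode, then pick the one whose category has the highest
# following priority in that mode. The priority table is an assumption.

MODE_PRIORITY = {
    "family": ["infant", "adult", "pet"],  # earlier entry = higher priority
    "pet": ["pet"],
}

def select_target(objects, mode):
    """objects: list of (object_id, category) pairs detected in the image."""
    ranked = MODE_PRIORITY.get(mode, [])
    candidates = [(oid, cat) for oid, cat in objects if cat in ranked]
    if not candidates:
        return None                      # no object conforms to this mode
    if len(candidates) == 1:
        return candidates[0][0]          # only one conforming object
    # at least two conforming objects: highest following priority wins
    return min(candidates, key=lambda oc: ranked.index(oc[1]))[0]

objs = [("t1", "adult"), ("t2", "infant"), ("t3", "pet")]
print(select_target(objs, "family"))  # t2: infant outranks adult and pet
```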
  • the determination of the target to be followed from the images collected by the photographing device may be through a user operation to select the target to be followed, including sub-steps S801 to S803 .
  • Sub-step S801 in response to a user's click operation, identify the target to be followed in an image area near the click position.
  • the target to be followed can be determined from the image in response to the user's click operation on the image captured by the photographing device.
  • the click operation includes a single-click operation, a double-click operation, a long-press operation, and the like.
  • Sub-step S803 mark the category of the target to be followed, and/or the location of the target to be followed.
  • the category of the target to be followed may be an infant, an adult, or a plant or the like.
  • the features of the target to be followed may be the facial features of the infant; and/or the contour of the infant; and/or the motion attribute of the infant.
  • the category of the target to be followed and the characteristics of the target to be followed may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • the user performs a click operation on the position 901 in the image captured by the photographing device, and the target 905 to be followed is identified in the image area 903 near the position 901 .
• for example, a target to be followed 905 is identified in the image area near the user's click position, and a category 907 of the target to be followed 905 is marked, such as infant; and/or the location of the target to be followed 905 is marked, such as with a rectangular frame 909 and/or an identification icon 911.
  • the determination of the target to be followed from the images collected by the photographing device may be through a user operation to select the target to be followed, including sub-steps S1001 to S1003 .
  • Sub-step S1001 in response to the user's first pressing operation on the mode selection button, identify the target to be followed in the central area of the image.
  • the mode selection button is used to select different shooting modes, and the shooting modes include family shooting mode, portrait shooting mode, pet shooting mode, plant shooting mode, vehicle shooting mode and panorama shooting mode.
  • the image captured by the photographing device is acquired, and the image is displayed by the display device, so that the target determination device can determine the target to be followed from the image.
  • the first pressing operation includes a single-click operation, a double-click operation, a long-press operation, and the like.
  • Sub-step S1003 mark the category of the target to be followed, and/or mark the location of the target to be followed.
  • the preset area of the image collected by the photographing device may be the central area of the image.
  • the preset area of the image collected by the photographing device may be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • the mode selected by the user is the family shooting mode
• whether an infant target object exists is identified in the central area of the image collected by the photographing device; when the identification result is that an infant exists, the category of the target to be followed is marked as infant, and/or the location of the target to be followed is marked.
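Sub-steps S1001 to S1003 can be sketched as follows; the size of the central area, the detection representation, and the helper names are illustrative assumptions (the real detector would be a trained model, not shown here).

```python
# Hypothetical sketch: after the mode selection button is pressed, look for a
# target of the mode's category whose detection center lies in the central
# area of the image. The central-area fraction is an assumption.

def central_region(width, height, fraction=0.5):
    """Return the (x0, y0, x1, y1) box of the central area of the image."""
    dx, dy = width * fraction / 2, height * fraction / 2
    return (width / 2 - dx, height / 2 - dy, width / 2 + dx, height / 2 + dy)

def identify_in_center(detections, width, height, wanted="infant"):
    """detections: list of (category, cx, cy) box centers. Return the first
    detection of the wanted category inside the central area, else None."""
    x0, y0, x1, y1 = central_region(width, height)
    for cat, cx, cy in detections:
        if cat == wanted and x0 <= cx <= x1 and y0 <= cy <= y1:
            return (cat, cx, cy)
    return None

dets = [("adult", 100, 100), ("infant", 640, 360)]
print(identify_in_center(dets, 1280, 720))  # the infant at the image center
```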
  • the method for determining the target following strategy may be: by detecting images collected by the photographing device, determining the target to be followed from the images collected by the photographing device, and identifying the characteristics of the target to be followed.
  • the target to be followed is determined from the images collected by the photographing device, and the identification of the characteristics of the target to be followed includes sub-steps S1101 to S1113 .
  • Sub-step S1101 compare the target to be followed with the first target object in the model image.
  • the first target object may be an infant or an adult.
  • Sub-step S1103 If the first feature of the target to be followed is similar to the first target feature of the first target object, mark the target to be followed as a first category.
  • the first feature and the first target feature include at least one of facial features, facial contours, and motion attributes.
  • Step S1105 determining a following strategy corresponding to the target to be followed according to the characteristics of the target to be followed.
  • Step S1107 If the target to be followed is marked as the first category, the following strategy corresponding to the target to be followed is the first following strategy.
  • Step S1109 If the first feature of the target to be followed is not similar to the first target feature of the first target object; compare the target to be followed with the second target object in the model image.
  • Step S1111 If the first feature of the target to be followed is similar to the second target object feature of the second target object, mark the target to be followed as a second category.
  • the first target object and the second target object are different.
  • Step S1113 If the target to be followed is marked as the second category, the following strategy corresponding to the target to be followed is the second following strategy.
  • the first following strategy and the second following strategy are different.
  • the following speed of the first following strategy is slower than that of the second following strategy.
• the image collected by the photographing device is detected, the target to be followed is determined from the image, and the characteristics of the target to be followed are identified: for example, the facial features of the target to be followed are identified; and/or the body contour of the target to be followed is identified; and/or the motion attributes of the target to be followed are identified.
• for example, the first feature of the target to be followed is a facial feature, the first target object in the model image is an infant, and the first target feature of the first target object in the model image is a facial feature; the facial features of the target to be followed are then compared with the facial features in the model image.
  • the target to be followed is marked as an infant.
• the following strategy corresponding to the target to be followed is then determined; since the target to be followed is an infant, the following strategy corresponding to the target to be followed can be determined as the first following strategy.
• as another example, the image collected by the photographing device is detected, the target to be followed is determined from the image, and the characteristics of the target to be followed are identified: for example, the facial features of the target to be followed are identified; and/or the body contour of the target to be followed is identified; and/or the motion attributes of the target to be followed are identified.
• the first feature of the target to be followed is a facial feature, the first target object in the model image is an infant, and the first target feature of the first target object in the model image is a first facial feature; the facial features of the target to be followed are then compared with the first facial features in the model image.
  • the facial feature of the target to be followed is compared with the second target feature of the second target object in the model image.
• the second target feature of the second target object is also a facial feature, i.e., a second facial feature; that is, the facial features of the target to be followed are compared with the second facial features of the second target object.
  • the facial features of the target to be followed are compared with the facial features of the adult in the model image.
• the following strategy corresponding to the target to be followed is then determined; since the target to be followed is an adult, and given the characteristic that the motion attributes of an adult are relatively fast, it can be determined that the following strategy corresponding to the target to be followed is the second following strategy.
  • identifying the characteristics of the target to be followed includes sub-steps S1201 to S1203 .
  • Sub-step S1201 Compare the target to be followed with multiple target objects in the model image to determine a first target object from the multiple target objects.
  • the target objects may include infant target objects; and/or adult target objects.
  • the first target object may be an infant or an adult.
  • the features of the first target object and the features of the target to be followed may be facial features; and/or contours; and/or motion attributes, and the features of the first target object are similar to those of the target to be followed.
  • Sub-step S1203 Determine a following strategy corresponding to the target to be followed according to the category of the first target object.
  • the category of the first target object is located in a preset category library
  • the preset category library may include an infant preset category library, and may also include an adult preset category library.
• the preset category library includes categories corresponding to multiple target objects. For example, the infant target object corresponds to the infant preset category library, and the adult target object corresponds to the adult preset category library.
  • the target objects in the model image include an infant target object and an adult target object.
• the target to be followed is compared with the above two target objects; for example, when the facial features of the target to be followed are similar to the facial features of the infant target object, and/or the body contour of the target to be followed is similar to the body contour of the infant target object, and/or the motion attributes of the target to be followed are similar to the motion attributes of the infant target object, it is determined that the infant target object is the first target object. According to the category of the first target object, i.e., infant, the following strategy corresponding to the target to be followed is determined as the infant following strategy.
  • the target objects in the model image include an infant target object and an adult target object.
• the target to be followed is compared with the above two target objects; for example, when the facial features of the target to be followed are similar to the facial features of the adult target object, and/or the body contour of the target to be followed is similar to the body contour of the adult target object, and/or the motion attributes of the target to be followed are similar to the motion attributes of the adult target object, it is determined that the adult target object is the first target object. According to the category of the first target object, i.e., adult, the following strategy corresponding to the target to be followed is determined as the adult following strategy.
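Sub-steps S1201 to S1203 can be sketched as follows; the toy similarity function, the feature encoding, and the thresholds are all illustrative assumptions standing in for real feature comparison.

```python
# Hypothetical sketch: compare the target to be followed with the target
# objects in the model image, take the most similar one as the first target
# object, and choose the following strategy from its category.

def similarity(a, b):
    """Toy feature similarity: fraction of shared feature kinds that match."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / max(len(keys), 1)

def strategy_for_target(target_features, model_objects):
    """model_objects: {category: features}. The first target object is the
    model object most similar to the target to be followed; its category
    (infant/adult) selects the following strategy."""
    first = max(model_objects,
                key=lambda c: similarity(target_features, model_objects[c]))
    return "infant following strategy" if first == "infant" else "adult following strategy"

model = {"infant": {"contour": "curled", "motion": "slow"},
         "adult": {"contour": "stretched", "motion": "fast"}}
print(strategy_for_target({"contour": "curled", "motion": "slow"}, model))
```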
  • the target to be followed is determined from the images collected by the photographing device by detecting the images collected by the photographing device, including sub-steps S1301 to S1307 .
  • Sub-step S1301 Determine a first target and a second target from the images collected by the photographing device.
  • the first target and the second target are different.
  • Sub-step S1303 Compare the first target with multiple target objects in the model image to determine a first recognition result.
  • Sub-step S1305 Compare the second target with the multiple target objects to determine a second recognition result.
  • Sub-step S1307 Determine the target to be followed from the first target and the second target according to the first recognition result and the second recognition result.
• the first recognition result and the second recognition result may be the categories of the first target and the second target; and/or the frame ratios they occupy in the image collected by the photographing device; and/or their positions in the image collected by the photographing device.
• the position of the first target in the image collected by the photographing device may be expressed as the distance from a preset position, and the preset position may be the center position of the image, or the optimal position for the composition of the image.
  • the following priorities of the first target and the second target may be determined.
• for example, the first recognition result indicates that the first target belongs to a first category, e.g., an infant, and the second recognition result indicates that the second target belongs to a second category, e.g., an adult; it is then determined that the following priority of the first target is level 1 and the following priority of the second target is level 2, wherein level 1 corresponds to the highest priority, and the first target is used as the target to be followed.
  • the first category and the second category may be set based on actual conditions, which are not specifically limited in this embodiment of the present application.
• for example, the first recognition result is that the category of the target object corresponding to the first target is the first target object, and the second recognition result is that the category of the target object corresponding to the second target is the second target object; the above first recognition result thus indicates that the first target belongs to the first category.
• as another example, the above first recognition result indicates that the frame ratio of the first target in the image collected by the photographing device is 30%, and the above second recognition result indicates that the frame ratio of the second target in the image collected by the photographing device is 10%; it is then determined that the following priority of the first target is level 1 and the following priority of the second target is level 2, wherein level 1 corresponds to the highest priority, and the first target is used as the target to be followed.
• as another example, the above first recognition result indicates that the distance between the position of the first target in the image collected by the photographing device and the center position is a, and the above second recognition result indicates that the distance between the position of the second target in the image and the center position is b, where a < b; it is then determined that the following priority of the first target is level 1 and the following priority of the second target is level 2, wherein level 1 corresponds to the highest priority, and the first target is used as the target to be followed.
• the method of determining the following priority of each target may be: determining a first following index of each target according to the category of the target object corresponding to each target; determining a second following index according to the distance between the position of each target and the center position of the image; determining a third following index according to the frame ratio of each target in the image; and determining the following index of each target according to the first following index, the second following index and/or the third following index.
• the following priority of each target is determined by comparing the following indices of different targets.
  • the first following index of the first target may be determined according to a first preset mapping relationship and the category of the target object corresponding to the first target, where the first preset mapping relationship includes the following indices corresponding to different target object categories. For example, if the first following indices corresponding to the categories infant and adult are 90 points and 80 points respectively, and the category of the target object corresponding to the first target is infant, then the first following index of the first target is 90 points.
  • the second following index of the first target may be determined according to a second preset mapping relationship and the distance between the position of the first target and the center position of the image, where the second preset mapping relationship includes the following indices corresponding to different distances. For example, if the following indices corresponding to distances of 0.5 cm, 1 cm and 1.5 cm are 90 points, 80 points and 70 points respectively, and the distance between the position of the first target and the center position of the image is 0.5 cm, then the second following index of the first target is 90 points.
  • the third following index of the first target may be determined according to a third preset mapping relationship and the frame ratio of the first target in the image, where the third preset mapping relationship includes the following indices corresponding to different frame ratios. For example, if the following indices corresponding to frame ratios of 5%, 10% and 12% are 60 points, 65 points and 70 points respectively, and the first target occupies 12% of the frame of the image, then the third following index of the first target is 70 points.
  • the first preset mapping relationship, the second preset mapping relationship, and the third preset mapping relationship may be set based on actual conditions, which are not specifically limited in this embodiment of the present application.
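  • the three preset mapping relationships described above can be sketched as simple lookup tables. The categories, distances, frame ratios and scores below are only the illustrative values from this embodiment, and the function names are hypothetical:

```python
# Sketch of the three preset mapping relationships as lookup tables.
# All keys and scores are the illustrative values from this embodiment.

# First preset mapping: target-object category -> first following index
CATEGORY_INDEX = {"infant": 90, "adult": 80}

# Second preset mapping: distance to image center (cm) -> second following index
DISTANCE_INDEX = {0.5: 90, 1.0: 80, 1.5: 70}

# Third preset mapping: frame ratio of the target in the image -> third following index
FRAME_RATIO_INDEX = {0.05: 60, 0.10: 65, 0.12: 70}

def first_following_index(category: str) -> int:
    return CATEGORY_INDEX[category]

def second_following_index(distance_cm: float) -> int:
    return DISTANCE_INDEX[distance_cm]

def third_following_index(frame_ratio: float) -> int:
    return FRAME_RATIO_INDEX[frame_ratio]

# The first target from the example: an infant, 0.5 cm from the image center,
# occupying 12% of the frame.
print(first_following_index("infant"))   # 90
print(second_following_index(0.5))       # 90
print(third_following_index(0.12))       # 70
```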
  • the method of determining the target following index of the first target may be: if only the category of the target object corresponding to the first target is considered, the first following index is determined as the target following index of the first target; if only the position of the first target in the image is considered, the second following index is determined as the target following index; if only the frame ratio of the first target in the image is considered, the third following index is determined as the target following index; if the category and the position are considered, the sum of the first following index and the second following index is determined as the target following index; if the category and the frame ratio are considered, the sum of the first following index and the third following index is determined as the target following index; if the position in the image and the frame ratio are considered, the sum of the second following index and the third following index is determined as the target following index; and if the category, the position in the image and the frame ratio are all considered, the sum of the first, second and third following indices is determined as the target following index of the first target.
  • alternatively, the product of a first preset weight and the first following index of the first target and the product of a second preset weight and the second following index of the first target are calculated; the sum of the two products is determined as the target following index of the first target, wherein the sum of the first preset weight and the second preset weight is 1.
  • alternatively, the product of the first preset weight and the first following index of the first target and the product of a third preset weight and the third following index of the first target are calculated; the sum of the two products is determined as the target following index of the first target, wherein the sum of the first preset weight and the third preset weight is 1.
  • alternatively, the product of the second preset weight and the second following index of the first target and the product of the third preset weight and the third following index of the first target are calculated; the sum of the two products is determined as the target following index of the first target, wherein the sum of the second preset weight and the third preset weight is 1.
  • alternatively, the product of the first preset weight and the first following index of the first target, the product of the second preset weight and the second following index of the first target, and the product of the third preset weight and the third following index of the first target are calculated; the sum of the three products is determined as the target following index of the first target, wherein the sum of the first preset weight, the second preset weight and the third preset weight is 1.
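  • the weighted combinations above can be sketched as follows. The function name and the concrete weight values are illustrative assumptions, constrained only by the stated requirement that the weights applied to the chosen indices sum to 1:

```python
# Sketch of the weighted target following index described above.
# Any subset of the three following indices may be combined; the preset
# weights applied to the chosen indices must sum to 1. The weight values
# used here are illustrative assumptions.

def target_following_index(first=None, second=None, third=None, weights=None):
    """Combine the supplied following indices into a target following index.

    first/second/third are the first/second/third following indices
    (None if that factor is not considered); weights maps the same keys
    to preset weights that sum to 1 over the considered indices.
    """
    indices = {"first": first, "second": second, "third": third}
    used = {k: v for k, v in indices.items() if v is not None}
    if weights is None:
        # Unweighted variant: simply sum the considered indices.
        return sum(used.values())
    assert abs(sum(weights[k] for k in used) - 1.0) < 1e-9
    return sum(weights[k] * used[k] for k in used)

# First target from the example: first index 90, second index 90, third index 70.
print(target_following_index(first=90, second=90))  # category + position: 180
print(target_following_index(first=90, second=90, third=70,
                             weights={"first": 0.5, "second": 0.3, "third": 0.2}))
# 90*0.5 + 90*0.3 + 70*0.2 = 86.0
```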
  • the priority of the first category is higher than the priority of the second category, and the first target corresponding to the first category is determined to be the target to be followed.
  • the first mode is the family shooting mode
  • the priority of the first category (infant) is higher than that of the second category (adult), and the first target, an infant, is determined to be the target to be followed.
  • the second recognition result indicates that the second target belongs to a second target category, eg, an adult.
  • the first target is used as the target to be followed.
  • the second target in the image collected by the photographing device is determined as the target to be followed according to the second recognition result.
  • the method for determining the target following strategy may include determining the target to be followed from the images collected by the photographing device. If there is no first target in the image collected by the photographing device but there are a plurality of second targets, the second following priority of each second target is determined according to the degree of salience of each of the plurality of second targets in the image collected by the photographing device, and one of the plurality of second targets is determined as the target to be followed according to the second following priority of each second target. Here, a target object in the image collected by the photographing device that contains the features of the first target object is a second target.
  • the degree of salience of the above-mentioned second target in the image collected by the photographing device may be determined according to the length of stay of the second target at a preset position in the image and/or according to the saliency value between the image area in which the second target is located and the adjacent image areas in the collected image. It can be understood that the longer the second target stays at the preset position in the image, the more salient the second target is in the captured image, and the shorter the stay, the less salient the second target is in the captured image.
  • the preset position, the preset dwell time and the preset saliency value can be set based on the actual situation or set by the user. For example, the preset position can be the center of the image, the preset dwell time can be 10 seconds, and the preset saliency value can be 50.
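  • a minimal sketch of the salience test described above, assuming the illustrative presets (a dwell time of 10 seconds and a saliency value of 50) and treating the two criteria as alternatives, as the "and/or" in the text permits:

```python
# Sketch of deciding whether a second target is salient in the captured image.
# Presets are the illustrative values from this embodiment: dwell time at the
# preset position of at least 10 seconds, or a saliency value (contrast between
# the target's image area and adjacent areas) of at least 50.

PRESET_DWELL_SECONDS = 10.0
PRESET_SALIENCY_VALUE = 50.0

def is_salient(dwell_seconds: float, saliency_value: float) -> bool:
    return (dwell_seconds >= PRESET_DWELL_SECONDS
            or saliency_value >= PRESET_SALIENCY_VALUE)

print(is_salient(dwell_seconds=12.0, saliency_value=30.0))  # True: dwelt long enough
print(is_salient(dwell_seconds=3.0, saliency_value=20.0))   # False: neither criterion met
```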
  • for example, if the second following priorities of second target A, second target B and second target C are level one, level two and level three respectively, and level one corresponds to the highest priority, then second target A is selected as the target to be followed.
  • the second following priority is used to describe the probability of selecting the second target as the target to be followed: the higher the second following priority of a second target, the higher the probability that it is selected as the target to be followed, and the lower the priority, the lower that probability.
  • the manner of determining the second following priority of each second target may be: obtaining the following probability of each second target from the second recognition result, where the following probability is the probability, output when the objects in the image are recognized according to a target detection algorithm, that an object is a second target; and/or obtaining the position of each second target in the image from the second recognition result; and/or obtaining the position information of each second target in the image from the second recognition result and determining, according to this position information, the frame ratio of each second target in the image; and then determining the second following priority of each second target according to its following probability, its position in the image and/or its frame ratio in the image.
  • for example, the following probabilities of second target A, second target B and second target C are 90%, 80% and 85% respectively. Since 90% > 85% > 80%, it can be determined that the second following priority of second target A is level one, the second following priority of second target C is level two, and the second following priority of second target B is level three.
  • for another example, the frame ratios of second target A, second target B and second target C in the image are 8%, 12% and 15% respectively. Since 15% > 12% > 8%, the second following priority of second target A is level three, the second following priority of second target B is level two, and the second following priority of second target C is level one.
  • the method of determining the second following priority of each second target may also be: determining, according to the position of each second target in the image and the center position of the image, the distance between the position of each second target and the center position; and determining the second following priority of each second target according to that distance.
  • for example, the distances between the positions of second target A, second target B and second target C in the image and the center position are d, e and f respectively, with d > e > f. Therefore, it can be determined that the second following priority of second target A is level three, the second following priority of second target B is level two, and the second following priority of second target C is level one.
  • the method of determining the second following priority of each second target may also be: determining the first following index of each second target according to its following probability; determining the distance between the position of each second target and the center position of the image, and determining the second following index of each second target according to that distance; determining the third following index according to the frame ratio of each second target in the image; determining the target following index of each second target according to the first following index, the second following index and/or the third following index; and determining the second following priority of each second target according to its target following index.
  • the second following index of the second target may be determined according to the second preset mapping relationship and the distance between the position of the second target and the center position of the image, where the second preset mapping relationship includes the following indices corresponding to different distances. For example, if the following indices corresponding to distances of 0.5 cm, 1 cm and 1.5 cm are 90 points, 80 points and 70 points respectively, and the distance between the position of the second target and the center of the image is 1 cm, then the second following index of the second target is 80 points.
  • the method of determining the second following priority of each second target may also be: specifying a predetermined position in the image; determining, according to the position of each second target in the image and the predetermined position, the distance between the position of each second target and the predetermined position; and determining the second following priority of each second target according to that distance.
  • the third following index of the second target may be determined according to the third preset mapping relationship and the frame ratio of the second target in the image, where the third preset mapping relationship includes the following indices corresponding to different frame ratios. For example, if the following indices corresponding to frame ratios of 5%, 10% and 12% are 60 points, 65 points and 70 points respectively, and the second target occupies 10% of the image, then the third following index of the second target is 65 points.
  • the method of determining the target following index of the second target may be: if only the following probability of the second target is considered, the first following index is determined as the target following index of the second target; if only the position of the second target in the image is considered, the second following index is determined as the target following index; if only the frame ratio of the second target in the image is considered, the third following index is determined as the target following index; if the following probability and the position in the image are considered, the sum of the first following index and the second following index is determined as the target following index; if the following probability and the frame ratio are considered, the sum of the first following index and the third following index is determined as the target following index; if the position in the image and the frame ratio are considered, the sum of the second following index and the third following index is determined as the target following index; and if the following probability, the position in the image and the frame ratio are all considered, the sum of the first, second and third following indices is determined as the target following index of the second target.
  • alternatively, the product of the first preset weight and the first following index of the second target and the product of the second preset weight and the second following index of the second target are calculated; the sum of the two products is determined as the target following index of the second target, wherein the sum of the first preset weight and the second preset weight is 1.
  • alternatively, the product of the first preset weight and the first following index of the second target and the product of the third preset weight and the third following index of the second target are calculated; the sum of the two products is determined as the target following index of the second target, wherein the sum of the first preset weight and the third preset weight is 1.
  • alternatively, the product of the second preset weight and the second following index of the second target and the product of the third preset weight and the third following index of the second target are calculated; the sum of the two products is determined as the target following index of the second target, wherein the sum of the second preset weight and the third preset weight is 1.
  • alternatively, the product of the first preset weight and the first following index of the second target, the product of the second preset weight and the second following index of the second target, and the product of the third preset weight and the third following index of the second target are calculated; the sum of the three products is determined as the target following index of the second target, wherein the sum of the first preset weight, the second preset weight and the third preset weight is 1.
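  • the selection among multiple second targets can be sketched end to end: compute a target following index for each second target, rank the targets by that index to obtain the second following priorities, and take the highest-ranked one as the target to be followed. The per-target index values and weights below are illustrative assumptions:

```python
# Sketch: rank several second targets by a weighted target following index
# and pick the highest-ranked one as the target to be followed.
# The following indices and weights below are illustrative assumptions.

def rank_second_targets(targets, weights):
    """targets: dict name -> (first_index, second_index, third_index).
    Returns names sorted from highest to lowest target following index,
    i.e. from second following priority level one downward."""
    def index(name):
        f, s, th = targets[name]
        return weights[0] * f + weights[1] * s + weights[2] * th
    return sorted(targets, key=index, reverse=True)

targets = {
    "A": (90, 70, 60),   # high following probability, far from center, small
    "B": (80, 80, 65),
    "C": (85, 90, 70),   # close to center, large frame ratio
}
ranking = rank_second_targets(targets, weights=(0.4, 0.3, 0.3))
print(ranking)      # priority order, level one first: ['C', 'B', 'A']
print(ranking[0])   # the target to be followed: C
```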
  • when the first target object is the target to be followed, the first following mode is used to follow; when the second target object is the target to be followed, the second following mode is used to follow.
  • the category of the first target is different from the category of the second target object, and the first following mode and the second following mode are different.
  • the first target, the second target object, the first follow mode, and the second follow mode may be set based on actual conditions, which are not specifically limited in this embodiment of the present application.
  • the first following mode is used to follow.
  • the second following mode is used to follow.
  • the following speed of the photographing device in the above-mentioned first following mode may be slower than that of the photographing device in the above-mentioned second following mode.
  • the images collected by the photographing device include the image currently collected by the photographing device, images previously collected by the photographing device, and images subsequently collected by the photographing device.
  • in the first following mode, the target to be followed is at a first position in the images previously collected by the photographing device and at a second position in the image currently collected by the photographing device, and the posture of the photographing device is adjusted according to the second position; in the second following mode, the third position of the target to be followed in the image subsequently collected by the photographing device is predicted according to the relationship between the first position and the second position, and the posture of the photographing device is adjusted according to the third position.
  • for example, the target 1401 to be followed is located at the first position 1403 in the image previously collected by the photographing device, and at the second position 1405 in the image currently collected by the photographing device.
  • according to the second position 1405, the posture of the photographing device is adjusted so that the target 1401 to be followed remains in the preset area in the image.
  • the preset area may be the center area of the picture.
  • for example, the target 1401 to be followed is located at the first position 1403 in the image previously collected by the photographing device, and at the second position 1405 in the image currently collected by the photographing device; according to the relationship between the first position 1403 and the second position 1405, the third position 1407 of the target to be followed in the image subsequently collected by the photographing device is predicted; according to the third position 1407, the posture of the photographing device is adjusted so that the target 1401 to be followed remains in a preset area in the image.
  • the preset area may be the center area of the picture.
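  • the application does not fix the "relationship" between the first position and the second position used for prediction; the simplest choice is constant-velocity (linear) extrapolation, which the sketch below assumes:

```python
# Sketch of the second following mode: predict the third position of the target
# in the subsequently collected image by linearly extrapolating its motion
# between the previously collected image (first position) and the currently
# collected image (second position). Constant velocity is an assumption here.

def predict_third_position(first_pos, second_pos):
    """first_pos/second_pos: (x, y) pixel coordinates in consecutive frames."""
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    return (second_pos[0] + dx, second_pos[1] + dy)

# Target 1401 moved from position 1403 to position 1405; predict position 1407.
first_position = (100, 240)    # in the previously collected image
second_position = (130, 250)   # in the currently collected image
print(predict_third_position(first_position, second_position))  # (160, 260)
```

  • the posture of the photographing device would then be adjusted according to the predicted third position so that the target remains in the preset (e.g. center) area of the image.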
  • the method for determining a target following strategy may include sub-steps S1501 to S1507.
  • Sub-step S1501: the image collected by the photographing device is detected, and the target to be followed is determined from the image collected by the photographing device.
  • the images collected by the photographing device include the image currently collected by the photographing device and images previously collected by the photographing device.
  • Sub-step S1503: a plurality of third targets are determined in the image currently collected by the photographing device.
  • Sub-step S1505: the plurality of third targets are compared with the target to be followed in the images previously collected by the photographing device to determine a third recognition result.
  • the third recognition result is the degree of similarity between the features of the third targets and the features of the target to be followed.
  • Sub-step S1507: if the third recognition result indicates that none of the plurality of third targets is similar to the target to be followed, the plurality of third targets are compared with the first target object in the model image to determine whether the target to be followed exists among the plurality of third targets.
  • for example, by detecting the image collected by the photographing device, it is recognized that there are multiple third targets C in the image currently collected by the photographing device, and the features of the multiple third targets C are compared with the features of the target A to be followed in the images previously collected by the photographing device; when the result of this comparison is that the features of the third targets C are not similar to the features of the target A to be followed, the features of the multiple third targets need to be compared with the features of the first target object B in the model image.
  • for example, when the image collected by the photographing device is detected, it is recognized that there are multiple third targets C in the image currently collected by the photographing device. Exemplarily, the target A to be followed and the first target object B are infants. The facial features and/or contour and/or motion attributes of a third target C are compared with the facial features and/or contour and/or motion attributes of the target A to be followed to obtain the third recognition result; when the third recognition result is that the features of the third target C and the target A to be followed are not similar, the facial features and/or contour and/or motion attributes of the third target C need to be compared with the facial features and/or contour and/or motion attributes of the first target object B, and if the result of this comparison is that they are similar, it can be determined that the third target C is the target to be followed.
  • the method for determining a target following strategy may include: taking the target to be followed as a first target object; extracting features of the first target object; and updating the feature library corresponding to the first target object according to the extracted features.
  • for example, the features of the target to be followed are extracted, including facial features and/or contours and/or motion attributes, and the above features are added to the feature library corresponding to the first target object, so as to enrich the feature library and thereby continuously improve the accuracy of recognition.
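  • the feature library update described above can be sketched as follows; the feature-type names follow the text (facial features, contour, motion attributes), while the storage layout and the concrete feature values are illustrative assumptions:

```python
# Sketch of enriching the feature library of the first target object with
# features newly extracted from the target to be followed. The dictionary
# layout and feature values are illustrative assumptions.

feature_library = {
    "first_target_object": {
        "facial_features": ["face_vec_0"],
        "contour": ["contour_vec_0"],
        "motion_attributes": [],
    }
}

def update_feature_library(library, object_key, extracted):
    """Append each newly extracted feature to the matching feature list."""
    entry = library[object_key]
    for feature_type, vectors in extracted.items():
        entry.setdefault(feature_type, []).extend(vectors)

# Features extracted from the target to be followed in the current image.
new_features = {
    "facial_features": ["face_vec_1"],
    "motion_attributes": ["gait_vec_0"],
}
update_feature_library(feature_library, "first_target_object", new_features)
print(feature_library["first_target_object"]["facial_features"])
# ['face_vec_0', 'face_vec_1']
```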
  • FIG. 16 is a schematic structural block diagram of an apparatus for determining a target following strategy provided by an embodiment of the present application.
  • the device 1601 for determining a target following strategy includes a processor 1603 and a memory 1605, and the processor 1603 and the memory 1605 are connected through a bus 1607, such as an I2C (Inter-integrated Circuit) bus.
  • the apparatus 1601 for determining a target following strategy is used for communicating with the photographing device.
  • the processor 1603 may be a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU) or a digital signal processor (Digital Signal Processor, DSP) or the like.
  • the memory 1605 may be a Flash chip, a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disk, a U disk, a removable hard disk, or the like.
  • the processor 1603 is used for running the computer program stored in the memory 1605, and implements the following steps when executing the computer program:
  • the target to be followed is determined from the image collected by the photographing device by detecting the image collected by the photographing device.
  • a following strategy corresponding to the target to be followed is determined.
  • the photographing device follows the target to be followed.
  • the device for determining a target following strategy is configured to communicate with a photographing device, and the device includes a memory and a processor; the memory is used to store a computer program; the processor is used to execute the computer program and, when executing the computer program, implement the following steps: acquiring an image collected by the photographing device; determining, by detecting the image collected by the photographing device, a target to be followed from the image; determining a following strategy corresponding to the target to be followed according to the characteristics of the target to be followed; and following, by the photographing device, the target to be followed according to the following strategy.
  • a target object feature library is determined.
  • the determining of the target object feature library includes: determining, in the model image, a first target object; extracting features of the first target object; and establishing a feature library about the first target object according to the extracted features.
  • the determining the first target object in the model image includes: in the model image, determining an attribute of the first target object and/or an image area corresponding to the first target object .
  • the determining the attribute of the first target object and/or the image area corresponding to the first target object includes: indicating the image area corresponding to the first target object through a position identifier.
  • the method further includes: determining features about the first target object; determining whether there is a first target object in the model image according to the features of the first target object candidate target object; if the model image includes the first candidate target object, determine the probability that the first candidate target object is the first target object.
  • the determining the probability that the first candidate target object is the first target object includes: determining a first probability that the first candidate target object and the first target object are of the same category, and determining a second probability that the first candidate target object is at the first position; and obtaining, according to the first probability and the second probability, a prediction result of whether the first candidate target object is the first target object.
  • the processor is further configured to implement the following steps: optimizing the device for determining a target following strategy according to the first probability, the second probability and the prediction result; wherein an objective function of the device for determining a target following strategy is obtained according to the first probability, the second probability and the prediction result, and the device for determining a target following strategy is updated according to the objective function.
  • the determining of the target to be followed from the image collected by the photographing device includes: selecting the target to be followed through a user operation.
  • the selecting the target to be followed through a user operation includes: in response to a user's click operation, identifying the target to be followed in an image area near the clicked position; and marking the category of the target to be followed, and/or marking the location of the target to be followed.
  • the selecting the target to be followed through a user operation includes: in response to a first pressing operation of a mode selection button by the user, identifying the target to be followed in a central area of the image; and marking the category of the target to be followed, and/or marking the location of the target to be followed.
  • the determining the target to be followed from the image collected by the photographing device by detecting the image collected by the photographing device includes: identifying the characteristics of the target to be followed.
  • the identifying the feature of the target to be followed includes: comparing the target to be followed with a first target object in the model image; if the first feature of the target to be followed is similar to the first target feature of the first target object, marking the target to be followed as a first category; and the determining of the following strategy corresponding to the target to be followed according to the characteristics of the target to be followed includes: if the target to be followed is marked as the first category, the following strategy corresponding to the target to be followed is the first following strategy.
  • if the first feature of the target to be followed is not similar to the first target feature of the first target object, the target to be followed is then compared with a second target object in the model image; wherein the first target object and the second target object are different.
  • the first feature and the first target feature include at least one of facial features, facial contours, and motion attributes.
  • the identifying the feature of the target to be followed includes: comparing the target to be followed with a plurality of target objects in a model image to determine a first target object from the plurality of target objects, wherein the features of the first target object are similar to the features of the target to be followed; and the determining a following strategy corresponding to the target to be followed according to the characteristics of the target to be followed includes: determining the following strategy corresponding to the target to be followed according to the category of the first target object, wherein the category of the first target object is located in a preset category library, and the preset category library includes the respective categories corresponding to the plurality of target objects.
  • the determining the target to be followed from the image collected by the photographing device by detecting the image collected by the photographing device includes: determining a first target and a second target from the image collected by the photographing device, wherein the first target and the second target are different; comparing the first target with a plurality of target objects in the model image to determine a first recognition result; comparing the second target with the plurality of target objects to determine a second recognition result; and determining the target to be followed from the first target and the second target according to the first recognition result and the second recognition result.
  • the priority of the first target and the second target is determined according to the first identification result and the second identification result.
  • the first recognition result indicates the category to which the first target belongs, the frame ratio the first target occupies in the image and/or the position of the first target in the image; and the second recognition result indicates the category to which the second target belongs, the frame ratio the second target occupies in the image and/or the position of the second target in the image.
  • the first category and the second category are different.
  • the first category is infants and the second category is adults.
  • the priority of the first category is higher than the priority of the second category; the first target corresponding to the first category is determined to be the target to be followed.
  • the priority corresponding to the first recognition result is higher than the priority corresponding to the second recognition result, and the first target is used as the target to be followed.
  • according to the second recognition result, the second target in the image captured by the photographing device is determined as the target to be followed.
  • the determining the second target in the image as the target to be followed includes: if a plurality of the second targets exist in the image, determining the second following priority of each second target according to the degree of saliency of each of the plurality of second targets in the image captured by the photographing device; and determining the target to be followed from the plurality of second targets according to the second following priority.
  • the target following includes: if the first target is the target to be followed, following in a first follow mode; and if the second target is the target to be followed, following in a second follow mode.
  • the category of the first target and the category of the second target are different, and the first following mode and the second following mode are different.
  • the following speed of the photographing device in the first following mode is slower than the following speed of the photographing device in the second following mode.
  • the image captured by the photographing device includes an image currently captured by the photographing device, an image previously captured by the photographing device and an image subsequently captured by the photographing device; in the first follow mode, the target to be followed is located at a first position in the previously captured image and at a second position in the currently captured image, and the attitude of the photographing device is adjusted according to the second position; and in the second follow mode, a third position of the target to be followed in the subsequently captured image is predicted according to the relationship between the first position and the second position, and the attitude of the photographing device is adjusted according to the third position.
  • the image captured by the photographing device includes an image currently captured by the photographing device and an image previously captured by the photographing device; and the determining the target to be followed from the image captured by the photographing device by detecting the captured image includes: determining a plurality of third targets in the currently captured image; comparing the plurality of third targets with the target to be followed in the previously captured image to determine a third recognition result; and, if the third recognition result indicates that the plurality of third targets are not similar to the first target to be followed, comparing the plurality of third targets with the first target object in the model image to determine whether the first target to be followed exists among the plurality of third targets.
  • the method further includes: taking the target to be followed as a first target object; extracting features of the first target object; and updating a feature library corresponding to the first target according to the extracted features.
  • FIG. 17 is a schematic structural block diagram of a system for determining a target following strategy provided by an embodiment of the present application.
  • a system 1701 for determining a target following strategy includes a device 1703 for determining a target following strategy, a pan/tilt 1705, and a photographing device 1707 mounted on the pan/tilt 1705, and the device 1703 for determining a target following strategy is communicatively connected to the photographing device 1707.
  • the pan/tilt 1705 is connected to the handle portion, and the device 1703 for determining the target following strategy is disposed on the handle portion.
  • the pan/tilt 1705 is mounted on the movable platform, and the device 1703 for determining the target following strategy is also used to control the movement of the movable platform.
  • the pan/tilt head is connected to a handle portion, and the device for determining a target following strategy is provided on the handle portion.
  • the pan/tilt is mounted on a movable platform, and the device for determining a target following strategy is further configured to control the movable platform to move.
  • FIG. 18 is a schematic block diagram of the structure of a handheld gimbal provided by an embodiment of the present application.
  • the handheld gimbal 1801 includes a device 1803 for determining a target following strategy, a handle portion, and a gimbal 1805 connected to the handle portion.
  • the gimbal 1805 is used for carrying a photographing device, and the device 1803 for determining a target following strategy is provided on the handle portion.
  • the device 1803 for determining the target following strategy is connected to the pan/tilt 1805 .
  • FIG. 19 is a schematic structural block diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform 1901 includes a platform body, a pan/tilt 1903 mounted on the platform body, and a device 1905 for determining a target following strategy.
  • the pan/tilt 1903 is used for carrying a photographing device, and the device 1905 for determining a target following strategy is provided on the platform body.
  • Embodiments of the present application further provide a computer-readable storage medium in which a computer program is stored; the computer program includes program instructions, and a processor executes the program instructions to implement the method for determining a target following strategy provided by the above embodiments.
  • the computer-readable storage medium may be an internal storage unit of the control terminal or of the unmanned aerial vehicle described in any of the foregoing embodiments, such as a hard disk or memory of the control terminal or the unmanned aerial vehicle.
  • the computer-readable storage medium may also be an external storage device of the control terminal or the unmanned aerial vehicle, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the control terminal or the unmanned aerial vehicle.


Abstract

A method, apparatus, system, device and storage medium for determining a target following strategy. The method includes: acquiring an image captured by a photographing device (S301); determining a target to be followed from the captured image by detecting the image captured by the photographing device (S303); determining, according to a feature of the target to be followed, a following strategy corresponding to the target to be followed (S305); and following, by the photographing device, the target to be followed according to the following strategy (S307). The present application can determine the target to be followed quickly and accurately, follow it reliably, and improve the user experience.

Description

Method, Apparatus, System, Device and Storage Medium for Determining a Target Following Strategy

Technical Field

The present application relates to the technical field of target following, and in particular to a method, apparatus, system, device and storage medium for determining a target following strategy.

Background

In intelligent target-following scenarios, a target to be followed must first be determined. This is generally done by recognizing an image with a target detection algorithm and then following the target to be followed in the image. However, the detection accuracy of such algorithms is limited: the target to be followed cannot be determined quickly and accurately, and it is easily lost while being followed. At present, target detection algorithms recognize poorly, follow the target unstably, and provide a poor user experience.

Summary

On this basis, embodiments of the present application provide a method, apparatus, system, device and storage medium for determining a target following strategy, which aim to determine the target to be followed quickly and accurately, follow it reliably, and improve the user experience.
In a first aspect, an embodiment of the present application provides a method for determining a target following strategy, the method comprising:

acquiring an image captured by a photographing device;

determining a target to be followed from the image captured by the photographing device by detecting the image;

determining, according to a feature of the target to be followed, a following strategy corresponding to the target to be followed; and

following, by the photographing device, the target to be followed according to the following strategy.
In a second aspect, an embodiment of the present application further provides an apparatus for determining a target following strategy. The apparatus is configured to be communicatively connected to a photographing device and comprises a memory and a processor;

the memory is configured to store a computer program; and

the processor is configured to execute the computer program and, when executing the computer program, to implement the following steps:

acquiring an image captured by the photographing device;

determining a target to be followed from the image captured by the photographing device by detecting the image;

determining, according to a feature of the target to be followed, a following strategy corresponding to the target to be followed; and

following, by the photographing device, the target to be followed according to the following strategy.
In a third aspect, an embodiment of the present application further provides a system for determining a target following strategy, the system comprising a gimbal, a photographing device mounted on the gimbal, and the apparatus for determining a target following strategy described above.

In a fourth aspect, an embodiment of the present application further provides a handheld gimbal comprising a handle portion, a gimbal connected to the handle portion, and the apparatus for determining a target following strategy described above, wherein the gimbal is configured to carry a photographing device and the apparatus for determining a target following strategy is provided on the handle portion.

In a fifth aspect, an embodiment of the present application further provides a movable platform comprising a platform body, a gimbal mounted on the platform body, and the apparatus for determining a target following strategy described above, wherein the gimbal is configured to carry a photographing device and the apparatus for determining a target following strategy is provided on the platform body.

In a sixth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the method for determining a target following strategy described above.
Embodiments of the present application provide a method, apparatus, system, device and storage medium for determining a target following strategy. A target to be followed is recognized in the image captured by the photographing device to obtain a recognition result; from the recognition result, the feature of the target to be followed and the following strategy corresponding to that target are determined; and the target is then followed according to its following strategy. The whole process determines the target to be followed quickly and accurately and follows it reliably, which greatly improves the user experience.

It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
Brief Description of the Drawings

To describe the technical solutions of the embodiments of the present application more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of another scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of the steps of a method for determining a target following strategy provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of the steps of determining a target object feature library provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a model image in an embodiment of the present application;

FIG. 6 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3;

FIG. 7 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 6;

FIG. 8 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3;

FIG. 9 is a schematic diagram of selecting the target to be followed from the image captured by the photographing device in FIG. 8;

FIG. 10 is a schematic flowchart of sub-steps of selecting the target to be followed from the image captured by the photographing device in FIG. 8;

FIG. 11 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3;

FIG. 12 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3;

FIG. 13 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3;

FIG. 14 is a schematic diagram of an image captured by the photographing device and shown by a display apparatus in an embodiment of the present application;

FIG. 15 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3;

FIG. 16 is a schematic structural block diagram of an apparatus for determining a target following strategy provided by an embodiment of the present application;

FIG. 17 is a schematic structural block diagram of a system for determining a target following strategy provided by an embodiment of the present application;

FIG. 18 is a schematic structural block diagram of a handheld gimbal provided by an embodiment of the present application;

FIG. 19 is a schematic structural block diagram of a movable platform provided by an embodiment of the present application.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

The flowcharts shown in the drawings are merely illustrative; they need not include all of the contents and operations/steps, nor must these be executed in the order described. For example, some operations/steps may be decomposed, combined or partially merged, so the actual execution order may change depending on the actual situation.

Some embodiments of the present application are described in detail below with reference to the drawings. The embodiments below and the features in the embodiments may be combined with each other where no conflict arises.
In intelligent target-following scenarios, a target to be followed must first be determined. This is generally done by recognizing an image with a target detection algorithm and then following the target to be followed in the image. However, the detection accuracy of such algorithms is limited: the target to be followed cannot be determined quickly and accurately, and it is easily lost while being followed. At present, target detection algorithms recognize poorly, follow the target unstably, and provide a poor user experience.

To solve the above problems, embodiments of the present application provide a method, apparatus, system, device and storage medium for determining a target following strategy. A target to be followed is recognized in the image captured by the photographing device to obtain a recognition result; from the recognition result, the feature of the target to be followed and the following strategy corresponding to that target are determined; and the target is then followed according to its following strategy. The whole process determines the target to be followed quickly and accurately and follows it reliably, which greatly improves the user experience.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application. As shown in FIG. 1, the scenario includes a handheld gimbal 100 and a photographing device 200 mounted on the handheld gimbal 100. The handheld gimbal 100 includes a handle portion 101 and a gimbal 102 provided on the handle portion 101; the gimbal 102 is configured to carry the photographing device 200, which may be integrated with the gimbal 102 or externally connected to it. By way of example, the photographing device 200 may be a smartphone, a camera such as a single-lens reflex camera, or a camera module. The handheld gimbal 100 can carry the photographing device 200 in order to fix it, to change its height, inclination and/or direction, or to hold it stably in a given attitude and control it to shoot.

In an embodiment, the handheld gimbal 100 is communicatively connected to the photographing device 200. The handheld gimbal 100 may be connected to the photographing device 200 via a control cable, for example a shutter-release cable; the type of cable is not limited here and may, for example, be a Universal Serial Bus (USB) cable. The handheld gimbal 100 may also be connected to the photographing device 200 wirelessly, for example by establishing a communication connection between a first Bluetooth module built into the handheld gimbal 100 and a second Bluetooth module built into the photographing device 200.

In an embodiment, the gimbal 102 includes three axis motors, namely a pitch-axis motor 1021, a yaw-axis motor 1022 and a roll-axis motor (not shown in FIG. 1), which adjust the balancing attitude of the photographing device 200 carried on the gimbal 102 so that stable, smooth footage can be shot. The gimbal 102 is further provided with an inertial measurement unit (IMU), for example at least one of an accelerometer or a gyroscope, which may be used to measure the attitude and acceleration of the gimbal 102 so that its attitude can be adjusted accordingly. In an embodiment, the handle portion 101 is also provided with an IMU, for example including at least one of an accelerometer or a gyroscope, which may be used to measure the attitude and acceleration of the handle portion 101 so that the attitude of the gimbal 102 can be adjusted according to the attitude of the handle portion 101 and the attitude of the gimbal 102.

In an embodiment, the handheld gimbal 100 includes a processor configured to process input control instructions, to transmit and receive signals, and the like. The processor may be arranged inside the handle portion 101. Optionally, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.
In an embodiment, the handheld gimbal 100 has multiple working modes, for example a follow mode, a target tracking mode, a lock mode, a sport mode and/or a sleep mode, and performs different actions in different working modes. For example, if the handheld gimbal 100 is in the follow mode, the photographing device 200 is controlled to shoot in a following manner, where the follow mode may be a shooting mode in which the gimbal 102 follows the movement of the handle portion 101. If the handheld gimbal 100 is in the target tracking mode, then once the target to be followed has been determined, the gimbal 102 starts to rotate automatically so that the angle of the photographing device 200 always follows the target to be followed, keeping it in the captured frame.

For example, the lock mode locks the three axes of the gimbal 102 so that none of them follows; the sport mode means that the gimbal 102 follows at a preset speed, for example the maximum speed of its three axes; and the sleep mode puts the handheld gimbal into a sleep state. In the lock mode or the sport mode, the object followed by the gimbal may be the handle portion 101, the target to be followed, or something else; it may be set as required and is not specifically limited here.

In an embodiment, the target to be followed may be determined as follows: the processor of the handheld gimbal 100 acquires the image captured by the photographing device 200, determines the target to be followed from the captured image by detecting it, and identifies the feature of the target to be followed and the following strategy corresponding to it; the photographing device 200 then follows the target according to that strategy. It can be understood that the target to be followed may also be determined after the user performs an operation such as box selection or tap selection on the handheld gimbal 100 side, or according to a specific gesture in the image captured by the photographing device 200, or according to the position in the frame of the target to be followed in the image captured by the photographing device 200. This is not specifically limited here.
In an embodiment, the handle portion 101 is further provided with a control key so that the user can operate it to control the gimbal 102 or the photographing device 200. The control key may be, for example, a button, a trigger, a knob or a joystick, and may also take the form of other physical or virtual keys, where a virtual key may be a virtual button provided on a touchscreen for interacting with the user. A joystick may be used to control the motion of at least one rotation axis and thus the motion of the photographing device 200; it can be understood that the joystick may also serve other functions. The number of control keys may be one or more. When there is a single control key, different operations on it, such as different numbers of presses, may generate different control instructions; when there are multiple control keys, for example a first control key, a second control key and a third control key, different keys are used to generate different control instructions.

In an embodiment, the control keys include a follow control key used to make the handheld gimbal 100 enter or exit the target tracking mode. For example, in response to a first pressing operation by the user on the follow control key, if the handheld gimbal 100 is not in the target tracking mode, the processor of the handheld gimbal 100 puts it into the target tracking mode, acquires the image captured by the photographing device 200, determines the target to be followed from the captured image by detecting it, and identifies the feature of the target and the corresponding following strategy; the photographing device 200 then follows the target according to that strategy. The user can thus quickly put the handheld gimbal 100 into the target tracking mode via the follow control key and track the target to be followed.

In an embodiment, the handheld gimbal 100 further includes a display apparatus for displaying the image captured by the photographing device 200. While the handheld gimbal 100 is in the target tracking mode and is tracking the target to be followed, the processor controls the display apparatus to display the captured image and marks the target to be followed in the displayed image; determines the following priority of each candidate target in the image according to the recognition result; and, in response to a second pressing operation on the follow control key, re-determines the target to be followed according to the following priorities of the candidate targets and marks the newly determined target in the image. The follow control key thus makes it convenient for the user to switch the target to be followed.
Referring to FIG. 2, FIG. 2 is a schematic diagram of another scenario for implementing the method for determining a target following strategy provided by an embodiment of the present application. As shown in FIG. 2, the scenario includes a control terminal 300 and a movable platform 400 that are communicatively connected. The control terminal 300 includes a display apparatus 310 for displaying images sent by the movable platform 400. It should be noted that the display apparatus 310 includes a display screen provided on the control terminal 300 or a display independent of the control terminal 300; the independent display may be a mobile phone, a tablet computer, a personal computer, or another electronic device with a display screen, where the display screen includes an LED display screen, an OLED display screen, an LCD display screen, and the like.

In an embodiment, the movable platform 400 includes a platform body 410, a gimbal 420 mounted on the platform body, and a power system 430. The gimbal 420 is configured to carry a photographing device 500. The power system 430 includes a motor 431 and a propeller 432; the motor 431 drives the propeller 432 to rotate, thereby providing motive power for the movable platform. The gimbal 420 includes three axis motors, namely a yaw-axis motor 421, a pitch-axis motor 422 and a roll-axis motor 423, which adjust the balancing attitude of the photographing device 500 carried on the gimbal 420 so that stable, high-precision footage can be shot anytime and anywhere.

In an embodiment, the movable platform 400 further includes a processor configured to process input control instructions, to transmit and receive signals, and the like. The processor may be arranged inside the movable platform 400. Optionally, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.

In an embodiment, the control terminal 300 includes a follow control key used to make the movable platform 400 enter or exit the target tracking mode. When the movable platform 400 is in the target tracking mode, it can control the motion of the gimbal 420 so that the angle of the photographing device 500 always follows the target to be followed, keeping it in the captured frame. For example, in response to a first pressing operation by the user on the follow control key, the control terminal 300 generates a target-follow start instruction and sends it to the movable platform 400; the movable platform 400 receives the instruction and passes it to the processor, which puts the movable platform 400 into the target tracking mode according to the instruction, acquires the image captured by the photographing device 500, detects the captured image, determines the target to be followed from it, identifies the feature of the target and the corresponding following strategy, and controls the motion of the gimbal 420 according to that strategy so that the angle of the photographing device 500 always follows the target to be followed, keeping it in the captured frame.
The movable platform includes movable robots, unmanned aerial vehicles (UAVs), unmanned vehicles and the like. Where the movable platform 400 is a UAV, the power system 430 enables the UAV to take off vertically from the ground or land vertically on it without any horizontal movement of the UAV (for example, without taxiing on a runway). Optionally, the power system 430 may allow the UAV to hover at a preset position and/or orientation in the air. One or more of the power systems 430 may be controlled independently of the others when being controlled; optionally, one or more of the power systems 430 may be controlled simultaneously. For example, the UAV may have multiple horizontally oriented power systems 430 to provide lift and/or thrust for tracking a target; the horizontally oriented power systems 430 may be actuated to give the UAV the ability to take off vertically, land vertically and hover.

In an embodiment, one or more of the horizontally oriented power systems 430 may rotate clockwise while one or more of the others rotate counterclockwise; for example, the number of clockwise-rotating power systems 430 equals the number of counterclockwise-rotating ones. The rotation rate of each horizontally oriented power system 430 can vary independently to realize the lift and/or thrust produced by each, thereby adjusting the spatial orientation, velocity and/or acceleration of the UAV (for example, rotation and translation in up to three degrees of freedom).

In an embodiment, the UAV may further include a sensing system, which may include one or more sensors to sense the spatial orientation, velocity and/or acceleration of the UAV (for example, rotation and translation in up to three degrees of freedom), angular acceleration, attitude, position (absolute or relative), and the like. The one or more sensors include GPS sensors, motion sensors, inertial sensors, proximity sensors or image sensors. Optionally, the sensing system may also be used to collect data about the environment of the UAV, such as weather conditions, potential obstacles to be approached, locations of geographical features, locations of man-made structures, and so on. In addition, the UAV may include landing gear, which is the part of the UAV in contact with the ground when it lands; the landing gear may be retracted in flight (for example while the UAV is cruising) and lowered only for landing, or it may be fixedly mounted on the UAV and remain lowered at all times.
In an embodiment, the movable platform 400 can communicate with the control terminal 300, enabling data interaction between the control terminal 300 and the movable platform 400, for example movement control of the movable platform 400 and control of its payload (when the payload is the photographing device 500, the control terminal 300 can control the photographing device 500). The control terminal 300 may communicate with the movable platform 400 and/or the payload; the communication between the movable platform 400 and the control terminal 300 may be wireless, and direct communication may be provided between them without any intermediate device or network.

In an embodiment, indirect communication may be provided between the movable platform 400 and the control terminal 300, occurring by means of one or more intermediate devices or networks. For example, indirect communication may use a telecommunications network and may be carried out by means of one or more routers, communication towers, satellites, or any other intermediate devices or networks. Examples of communication types may include, but are not limited to, communication via the Internet, a local area network (LAN), a wide area network (WAN), Bluetooth, near-field communication (NFC) technology, networks based on mobile data protocols such as General Packet Radio Service (GPRS), GSM, Enhanced Data GSM Environment (EDGE), 3G, 4G or Long-Term Evolution (LTE), infrared (IR) communication technology, and/or Wi-Fi, and may be wireless, wired, or a combination thereof.

The control terminal 300 may include, but is not limited to, a smartphone/mobile phone, a tablet computer, a personal digital assistant (PDA), a desktop computer, a media content player, a video game station/system, a virtual reality system, an augmented reality system, a wearable device (for example a watch, glasses, gloves, headwear such as a hat, a helmet, a virtual reality headset, an augmented reality headset, a head-mounted device (HMD) or a headband, a pendant, an armband, a leg band, shoes or a vest), a gesture recognition device, a microphone, any electronic device capable of providing or rendering image data, or any other type of device. The control terminal 300 may be a handheld terminal and may be portable, carried by a human user. In some cases, the control terminal 300 may be remote from the human user, who controls it using wireless and/or wired communication.
The method for determining a target following strategy provided by the embodiments of the present application is described in detail below with reference to the scenario of FIG. 1 or FIG. 2. It should be noted that the scenarios in FIG. 1 and FIG. 2 only serve to explain the method for determining a target following strategy provided by the embodiments of the present application and do not limit its application scenarios.

Referring to FIG. 3, FIG. 3 is a schematic flowchart of the steps of a method for determining a target following strategy provided by an embodiment of the present application.

As shown in FIG. 3, the method for determining a target following strategy includes steps S301 to S307.
Step S301: acquire an image captured by the photographing device.

Step S303: determine a target to be followed from the image captured by the photographing device by detecting the image.

Step S305: determine, according to the feature of the target to be followed, a following strategy corresponding to the target to be followed.

Step S307: follow the target to be followed with the photographing device according to the following strategy.
When the following strategy for a target needs to be determined, the image captured by the photographing device is acquired, the target to be followed is determined in it, the feature of that target is identified, the following strategy corresponding to it is determined, and the target is followed according to that strategy, thereby improving the accuracy and reliability of following different targets. The target determination apparatus is configured to be communicatively connected to the photographing device, which may be a smartphone, a camera such as a single-lens reflex camera, or a camera module. The image captured by the photographing device may be the original image or a processed image; the processing may be noise reduction or the application of a filter. The target to be followed may be a baby or an adult. The following strategy may be a baby following strategy or an adult following strategy, where the following speed of the baby following strategy is slower than that of the adult following strategy. The target to be followed and the following strategy may be set according to the actual situation.

In an embodiment, the target to be followed is determined from the image captured by the photographing device by detecting the image; it is determined that the captured image contains a first target object as the target to be followed, and the following strategy corresponding to the first target object is determined according to its feature, after which the target to be followed is followed.

The feature of the target to be followed may be the facial features of the target to be followed; and/or the body contour of the target to be followed; and/or the motion attributes of the target to be followed. The feature of the target to be followed may be set according to the actual situation and is not specifically limited in the embodiments of the present application.
In an embodiment, the feature of the target to be followed is determined from its facial features. For example, when the distance between the eyes of the target is small, the target may be determined to be a baby; when the eye distance of the target in the captured image is large, the target may be determined to be an adult. The facial features of the target to be followed may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

In an embodiment, the feature of the target to be followed is determined from its body contour. For example, when the contour of the target is that of a swaddled infant, the target may be determined to be a baby; when the contour of the target in the captured image shows outstretched limbs, the target may be determined to be an adult. The body contour of the target to be followed may be set according to the actual situation and is not specifically limited in the embodiments of the present application.

In an embodiment, the feature of the target to be followed is determined from its motion attributes. For example, when the target moves slowly, the probability that it is a baby is relatively high; when the target in the captured image moves vigorously, the probability that it is an adult is relatively high. The motion attributes of the target to be followed may be set according to the actual situation and are not specifically limited in the embodiments of the present application.
In an embodiment, the image captured by the photographing device contains a baby as the target to be followed. The baby to be followed is determined from the captured image by detecting it, and according to the feature of this target the following strategy corresponding to it is determined to be the baby following strategy.

The baby following strategy may follow the baby to be followed at a relatively slow following speed according to the image currently captured by the photographing device, the image previously captured by it and the image subsequently captured by it.

In an embodiment, the baby following strategy may, based on the currently captured image, the previously captured image and the subsequently captured image, precisely adjust the photographing device toward the position of the target to be followed when the currently captured image of the target is obtained.
In an embodiment, the image captured by the photographing device contains an adult as the target to be followed. The adult to be followed is determined from the captured image by detecting it, and according to the feature of this target the following strategy corresponding to it is determined to be the adult following strategy.

The adult following strategy may follow the adult to be followed at a relatively fast following speed according to the image currently captured by the photographing device, the image previously captured by it and the image subsequently captured by it.

In an embodiment, the adult following strategy may, based on the currently captured image, the previously captured image and the subsequently captured image, calculate the motion state of the target to be followed from the currently and previously captured images before the target moves, calculate the subsequent position of the target from that motion state, finely adjust the photographing device toward the subsequent position of the target in advance, and precisely adjust the photographing device toward the position of the target when the currently captured image is obtained.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of the steps of determining a target object feature library provided by an embodiment of the present application.

As shown in FIG. 4, determining the target object feature library includes steps S401 to S405.

Step S401: determine a first target object in a model image.

The model image is an image used to determine target objects; in other words, an image containing the first target object may serve as a model image. The first target object may be a baby or an adult.

In an embodiment, determining the first target object includes determining an attribute of the first target object and/or an image region corresponding to the first target object.

The attribute of the first target object indicates its category. For example, the attribute of the first target object may be baby; as another example, it may be adult. The present invention is not limited to this, however, and the category of the first target object may also include other categories; for example, the attribute of the first target object may further include middle-aged person, elderly person, child and so on. The image region corresponding to the first target object may be the region in which the first target object appears in the image.

In an embodiment, once the image region corresponding to the first target object has been determined, a position marker may be used to indicate that region in the model image. The position marker indicating the image region of the first target object in the model image includes displaying a rectangular box and/or an identification icon in the region where the first target object is located.

By way of example, as shown in FIG. 5, the model image includes a first target object 501, a second target object 503 and a background 505, and a rectangular box 507 is displayed in the region where the first target object 501 is located. The manner of marking the first target object may be designed according to the actual situation and is not specifically limited in the embodiments of the present application.

In an embodiment, the attribute of the first target object is determined to be baby. The attribute of the first target object may be set according to the actual situation and is not specifically limited in the embodiments of the present application.
Step S403: extract features of the first target object.

The features of the first target object may be facial features; and/or a body contour; and/or motion attributes. The features of the first target object may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

In an embodiment, when the determined first target object is a baby, its features are extracted; the feature may be the baby's facial features, the baby's body contour, or the baby's motion attributes. The features of the first target object may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

In an embodiment, when the determined first target object is an adult, its features are extracted; the feature may be the adult's facial features, the adult's body contour, or the adult's motion attributes. The features of the first target object may be set according to the actual situation and are not specifically limited in the embodiments of the present application.
Step S405: establish a feature library for the first target object according to the extracted features.

The feature library of the first target object may contain the facial features of the first target object; and/or the body contour of the first target object; and/or the motion attributes of the first target object.

In an embodiment, when the determined first target object is a baby, its features are extracted and a feature library for the baby first target object is established, the library including: the baby's facial features; and/or the baby's body contour; and/or the baby's motion attributes. The contents of the feature library may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

In an embodiment, when the determined first target object is an adult, its features are extracted and a feature library for the adult first target object is established, the library including: the adult's facial features; and/or the adult's body contour; and/or the adult's motion attributes. The contents of the feature library may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

The above target object feature library needs to be determined before the target to be followed is identified.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of sub-steps of the method for determining a target following strategy in FIG. 3.

As shown in FIG. 6, after the target object feature library has been determined, the method for determining a target following strategy may include sub-steps S601 to S605.

Sub-step S601: determine the features of the first target object.

In an embodiment, when the first target object is a baby, its features need to be determined. The features may include the baby's facial features; and/or the baby's body contour; and/or the baby's motion attributes. The first target object may be set according to the actual situation and is not specifically limited in the embodiments of the present application.

Sub-step S603: determine, according to the features of the first target object, whether a first candidate target object exists in the model image.

That is, from a given model image and according to the above features of the first target object, it can be determined whether the model image contains a first candidate target object having the features of the first target object.

Sub-step S605: if the model image includes the first candidate target object, determine the probability that the first candidate target object is the first target object, so as to optimize the method for determining a target following strategy.

In an embodiment, when the first target object is a baby, its features may be the baby's facial features; and/or the baby's body contour; and/or the baby's motion attributes. According to these features it is determined whether a first candidate target object having the features of the first target object exists in the model image; when such a first candidate target object exists in the model image, the probability that the first candidate target object is a baby is judged.
As shown in FIG. 7, determining the probability that the first candidate target object is the first target object, so as to optimize the method for determining target following, includes sub-steps S701 to S705.

Sub-step S701: determine a first probability that the first candidate target object and the first target object are of the same category.

Sub-step S703: determine a second probability that the first candidate target object is at a first position.

Determining the probability that the first candidate target object is the first target object may be as follows: determine a first probability that the first candidate target object and the first target object are of the same category, where the first probability may be the degree of consistency between the features of the first candidate target object and those of the first target object; and determine a second probability that the first candidate target object and the first target object are at a first position, where the first position may be the position of the first target object in the image and the second probability may be based on the distance between the position of the first candidate target object in the image and the first position.

The first probability of the first candidate target object may be determined according to a first preset mapping relationship and the degree of consistency with which the first candidate target object and the first target object are of the same category. The first preset mapping relationship includes, for each degree of consistency, the corresponding first probability that the first candidate target object and the first target object are of the same category. For example, consistency degrees of 60%, 70%, 90% and 95% between the features of the first candidate target object and those of the first target object correspond to first probabilities of 60, 70, 90 and 95 points respectively; thus, when the degree of consistency is 95%, the first probability that the first candidate target object and the first target object are of the same category is 95 points.

The second probability that the first candidate target object and the first target object are at the first position may be determined according to a second preset mapping relationship and the distance between the position of the first candidate target object in the image and the first position, i.e. the position of the first target object in the image. The second preset relationship includes the values corresponding to different distances; for example, distances of 0.5 cm, 1 cm and 1.5 cm correspond to second probabilities of 95, 90 and 85 points respectively. Thus, when the distance between the position of the first candidate target object in the image and the first position is 0.5 cm, the second probability that the first candidate target object and the first target object are at the first position is 95 points.
Sub-step S705: obtain, according to the first probability and the second probability, a prediction result of whether the first candidate target object is the first target object.

In an embodiment, the prediction result of whether the first candidate target object is the first target object may be determined from the first probability and the second probability as follows: if only the first probability that the first candidate target object and the first target object are of the same category is considered, the first probability is taken as the prediction result; if only the second probability that the first candidate target object is at the first position is considered, the second probability is taken as the prediction result; and if both the first probability and the second probability are considered, the sum of the first probability and the second probability is taken as the prediction result of whether the first candidate target object is the first target object.

In an embodiment, if both the first probability and the second probability are considered, the product of a first preset weight and the first probability and the product of a second preset weight and the second probability are calculated; the sum of these two products is calculated and taken as the prediction result of whether the first candidate target object is the first target object, where the sum of the first preset weight and the second preset weight is 1.
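As a minimal sketch of the weighted combination just described, the following Python snippet combines the two probabilities into one prediction score. The function name and the example weights (0.6 and 0.4) are assumptions made here for illustration; the application only requires that the two preset weights sum to 1.

```python
def prediction_score(first_prob, second_prob, w1=0.6, w2=0.4):
    """Weighted sum of the first probability (same-category consistency)
    and the second probability (position consistency); the two preset
    weights must sum to 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return w1 * first_prob + w2 * second_prob

# 95-point category consistency and 90-point position consistency:
score = prediction_score(95, 90)
```

With these illustrative weights the score is 0.6 * 95 + 0.4 * 90 = 93 points; setting one weight to 1 and the other to 0 reproduces the cases where only a single probability is considered.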
The method for determining a target following strategy can be optimized according to the first probability, the second probability and the prediction result.

In an embodiment, an objective function of the method for determining a target following strategy may be derived from the first probability, the second probability and the prediction result, and the method may be updated according to that objective function. The objective function may be a function representing the reliability of the method for determining a target following strategy, a function correcting the method, or a function used to optimize how the target to be followed is determined. Updating the method may mean incorporating the objective function into the method for determining the target following strategy, making the method more reliable.
In an embodiment, the second target object in the image may be determined as the target to be followed as follows: if multiple second target objects exist in the image captured by the photographing device, the category of each second target object is determined, and the target to be followed is determined from the multiple second target objects according to the current shooting mode of the photographing device and the category of each second target object. The shooting modes of the photographing device include a family shooting mode, a portrait shooting mode, a pet shooting mode, a plant shooting mode, a vehicle shooting mode and a panorama shooting mode. Determining the target to be followed on the basis of the current shooting mode and the category of each second target object makes the determined target better match the user's needs and greatly improves the user experience.

In an embodiment, determining the target to be followed from the multiple second target objects according to the current shooting mode of the photographing device and the category of each second target object may be as follows: determine whether any of the second target objects matches the current shooting mode; if only one second target object matches the current shooting mode, determine it as the target to be followed; if at least two second target objects match the current shooting mode, determine the following priority of each matching second target object and determine the second target object with the highest following priority as the target to be followed.
As shown in FIG. 8, determining the target to be followed from the image captured by the photographing device may involve selecting it through a user operation, including sub-steps S801 to S803.

Sub-step S801: in response to a click operation by the user, identify the target to be followed in the image region near the click position.

In response to the user's click operation on the image captured by the photographing device, the target to be followed can be determined from the image. Click operations include single-click, double-click, long-press and the like.

Sub-step S803: annotate the category of the target to be followed and/or the position of the target to be followed.

The category of the target to be followed may be baby, adult, plant and so on. When the category of the target to be followed is baby, the feature of the target may be the baby's facial features; and/or the baby's body contour; and/or the baby's motion attributes. The category and the features of the target to be followed may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

In an embodiment, as shown in FIG. 9, the user clicks on a position 901 in the image captured by the photographing device, and a target to be followed 905 is identified in the image region 903 near the position 901.

In an embodiment, as shown in FIG. 9, the target to be followed 905 is identified in the image region near the position clicked by the user, and its category 907, for example baby, is annotated; and/or its position is marked, for example with a rectangular box 909 and/or an identification icon 911.
As shown in FIG. 10, determining the target to be followed from the image captured by the photographing device may involve selecting it through a user operation, including sub-steps S1001 to S1003.

Sub-step S1001: in response to a first pressing operation by the user on a mode selection key, identify the target to be followed in the central region of the image.

The mode selection key is used to select different shooting modes, which include a family shooting mode, a portrait shooting mode, a pet shooting mode, a plant shooting mode, a vehicle shooting mode, a panorama shooting mode and the like.

In response to the user's first pressing operation on the follow control key, the image captured by the photographing device is acquired and displayed by the display apparatus, so that the target determination apparatus can determine the target to be followed from the image. The first pressing operation includes single-press, double-press, long-press and the like.

Sub-step S1003: annotate the category of the target to be followed and/or mark the position of the target to be followed.

In an embodiment, in response to the user's first pressing operation on the mode selection key, when the mode selected by the user is the family shooting mode, it is identified whether a baby to be followed exists in a preset region of the image captured by the photographing device. The preset region of the captured image may be the central region of the image; it may be set according to the actual situation and is not specifically limited in the embodiments of the present application.

In an embodiment, when the mode selected by the user is the family shooting mode, it is identified whether a baby target object exists in the central region of the image captured by the photographing device; when the recognition result is that a baby exists, the category of the target to be followed is annotated as baby, and/or the position of the target to be followed is marked.
In an embodiment, the method for determining a target following strategy may determine the target to be followed from the image captured by the photographing device by detecting the image and then identify the features of the target to be followed.

As shown in FIG. 11, determining the target to be followed from the image captured by the photographing device and identifying its features includes sub-steps S1101 to S1113.

Sub-step S1101: compare the target to be followed with a first target object in a model image.

In one embodiment, the first target object may be a baby or an adult.

Sub-step S1103: if a first feature of the target to be followed is similar to a first target feature of the first target object, mark the target to be followed as belonging to a first category.

The first feature and the first target feature include at least one of facial features, body contour and motion attributes.
Step S1105: determine, according to the feature of the target to be followed, the following strategy corresponding to the target to be followed.

Step S1107: if the target to be followed is marked as belonging to the first category, the following strategy corresponding to the target to be followed is a first following strategy.

Step S1109: if the first feature of the target to be followed is not similar to the first target feature of the first target object, compare the target to be followed with a second target object in the model image.

Step S1111: if the first feature of the target to be followed is similar to a second target object feature of the second target object, mark the target to be followed as belonging to a second category.

Here, the first target object and the second target object are different.

Step S1113: if the target to be followed is marked as belonging to the second category, the following strategy corresponding to the target to be followed is a second following strategy.

The first following strategy and the second following strategy are different; for example, the following speed of the first following strategy is slower than that of the second following strategy.
In an embodiment, the image captured by the photographing device is detected, the target to be followed is determined from it, and the features of the target to be followed are identified, for example its facial features; and/or its body contour; and/or its motion attributes. Illustratively, when the first feature of the target to be followed is a facial feature, the first target object in the model image is a baby, and the first target feature of the first target object is a facial feature, the facial features of the target to be followed are compared with the facial features in the model image. When the facial features of the target to be followed are similar to those in the model image, the target to be followed is marked as a baby. The following strategy corresponding to the target to be followed is then determined from its features; illustratively, when the target to be followed is a baby, given that a baby's motion attributes are relatively slow, the following strategy corresponding to the target may be determined to be the first following strategy.

In an embodiment, the image captured by the photographing device is detected, the target to be followed is determined from it, and the features of the target to be followed are identified, for example its facial features; and/or its body contour; and/or its motion attributes. Illustratively, when the first feature of the target to be followed is a facial feature, the first target object in the model image is a baby, and the first target feature of the first target object is a first facial feature, the facial features of the target to be followed are compared with the facial features in the model image. When the facial features of the target to be followed are not similar to the facial features of the first target feature in the model image, the facial features of the target to be followed are compared with the second target feature of the second target object in the model image, where the second target feature of the second target object is likewise a facial feature; that is, the facial features of the target to be followed are compared with the second facial feature of the second target object. For example, when the second target object in the model image is an adult, the facial features of the target to be followed are compared with the adult facial features in the model image; when they are similar, the target to be followed is marked as an adult. The following strategy corresponding to the target to be followed is then determined from its features; illustratively, when the target to be followed is an adult, given that an adult's motion attributes are relatively fast, the following strategy corresponding to the target may be determined to be the second following strategy.
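The compare-first-then-second logic above can be sketched in a few lines. This is only an illustration under stated assumptions: the application does not specify a similarity measure or threshold, so cosine similarity over abstract feature vectors and the 0.8 threshold are placeholders chosen here, and all names are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_target(target_feat, model_objects, threshold=0.8):
    """Compare the target's feature vector with the first target object,
    then with the second, in order; return the category of the first
    model object whose similarity reaches the threshold, or None when the
    target matches neither (it cannot be classified)."""
    for category, feat in model_objects:
        if cosine(target_feat, feat) >= threshold:
            return category
    return None

# Hypothetical two-dimensional feature vectors for the two model objects:
model_objects = [("baby", [1.0, 0.05]), ("adult", [0.0, 1.0])]
label = classify_target([1.0, 0.0], model_objects)
```

A target whose features resemble the baby object is labelled in the first comparison; one resembling neither the baby nor the adult object yields None, which corresponds to the case where the target cannot be marked with either category.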
As shown in FIG. 12, in the method for determining a target following strategy, identifying the features of the target to be followed includes sub-steps S1201 to S1203.

Sub-step S1201: compare the target to be followed with multiple target objects in a model image to determine a first target object from among the multiple target objects.

The target objects may include a baby target object and/or an adult target object; the first target object may be a baby or an adult. The features of the first target object and of the target to be followed may be facial features; and/or body contours; and/or motion attributes, and the features of the first target object are similar to the features of the target to be followed.

Sub-step S1203: determine, according to the category of the first target object, the following strategy corresponding to the target to be followed.

The category of the first target object is located in a preset category library, which may include a baby preset category library and/or an adult preset category library. The preset category library includes the categories corresponding to each of the multiple target objects; for example, the baby target object corresponds to the baby preset category library and the adult target object corresponds to the adult preset category library.

In an embodiment, the target objects in the model image are a baby target object and an adult target object, and the target to be followed is compared with both. Illustratively, when the facial features of the target to be followed are similar to those of the baby target object, and/or its body contour is similar to that of the baby target object, and/or its motion attributes are similar to those of the baby target object, the baby target object is determined to be the first target object. According to the category of the first target object, namely baby, the following strategy corresponding to the target to be followed is determined to be the baby following strategy.

In an embodiment, the target objects in the model image are a baby target object and an adult target object, and the target to be followed is compared with both. Illustratively, when the facial features of the target to be followed are similar to those of the adult target object, and/or its body contour is similar to that of the adult target object, and/or its motion attributes are similar to those of the adult target object, the adult target object is determined to be the first target object. According to the category of the first target object, namely adult, the following strategy corresponding to the target to be followed is determined to be the adult following strategy.
As shown in FIG. 13, in the method for determining a target following strategy, determining the target to be followed from the image captured by the photographing device by detecting the image includes sub-steps S1301 to S1307.

Sub-step S1301: determine a first target and a second target from the image captured by the photographing device.

Here, the first target and the second target are different.

Sub-step S1303: compare the first target with multiple target objects in the model image to determine a first recognition result.

Sub-step S1305: compare the second target with the multiple target objects to determine a second recognition result.

Sub-step S1307: determine the target to be followed from the first target and the second target according to the first recognition result and the second recognition result.

The first recognition result and the second recognition result may be the categories of the first target and the second target; and/or their frame ratios in the image captured by the photographing device; and/or their positions in the captured image. Illustratively, the position of the first target in the captured image may be its distance from a preset position, where the preset position may be the centre of the image or the optimal composition position of the image.
The following priorities of the first target and the second target can be determined according to the first recognition result and the second recognition result.

In an embodiment, when the first recognition result indicates that the first target belongs to a first category, for example baby, and the second recognition result indicates that the second target belongs to a second category, for example adult, the following priority of the first target is determined to be level one and that of the second target level two, where level one corresponds to the highest priority, and the first target is taken as the target to be followed. The first category and the second category may be set according to the actual situation and are not specifically limited in the embodiments of the present application.

In an embodiment, when the first recognition result is that the category of the target object corresponding to the first target is the first target object and the second recognition result is that the category of the target object corresponding to the second target is the second target object, the first recognition result indicates that the first target belongs to the first category.

In an embodiment, when the first recognition result indicates that the first target occupies 30% of the frame of the image captured by the photographing device and the second recognition result indicates that the second target object occupies 10% of the frame, the following priority of the first target is determined to be level one and that of the second target object level two, where level one corresponds to the highest priority, and the first target is taken as the target to be followed.

In an embodiment, when the first recognition result indicates that the distance between the position of the first target in the captured image and the centre position is a, and the second recognition result indicates that the distance between the position of the second target object in the captured image and the centre position is b, where a < b, the following priority of the first target is determined to be level one and that of the second target object level two, where level one corresponds to the highest priority, and the first target is taken as the target to be followed.
In an embodiment, the first following priority of each target may be determined according to the category of the target object corresponding to it, its frame ratio in the image and/or its position in the image, as follows: determine a first following index for each target according to the category of its corresponding target object; determine the distance between each target's position and the centre of the image and determine a second following index from that distance; determine a third following index for each target according to its frame ratio in the image; and determine each target's overall following index according to its first following index, second following index and/or third following index. The following priority of each target is then determined by comparing the following indices of the different targets.

The first following index of the first target may be determined according to a first preset mapping relationship and the category of the target object corresponding to the first target. The first preset mapping relationship includes the following index corresponding to each category of target object; for example, the categories baby and adult correspond to first following indices of 90 and 80 points respectively, so if the category of the target object corresponding to the first target is baby, the first following index of the first target is 90 points.

The second following index of the first target may be determined according to a second preset mapping relationship and the distance between the first target's position and the centre of the image. The second preset relationship includes the following index corresponding to each distance; for example, distances of 0.5 cm, 1 cm and 1.5 cm correspond to following indices of 90, 80 and 70 points respectively, so if the distance between the first target's position and the centre of the image is 0.5 cm, the second following index of the first target is 90 points.

The third following index of the first target may be determined according to a third preset mapping relationship and the first target's frame ratio in the image. The third preset mapping relationship includes the following index corresponding to each frame ratio; for example, frame ratios of 5%, 10% and 12% correspond to following indices of 60, 65 and 70 points respectively, so if the first target occupies 12% of the frame, its third following index is 70 points. The first, second and third preset mapping relationships may be set according to the actual situation and are not specifically limited in the embodiments of the present application.
In an embodiment, the overall following index of the first target may be determined from the first, second and/or third following indices as follows: if only the category of the target object corresponding to the first target is considered, the first following index is taken as the overall following index of the first target; if only the first target's position in the image is considered, the second following index is taken; if only the first target's frame ratio is considered, the third following index is taken; if both the category of the first target and its position in the image are considered, the sum of the first and second following indices is taken; if both the category of the corresponding target object and the frame ratio are considered, the sum of the first and third following indices is taken; if both the position in the image and the frame ratio are considered, the sum of the second and third following indices is taken; and if the category of the corresponding target object, the position in the image and the frame ratio are all considered, the sum of the first, second and third following indices is taken as the overall following index of the first target.

In an embodiment, if the category of the target object corresponding to the first target and the first target's position in the image are considered, the product of a first preset weight and the first target's first following index and the product of a second preset weight and the first target's second following index are calculated; the sum of these two products is taken as the overall following index of the first target, where the sum of the first and second preset weights is 1.

Alternatively, if the category of the target object corresponding to the first target and the frame ratio are considered, the product of the first preset weight and the first following index and the product of a third preset weight and the third following index are calculated; the sum of these two products is taken as the overall following index of the first target, where the sum of the first and third preset weights is 1.

Alternatively, if the first target's position in the image and its frame ratio are considered, the product of the second preset weight and the second following index and the product of the third preset weight and the third following index are calculated; the sum of these two products is taken as the overall following index of the first target, where the sum of the second and third preset weights is 1.

Alternatively, if the category of the corresponding target object, the position in the image and the frame ratio are all considered, the products of the first preset weight and the first following index, of the second preset weight and the second following index, and of the third preset weight and the third following index are calculated; the sum of these three products is taken as the overall following index of the first target, where the sum of the first, second and third preset weights is 1.
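The three-way weighted combination just described can be sketched as follows. The weights (0.5, 0.3, 0.2) and the function name are assumptions made for illustration; the application only requires that the preset weights sum to 1, and the single-index and two-index cases fall out by zeroing the unused weights.

```python
def target_follow_index(cat_idx, pos_idx, ratio_idx, weights=(0.5, 0.3, 0.2)):
    """Combine the first (category), second (position) and third
    (frame-ratio) following indices into one overall following index;
    the preset weights must sum to 1."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    return w1 * cat_idx + w2 * pos_idx + w3 * ratio_idx

# A baby-category target (90 points) near the image centre (90 points)
# occupying 12% of the frame (70 points):
overall = target_follow_index(90, 90, 70)
```

With these illustrative weights the overall index is 0.5 * 90 + 0.3 * 90 + 0.2 * 70 = 86 points; comparing such indices across targets yields their following priorities.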
In an embodiment, when the shooting mode is a first mode, the priority of the first category is higher than that of the second category, and the first target corresponding to the first category is determined as the target to be followed. Illustratively, when the first mode is the family shooting mode, the priority of the first category, baby, is higher than that of the second category, adult, and the baby first target is determined as the target to be followed.

In an embodiment, when the first recognition result indicates that the first target belongs to the first category, for example baby, the second recognition result indicates that the second target belongs to the second category, for example adult, and the priority corresponding to the first recognition result is higher than the priority corresponding to the second recognition result, the first target is taken as the target to be followed.

In an embodiment, when the first recognition result is that the first target object does not exist in the image captured by the photographing device, the second target in the captured image is determined as the target to be followed according to the second recognition result.
The method for determining a target following strategy may include determining the target to be followed from the image captured by the photographing device. If the first target does not exist in the captured image but multiple second targets do, the second following priority of each second target is determined according to the degree of saliency of each of the multiple second targets in the captured image, and one of the multiple second targets is determined as the target to be followed according to the second following priority of each second target in the image. Here, a target object in the captured image that has the features of the first target is a second target.

In an embodiment, the degree of saliency of a second target in the captured image may be determined from the dwell time of the second target at a preset position in the image and/or from the saliency value between the image region occupied by the second target in the captured image and its neighbouring image regions. It can be understood that the longer the second target dwells at the preset position in the image, the higher its degree of saliency in the captured image, and the shorter the dwell time, the lower its saliency; likewise, the greater the saliency value between the second target's image region and the neighbouring regions, the higher its degree of saliency, and the smaller that value, the lower its saliency. The preset position, preset dwell time and preset saliency value may be set according to the actual situation or by the user; for example, the preset position may be the centre of the image, the preset dwell time 10 seconds, and the preset saliency value 50.

If multiple second targets exist in the image captured by the photographing device, the second following priority of each is determined according to the recognition results of the second targets, and the second target with the highest second following priority is determined as the target to be followed. For example, if the second following priorities of second targets A, B and C are level one, level two and level three respectively, with level one being the highest, second target A is selected as the target to be followed. The second following priority describes how likely a second target is to be selected as the target to be followed: the higher a second target's second following priority, the higher the probability of selecting it as the target to be followed, and the lower its priority, the lower that probability. Determining the second following priority of each second target and taking the second target with the highest priority as the target to be followed makes the determined target better match the user's needs and improves the user experience.
In an embodiment, the second following priority of each second target may be determined from the second recognition results of the second target objects as follows: obtain from the second recognition result the following probability of each second target, where the following probability is the probability, output when the objects in the image are recognized by the target detection algorithm, that an object is a second target; and/or obtain from the second recognition result the position of each second target in the image; and/or obtain from the second recognition result the position information of each second target in the image and determine from it the frame ratio each second target occupies; then determine the second following priority of each second target according to its following probability, its position in the image and/or its frame ratio.

For example, if the following probabilities of second targets A, B and C are 90%, 80% and 85% respectively, then from the relation 90% > 85% > 80% it can be determined that the second following priority of second target A is level one, that of second target C is level two, and that of second target B is level three. As another example, if second targets A, B and C occupy 8%, 12% and 15% of the frame respectively, then from the relation 15% > 12% > 8% it can be determined that the second following priority of second target A is level three, that of second target B is level two, and that of second target C is level one.

In an embodiment, the second following priority of each second target may be determined from its position in the image as follows: obtain the centre position of the image and determine, from each second target's position in the image and the centre position, the distance between them; then determine each second target's second following priority from the distance between its position and the centre position. For example, if the distances between the positions of second targets A, B and C in the image and the centre position are d, e and f respectively, with d > e > f, then from the relation d > e > f it can be determined that the second following priority of second target A is level three, that of second target B is level two, and that of second target C is level one.
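The distance-to-centre ranking above can be sketched as follows. The data layout (a list of dicts with hypothetical `name` and `pos` fields) and the function name are assumptions for illustration; the point is only that the candidate closest to the image centre receives the highest following priority.

```python
def rank_by_center_distance(targets, center):
    """Order candidate second targets by their distance to the image
    centre; the closest candidate comes first and so has the highest
    (level-one) second following priority."""
    def dist(t):
        (x, y), (cx, cy) = t["pos"], center
        return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
    return sorted(targets, key=dist)

# Hypothetical pixel positions for three candidates in a 640x480 frame:
candidates = [
    {"name": "A", "pos": (300, 260)},
    {"name": "B", "pos": (500, 400)},
    {"name": "C", "pos": (330, 250)},
]
ranking = rank_by_center_distance(candidates, center=(320, 240))
# ranking[0] is then chosen as the target to be followed
```

Here candidate C is nearest the centre and is ranked first, mirroring the d > e > f example in the text.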
In an embodiment, the second following priority of each second target may be determined from its following probability, its position in the image and/or its frame ratio as follows: determine a first following index for each second target from its following probability; determine the distance between each second target's position and the centre of the image and determine a second following index from that distance; determine a third following index from each second target's frame ratio, and determine each second target's overall following index from its first, second and/or third following indices; then determine each second target's second following priority from its overall following index.

The first following index of a second target may be determined according to a first preset mapping relationship and the second target's following probability. The first preset mapping relationship includes the following index corresponding to each following probability; for example, following probabilities of 60%, 70%, 90% and 95% correspond to following indices of 60, 70, 90 and 95 points respectively, so if a second target's following probability is 90%, its first following index is 90 points.

The second following index of a second target may be determined according to a second preset mapping relationship and the distance between the second target's position and the centre of the image. The second preset relationship includes the following index corresponding to each distance; for example, distances of 0.5 cm, 1 cm and 1.5 cm correspond to following indices of 90, 80 and 70 points respectively, so if the distance between a second target's position and the centre of the image is 1 cm, its second following index is 80 points.

The present application is not limited to this. According to another embodiment of the present application, the second following priority of each second target may be determined from its position in the image as follows: designate a predetermined position in the image and determine, from each second target's position in the image and the predetermined position, the distance between them; then determine each second target's second following priority from the distance between its position and the predetermined position.

The third following index of a second target may be determined according to a third preset mapping relationship and the second target's frame ratio in the image. The third preset mapping relationship includes the following index corresponding to each frame ratio; for example, frame ratios of 5%, 10% and 12% correspond to following indices of 60, 65 and 70 points respectively, so if a second target occupies 10% of the frame, its third following index is 65 points.

In an embodiment, the overall following index of a second target may be determined from the first, second and/or third following indices as follows: if only the second target's following probability is considered, the first following index is taken as its overall following index; if only its position in the image is considered, the second following index is taken; if only its frame ratio is considered, the third following index is taken; if its following probability and its position in the image are considered, the sum of the first and second following indices is taken; if its following probability and its frame ratio are considered, the sum of the first and third following indices is taken; if its position in the image and its frame ratio are considered, the sum of the second and third following indices is taken; and if its following probability, its position in the image and its frame ratio are all considered, the sum of the first, second and third following indices is taken as the second target's overall following index.
In an embodiment, if the second target's following probability and its position in the image are considered, the product of a first preset weight and the second target's first following index and the product of a second preset weight and the second target's second following index are calculated; the sum of these two products is taken as the second target's overall following index, where the sum of the first and second preset weights is 1.

Alternatively, if the second target's following probability and its frame ratio are considered, the product of the first preset weight and the first following index and the product of a third preset weight and the third following index are calculated; the sum of these two products is taken as the second target's overall following index, where the sum of the first and third preset weights is 1.

Alternatively, if the second target's position in the image and its frame ratio are considered, the product of the second preset weight and the second following index and the product of the third preset weight and the third following index are calculated; the sum of these two products is taken as the second target's overall following index, where the sum of the second and third preset weights is 1.

Alternatively, if the second target's following probability, its position in the image and its frame ratio are all considered, the products of the first preset weight and the first following index, of the second preset weight and the second following index, and of the third preset weight and the third following index are calculated; the sum of these three products is taken as the second target's overall following index, where the sum of the first, second and third preset weights is 1.
在进行目标跟随时,当第一目标为待跟随目标时,采用第一跟随模式进行跟随;当第二目标为待跟随目标时,采用第二跟随模式进行跟随。其中,第一目标的类别和第二目标的类别不同,第一跟随模式和第二跟随模式不同。第一目标、第二目标、第一跟随模式、第二跟随模式可基于实际情况进行设置,本申请实施例对此不做具体限定。
在一个实施例中,当第一目标为婴儿且第一目标为待跟随目标时,采用第一跟随模式进行跟随。
在一个实施例中,当第二目标为成人且第二目标为待跟随目标时,采用第二跟随模式进行跟随。
示例性地,上述第一跟随模式下拍摄设备的跟随速度可以比在上述第二跟随模式下拍摄设备的跟随速度慢。
拍摄设备采集到的图像包括拍摄设备当前采集到的图像、拍摄设备先前采集到的图像和拍摄设备后续采集到的图像,在第一跟随模式下,待跟随目标在拍摄设备先前采集到的图像中位于第一位置,待跟随目标在拍摄设备当前采集到的图像中位于第二位置,根据第二位置,调整拍摄设备的姿态;以及在第二跟随模式下,根据第一位置和第二位置的关系,预测待跟随目标在拍摄设备后续采集到的图像中的第三位置;根据第三位置,调整拍摄设备的姿态。
如图14所示,在一个实施例中,在第一跟随模式下,待跟随目标1401在拍摄设备先前采集到的图像中位于第一位置1403,在拍摄设备当前采集到的图像中位于第二位置1405。根据第二位置1405,调整拍摄设备的姿态使待跟随目标1401保持在图像中的预设区域。其中,该预设区域可以为画面的中心区域。
如图14所示,在一个实施例中,在第二跟随模式下,待跟随目标1401在拍摄设备先前采集到的图像中位于第一位置1403,在拍摄设备当前采集到的图像中位于第二位置1405,根据第一位置1403和第二位置1405的关系,预测待跟随目标在拍摄设备后续采集到的图像中的第三位置1407;根据第三位置1407,调整拍摄设备的姿态使待跟随目标1401保持在图像中的预设区域。其中,该预设区域可以为画面的中心区域。
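第二跟随模式中"根据第一位置和第二位置的关系,预测第三位置"的一种最简单的实现是线性外推,示意如下(假设相邻两帧时间间隔相同、目标近似匀速运动,这一外推方式为本文示意,并非本申请方案的限定):

```python
def predict_third_position(first, second):
    """根据待跟随目标在先前图像中的第一位置与当前图像中的第二位置,
    线性外推其在后续图像中的第三位置:第三位置 = 第二位置 + (第二位置 - 第一位置)。"""
    return tuple(2 * s - f for f, s in zip(first, second))

# 假设的像素坐标:先前帧 (100, 200),当前帧 (120, 210)
third = predict_third_position((100, 200), (120, 210))  # -> (140, 220)
```

得到第三位置后,即可据其调整拍摄设备的姿态,使待跟随目标保持在画面的预设区域内。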
如图15所示,确定目标跟随策略的方法可以包括子步骤S1501至S1507。
子步骤S1501、通过检测所述拍摄设备采集到的图像,从所述拍摄设备采集到的图像中确定待跟随目标。
其中,拍摄设备采集到的图像包括拍摄设备当前采集到的图像和拍摄设备先前采集到的图像。
子步骤S1503、在所述拍摄设备当前采集到的图像中确定多个第三目标。
子步骤S1505、将所述多个第三目标与所述拍摄设备先前采集到的图像中的待跟随目标进行比较,以确定第三识别结果。
其中,第三识别结果指示上述第三目标的特征与上述待跟随目标的特征的相似程度。
子步骤S1507、若所述第三识别结果指示所述多个第三目标均不与所述待跟随目标相似,则将所述多个第三目标与模型图像中的第一目标对象进行比较,以确定所述多个第三目标中是否存在所述待跟随目标。
在一个实施例中,通过检测拍摄设备采集到的所述图像,识别出拍摄设备当前采集到的图像中存在多个第三目标C,将多个第三目标C的特征与所述拍摄设备先前采集到的图像中的待跟随目标A的特征进行比较;当上述比较的结果为第三目标C的特征与上述待跟随目标A的特征不相似时,需要比对多个第三目标的特征与模型图像中的第一目标对象B的特征。
在一个实施例中,确定目标跟随策略的方法包括:通过检测拍摄设备采集到的所述图像,识别出拍摄设备当前采集到的图像中存在多个第三目标C。示例性地,待跟随目标A和第一目标对象B为婴儿。首先将第三目标C的面部特征、形貌轮廓和/或运动属性与待跟随目标A的对应特征进行比较,得出第三识别结果;当第三识别结果为第三目标C与待跟随目标A的特征不相似时,再将第三目标C的面部特征、形貌轮廓和/或运动属性与第一目标对象B的对应特征进行比较。若比较的结果为相似,则可以将第三目标C确定为待跟随目标;若比较的结果为不相似,则可以判断待跟随目标在跟随的过程中被丢失。
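上述"先与先前图像中的待跟随目标比较、均不相似时再与模型图像中的第一目标对象比较"的判定流程,可用如下Python代码示意(相似度函数、阈值与特征向量均为本文假设,实际可为面部特征、形貌轮廓和/或运动属性的比较):

```python
def reacquire_target(third_targets, followed_feat, model_feat,
                     similarity, threshold=0.8):
    """在多个第三目标中重新确认待跟随目标:
    先将各第三目标的特征与先前图像中待跟随目标的特征比较;
    若均不相似,再与模型图像中第一目标对象的特征比较;
    两轮均不相似则判定待跟随目标在跟随过程中被丢失(返回 None)。"""
    for ref in (followed_feat, model_feat):
        for target, feat in third_targets.items():
            if similarity(feat, ref) >= threshold:
                return target
    return None  # 待跟随目标被丢失

def cosine(a, b):
    """以简单的余弦相似度作为特征相似程度的示意。"""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# 假设的二维特征向量:C2 与待跟随目标 A 的特征相似,C1 不相似
found = reacquire_target({"C1": (0.1, 0.9), "C2": (0.8, 0.2)},
                         followed_feat=(0.9, 0.1),
                         model_feat=(0.7, 0.3),
                         similarity=cosine)
```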
确定目标跟随策略的方法可以包括:将所述待跟随目标作为第一目标对象;提取所述第一目标对象的特征;根据已提取的特征,更新关于所述第一目标对象的特征库。
在一个实施例中,待跟随目标为婴儿,提取该待跟随目标的特征,包括面部特征、形貌轮廓和/或运动属性,并将上述特征添加入第一目标对象对应的特征库,以丰富特征库,从而不断提高识别的准确性。
请参阅图16,图16是本申请实施例提供的一种确定目标跟随策略的装置的结构示意性框图。
如图16所示,确定目标跟随策略的装置1601包括处理器1603和存储器1605,处理器1603和存储器1605通过总线1607连接,该总线1607比如为I2C(Inter-Integrated Circuit)总线。确定目标跟随策略的装置1601用于与拍摄设备通信连接。
具体地,处理器1603可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器1605可以是Flash芯片、只读存储器(ROM,Read-Only Memory)、磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器1603用于运行存储在存储器1605中的计算机程序,并在执行所述计算机程序时实现如下步骤:
获取拍摄设备采集到的图像。
通过检测所述拍摄设备采集到的图像,从所述拍摄设备采集到的图像中确定待跟随目标。
根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略。
根据所述跟随策略,所述拍摄设备对所述待跟随目标进行跟随。
在一个实施例中,所述确定目标跟随策略的装置用于与拍摄设备通信连接,所述确定目标跟随策略的装置包括存储器和处理器;所述存储器用于存储计算机程序;所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:获取拍摄设备采集到的图像;通过检测所述拍摄设备采集到的图像,从所述拍摄设备采集到的图像中确定待跟随目标;根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略;以及根据所述跟随策略,所述拍摄设备对所述待跟随目标进行跟随。
在一个实施例中,在识别所述待跟随目标之前,确定目标对象特征库。
在一个实施例中,所述确定目标对象特征库,包括:在模型图像中,确定第一目标对象;提取第一目标对象的特征;根据已提取的特征,建立关于所述第一目标对象的特征库。
在一个实施例中,所述在模型图像中,确定第一目标对象,包括:在所述模型图像中,确定所述第一目标对象的属性和/或所述第一目标对象对应的图像区域。
在一个实施例中,所述确定所述第一目标对象的属性和/或所述第一目标对象对应的图像区域,包括:通过位置标识指示所述第一目标对象对应的所述图像区域。
在一个实施例中,确定所述目标对象特征库之后,进一步包括:确定关于所述第一目标对象的特征;根据所述第一目标对象的特征,在所述模型图像中确定是否存在第一候选目标对象;若所述模型图像中包括所述第一候选目标对象,则确定所述第一候选目标对象是所述第一目标对象的概率。
在一个实施例中,所述确定所述第一候选目标对象是所述第一目标对象的概率,以优化所述确定目标跟随策略的方法,包括:确定所述第一候选目标对象和所述第一目标对象是同一类别的第一概率;确定所述第一候选目标对象在第一位置的第二概率;根据所述第一概率和所述第二概率,得出所述第一候选目标对象是否是第一目标对象的预测结果。
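根据第一概率与第二概率得出预测结果的一种常见做法,是将两者相乘(视作近似独立)后与阈值比较,示意如下(概率的组合方式与阈值均为本文假设,并非本申请方案的限定):

```python
def predict_is_first_target(p_category, p_position, threshold=0.5):
    """第一概率 p_category:第一候选目标对象与第一目标对象是同一类别的概率;
    第二概率 p_position:第一候选目标对象在第一位置的概率。
    此处以两概率之积与阈值比较得出预测结果(假设两概率近似独立)。"""
    joint = p_category * p_position
    return joint >= threshold, joint

# 假设的概率取值:同类别概率 0.9,位置概率 0.8
result, joint = predict_is_first_target(0.9, 0.8)  # 0.72 >= 0.5 -> True
```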
在一个实施例中,所述处理器进一步用于实现如下步骤:根据所述第一概率和所述第二概率以及所述预测结果,优化所述确定目标跟随策略的装置;其中,根据所述第一概率和所述第二概率以及所述预测结果得出所述确定目标跟随策略的装置的目标函数;以及根据所述目标函数更新所述确定目标跟随策略的装置。
在一个实施例中,所述从所述拍摄设备采集到的图像中确定待跟随目标,包括:通过用户操作,选择所述待跟随目标。
在一个实施例中,所述通过用户操作,选择所述待跟随目标,包括:响应于用户的点击操作,在点击位置附近的图像区域内识别所述待跟随目标;以及标注所述待跟随目标的类别,和/或所述待跟随目标的所在位置。
在一个实施例中,所述通过用户操作,选择所述待跟随目标,包括:响应于用户对模式选择按键的第一按压操作,在图像中央区域内识别所述待跟随目标;以及标注所述待跟随目标的类别,和/或标示所述待跟随目标的所在位置。
在一个实施例中,所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:识别所述待跟随目标的特征。
在一个实施例中,所述识别所述待跟随目标的特征,包括:将所述待跟随目标和模型图像中的第一目标对象进行比较;若所述待跟随目标的第一特征和所述第一目标对象的第一目标特征相似,则将所述待跟随目标标记为第一类别;以及所述根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略,包括:若所述待跟随目标被标记为第一类别,则所述待跟随目标对应的跟随策略为第一跟随策略。
在一个实施例中,若所述待跟随目标的第一特征和所述第一目标对象的第一目标特征不相似;则将所述待跟随目标和模型图像中的第二目标对象进行比较;其中,所述第一目标对象和所述第二目标对象不同。
在一个实施例中,所述第一特征和所述第一目标特征包括面部特征、形貌轮廓、运动属性中的至少一个。
在一个实施例中,所述识别所述待跟随目标的特征,包括:将所述待跟随目标和模型图像中的多个目标对象进行比较,以从所述多个目标对象中确定第一目标对象,其中,所述第一目标对象的特征与所述待跟随目标的特征相似;以及所述根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略,包括:根据所述第一目标对象的类别,确定所述待跟随目标对应的跟随策略;其中,所述第一目标对象的类别位于预设类别库;所述预设类别库包括多个目标对象各自对应的类别。
在一个实施例中,所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:从所述拍摄设备采集到的图像中确定第一目标和第二目标,其中所述第一目标和所述第二目标不同;将所述第一目标与模型图像中的多个目标对象进行比较,以确定第一识别结果;将所述第二目标与所述多个目标对象进行比较,以确定第二识别结果;以及根据所述第一识别结果和所述第二识别结果,从所述第一目标和所述第二目标中确定所述待跟随目标。
在一个实施例中,根据所述第一识别结果和所述第二识别结果,确定所述第一目标和所述第二目标的优先级。
在一个实施例中,所述第一识别结果指示所述第一目标属于第一类别;以及所述第一识别结果指示所述第一目标占所述图像的画幅比例和/或在所述图像中的位置;以及所述第二识别结果指示所述第二目标属于第二类别;所述第二识别结果指示所述第二目标占所述图像的画幅比例和/或在所述图像中的位置。其中,所述第一类别和所述第二类别不同。
在一个实施例中,所述第一类别为婴儿,所述第二类别为成人。
在一个实施例中,当所述拍摄模式为第一模式时,所述第一类别的优先级高于所述第二类别的优先级;确定所述第一类别对应的第一目标为所述待跟随目标。
在一个实施例中,若所述第一识别结果对应的优先级高于所述第二识别结果对应的优先级,则将所述第一目标作为所述待跟随目标。
在一个实施例中,若所述第一识别结果为所述拍摄设备采集到的图像中不存在所述第一目标对象,则根据所述第二识别结果,将所述拍摄设备采集到的图像中的所述第二目标确定为所述待跟随目标。
在一个实施例中,将所述图像中的所述第二目标确定为所述待跟随目标,包括:若所述图像中存在多个所述第二目标,则根据多个所述第二目标中的每一个在所述拍摄设备采集到的图像中的显著程度,确定每个所述第二目标的第二跟随优先级;根据所述图像中的每个所述第二目标的第二跟随优先级,从多个所述第二目标中确定所述待跟随目标。
在一个实施例中,目标跟随包括:若所述第一目标为所述待跟随目标,则采用第一跟随模式进行跟随;若所述第二目标为所述待跟随目标,则采用第二跟随模式进行跟随;所述第一目标的类别和所述第二目标的类别不同,所述第一跟随模式和所述第二跟随模式不同。
在一个实施例中,在所述第一跟随模式下所述拍摄设备的跟随速度比在所述第二跟随模式下所述拍摄设备的跟随速度慢。
在一个实施例中,所述拍摄设备采集到的所述图像包括拍摄设备当前采集到的图像、拍摄设备先前采集到的图像和拍摄设备后续采集到的图像;在所述第一跟随模式下,所述待跟随目标在所述拍摄设备先前采集到的图像中位于第一位置,所述待跟随目标在所述拍摄设备当前采集到的图像中位于第二位置,根据所述第二位置,调整所述拍摄设备的姿态;以及在所述第二跟随模式下,根据所述第一位置和所述第二位置的关系,预测所述待跟随目标在所述拍摄设备后续采集到的图像中的第三位置;根据所述第三位置,调整所述拍摄设备的姿态。
在一个实施例中,所述拍摄设备采集到的所述图像包括拍摄设备当前采集到的图像和拍摄设备先前采集到的图像;以及所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:在所述拍摄设备当前采集到的图像中确定多个第三目标;将所述多个第三目标与所述拍摄设备先前采集到的图像中的待跟随目标进行比较,以确定第三识别结果;若所述第三识别结果指示所述多个第三目标均不与所述待跟随目标相似,则将所述多个第三目标与模型图像中的第一目标对象进行比较,以确定所述多个第三目标中是否存在所述待跟随目标。
在一个实施例中,进一步包括:将所述待跟随目标作为第一目标对象;提取所述第一目标对象的特征;根据已提取的特征,更新关于所述第一目标对象的特征库。
请参阅图17,图17是本申请实施例提供的一种确定目标跟随策略的系统的结构示意性框图。
如图17所示,确定目标跟随策略的系统1701包括确定目标跟随策略的装置1703、云台1705、搭载于云台1705上的拍摄设备1707,该确定目标跟随策略的装置1703与拍摄设备1707通信连接。在一实施例中,云台1705连接于手柄部,确定目标跟随策略的装置1703设置在手柄部上。在另一实施例中,云台1705搭载在可移动平台上,确定目标跟随策略的装置1703还用于控制可移动平台移动。
在一个实施例中,所述云台连接于手柄部,所述确定目标跟随策略的装置设置在所述手柄部上。
在一个实施例中,所述云台搭载在可移动平台上,所述确定目标跟随策略的装置还用于控制所述可移动平台移动。

需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的目标跟随系统的具体工作过程,可以参考前述待跟随目标的确定方法实施例中的对应过程,在此不再赘述。
请参阅图18,图18是本申请实施例提供的一种手持云台的结构示意性框图。
如图18所示,手持云台1801包括确定目标跟随策略的装置1803、手柄部、连接于手柄部的云台1805,云台1805用于搭载拍摄设备,确定目标跟随策略的装置1803设置在手柄部上。确定目标跟随策略的装置1803与云台1805连接。
需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的手持云台的具体工作过程,可以参考前述待跟随目标的确定方法实施例中的对应过程,在此不再赘述。
请参阅图19,图19是本申请实施例提供的一种可移动平台的结构示意性框图。
如图19所示,可移动平台1901包括平台本体、搭载于平台本体的云台1903和确定目标跟随策略的装置1905,云台1903用于搭载拍摄设备,确定目标跟随策略的装置1905设置在平台本体上。
需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的可移动平台的具体工作过程,可以参考前述待跟随目标的确定方法实施例中的对应过程,在此不再赘述。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序中包括程序指令,处理器执行所述程序指令时,实现上述实施例提供的确定目标跟随策略的方法的步骤。
其中,所述计算机可读存储介质可以是前述任一实施例所述的控制终端或无人飞行器的内部存储单元,例如所述控制终端或无人飞行器的硬盘或内存。所述计算机可读存储介质也可以是所述控制终端或无人飞行器的外部存储设备,例如所述控制终端或无人飞行器上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。
应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (64)

  1. 一种确定目标跟随策略的方法,其特征在于,所述确定目标跟随策略的方法包括:
    获取拍摄设备采集到的图像;
    通过检测所述拍摄设备采集到的图像,从所述拍摄设备采集到的图像中确定待跟随目标;
    根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略;以及
    根据所述跟随策略,所述拍摄设备对所述待跟随目标进行跟随。
  2. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,在识别所述待跟随目标之前,确定目标对象特征库。
  3. 根据权利要求2所述的确定目标跟随策略的方法,其特征在于,所述确定目标对象特征库,包括:
    在模型图像中,确定第一目标对象;
    提取第一目标对象的特征;
    根据已提取的特征,建立关于所述第一目标对象的特征库。
  4. 根据权利要求3所述的确定目标跟随策略的方法,其特征在于,所述在模型图像中,确定第一目标对象,包括:
    在所述模型图像中,确定所述第一目标对象的属性和/或所述第一目标对象对应的图像区域。
  5. 根据权利要求4所述的确定目标跟随策略的方法,其特征在于,所述确定所述第一目标对象的属性和/或所述第一目标对象对应的图像区域,包括:
    通过位置标识指示所述第一目标对象对应的所述图像区域。
  6. 根据权利要求3所述的确定目标跟随策略的方法,其特征在于,确定所述目标对象特征库之后,进一步包括:
    确定关于所述第一目标对象的特征;
    根据所述第一目标对象的特征,在所述模型图像中确定是否存在第一候选目标对象;
    若所述模型图像中包括所述第一候选目标对象,则确定所述第一候选目标对象是所述第一目标对象的概率,以优化所述确定目标跟随策略的方法。
  7. 根据权利要求6所述的确定目标跟随策略的方法,其特征在于,所述确定所述第一候选目标对象是所述第一目标对象的概率,以优化所述确定目标跟随策略的方法,包括:
    确定所述第一候选目标对象和所述第一目标对象是同一类别的第一概率;
    确定所述第一候选目标对象在第一位置的第二概率;
    根据所述第一概率和所述第二概率,得出所述第一候选目标对象是否是第一目标对象的预测结果。
  8. 根据权利要求7所述的确定目标跟随策略的方法,其特征在于,进一步包括:
    根据所述第一概率和所述第二概率以及所述预测结果,优化所述确定目标跟随策略的方法;
    其中,根据所述第一概率和所述第二概率以及所述预测结果得出所述确定目标跟随策略的方法的目标函数;以及
    根据所述目标函数更新所述确定目标跟随策略的方法。
  9. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,所述从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    通过用户操作,选择所述待跟随目标。
  10. 根据权利要求9所述的确定目标跟随策略的方法,其特征在于,所述通过用户操作,选择所述待跟随目标,包括:
    响应于用户的点击操作,在点击位置附近的图像区域内识别所述待跟随目标;以及
    标注所述待跟随目标的类别,和/或所述待跟随目标的所在位置。
  11. 根据权利要求9所述的确定目标跟随策略的方法,其特征在于,所述通过用户操作,选择所述待跟随目标,包括:
    响应于用户对模式选择按键的第一按压操作,在图像中央区域内识别所述待跟随目标;以及
    标注所述待跟随目标的类别,和/或标示所述待跟随目标的所在位置。
  12. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    识别所述待跟随目标的特征。
  13. 根据权利要求12所述的确定目标跟随策略的方法,其特征在于,所述识别所述待跟随目标的特征,包括:
    将所述待跟随目标和模型图像中的第一目标对象进行比较;
    若所述待跟随目标的第一特征和所述第一目标对象的第一目标特征相似,则将所述待跟随目标标记为第一类别;以及
    所述根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略,包括:
    若所述待跟随目标被标记为第一类别,则所述待跟随目标对应的跟随策略为第一跟随策略。
  14. 根据权利要求13所述的确定目标跟随策略的方法,其特征在于,若所述待跟随目标的第一特征和所述第一目标对象的第一目标特征不相似;则将所述待跟随目标和模型图像中的第二目标对象进行比较;
    其中,所述第一目标对象和所述第二目标对象不同。
  15. 根据权利要求13所述的确定目标跟随策略的方法,其特征在于,所述第一特征和所述第一目标特征包括面部特征、形貌轮廓、运动属性中的至少一个。
  16. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,所述识别所述待跟随目标的特征,包括:
    将所述待跟随目标和模型图像中的多个目标对象进行比较,以从所述多个目标对象中确定第一目标对象,其中,所述第一目标对象的特征与所述待跟随目标的特征相似;以及
    根据所述第一目标对象的类别,确定所述待跟随目标对应的跟随策略;
    其中,所述第一目标对象的类别位于预设类别库;所述预设类别库包括多个目标对象各自对应的类别。
  17. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    从所述拍摄设备采集到的图像中确定第一目标和第二目标,其中所述第一目标和所述第二目标不同;
    将所述第一目标与模型图像中的多个目标对象进行比较,以确定第一识别结果;
    将所述第二目标与所述多个目标对象进行比较,以确定第二识别结果;以及
    根据所述第一识别结果和所述第二识别结果,从所述第一目标和所述第二目标中确定所述待跟随目标。
  18. 根据权利要求17所述的确定目标跟随策略的方法,其特征在于,根据所述第一识别结果和所述第二识别结果,确定所述第一目标和所述第二目标的优先级。
  19. 根据权利要求17所述的确定目标跟随策略的方法,其特征在于,所述第一识别结果指示所述第一目标属于第一类别;以及
    所述第一识别结果指示所述第一目标占所述图像的画幅比例和/或在所述图像中的位置;以及
    所述第二识别结果指示所述第二目标属于第二类别;
    所述第二识别结果指示所述第二目标占所述图像的画幅比例和/或在所述图像中的位置,
    其中,所述第一类别和所述第二类别不同。
  20. 根据权利要求19所述的确定目标跟随策略的方法,其特征在于,所述第一类别为婴儿,所述第二类别为成人。
  21. 根据权利要求19所述的确定目标跟随策略的方法,其特征在于,当所述拍摄模式为第一模式时,所述第一类别的优先级高于所述第二类别的优先级;
    确定所述第一类别对应的第一目标为所述待跟随目标。
  22. 根据权利要求17所述的确定目标跟随策略的方法,其特征在于,若所述第一识别结果对应的优先级高于所述第二识别结果对应的优先级,则将所述第一目标作为所述待跟随目标。
  23. 根据权利要求17所述的确定目标跟随策略的方法,其特征在于,若所述第一识别结果为所述拍摄设备采集到的图像中不存在第一目标对象,则根据所述第二识别结果,将所述拍摄设备采集到的图像中的所述第二目标确定为所述待跟随目标。
  24. 根据权利要求23所述的确定目标跟随策略的方法,其特征在于,将所述图像中的所述第二目标确定为所述待跟随目标,包括:
    若所述图像中存在多个所述第二目标,则根据多个所述第二目标中的每一个在所述拍摄设备采集到的图像中的显著程度,确定每个所述第二目标的第二跟随优先级;
    根据所述图像中的每个所述第二目标的所述第二跟随优先级,从多个所述第二目标中确定所述待跟随目标。
  25. 根据权利要求17所述的确定目标跟随策略的方法,其特征在于,目标跟随包括:
    若所述第一目标为所述待跟随目标,则采用第一跟随模式进行跟随;
    若所述第二目标为所述待跟随目标,则采用第二跟随模式进行跟随;
    所述第一目标的类别和所述第二目标的类别不同,所述第一跟随模式和所述第二跟随模式不同。
  26. 根据权利要求25所述的确定目标跟随策略的方法,其特征在于,在所述第一跟随模式下所述拍摄设备的跟随速度比在所述第二跟随模式下所述拍摄设备的跟随速度慢。
  27. 根据权利要求25所述的确定目标跟随策略的方法,其特征在于,所述拍摄设备采集到的所述图像包括拍摄设备当前采集到的图像、拍摄设备先前采集到的图像和拍摄设备后续采集到的图像;
    在所述第一跟随模式下,
    所述待跟随目标在所述拍摄设备先前采集到的图像中位于第一位置,所述待跟随目标在所述拍摄设备当前采集到的图像中位于第二位置,根据所述第二位置,调整所述拍摄设备的姿态;以及
    在所述第二跟随模式下,
    根据所述第一位置和所述第二位置的关系,预测所述待跟随目标在所述拍摄设备后续采集到的图像中的第三位置;
    根据所述第三位置,调整所述拍摄设备的姿态。
  28. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,所述拍摄设备采集到的所述图像包括拍摄设备当前采集到的图像和拍摄设备先前采集到的图像;以及
    所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    在所述拍摄设备当前采集到的图像中确定多个第三目标;
    将所述多个第三目标与所述拍摄设备先前采集到的图像中的待跟随目标进行比较,以确定第三识别结果;
    若所述第三识别结果指示所述多个第三目标均不与所述待跟随目标相似,则将所述多个第三目标与模型图像中的第一目标对象进行比较,以确定所述多个第三目标中是否存在所述待跟随目标。
  29. 根据权利要求1所述的确定目标跟随策略的方法,其特征在于,进一步包括:
    将所述待跟随目标作为第一目标对象;
    提取所述第一目标对象的特征;
    根据已提取的特征,更新关于所述第一目标对象的特征库。
  30. 一种确定目标跟随策略的装置,其特征在于,所述确定目标跟随策略的装置用于与拍摄设备通信连接,所述确定目标跟随策略的装置包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:
    获取拍摄设备采集到的图像;
    通过检测所述拍摄设备采集到的图像,从所述拍摄设备采集到的图像中确定待跟随目标;
    根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略;以及
    根据所述跟随策略,所述拍摄设备对所述待跟随目标进行跟随。
  31. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,在识别所述待跟随目标之前,确定目标对象特征库。
  32. 根据权利要求31所述的确定目标跟随策略的装置,其特征在于,所述确定目标对象特征库,包括:
    在模型图像中,确定第一目标对象;
    提取第一目标对象的特征;
    根据已提取的特征,建立关于所述第一目标对象的特征库。
  33. 根据权利要求32所述的确定目标跟随策略的装置,其特征在于,所述在模型图像中,确定第一目标对象,包括:
    在所述模型图像中,确定所述第一目标对象的属性和/或所述第一目标对象对应的图像区域。
  34. 根据权利要求33所述的确定目标跟随策略的装置,其特征在于,所述确定所述第一目标对象的属性和/或所述第一目标对象对应的图像区域,包括:
    通过位置标识指示所述第一目标对象对应的所述图像区域。
  35. 根据权利要求32所述的确定目标跟随策略的装置,其特征在于,确定所述目标对象特征库之后,进一步包括:
    确定关于所述第一目标对象的特征;
    根据所述第一目标对象的特征,在所述模型图像中确定是否存在第一候选目标对象;
    若所述模型图像中包括所述第一候选目标对象,则确定所述第一候选目标对象是所述第一目标对象的概率。
  36. 根据权利要求35所述的确定目标跟随策略的装置,其特征在于,所述确定所述第一候选目标对象是所述第一目标对象的概率,包括:
    确定所述第一候选目标对象和所述第一目标对象是同一类别的第一概率;
    确定所述第一候选目标对象在第一位置的第二概率;
    根据所述第一概率和所述第二概率,得出所述第一候选目标对象是否是第一目标对象的预测结果。
  37. 根据权利要求36所述的确定目标跟随策略的装置,其特征在于,所述处理器进一步用于实现如下步骤:
    根据所述第一概率和所述第二概率以及所述预测结果,优化所述确定目标跟随策略的装置;
    其中,根据所述第一概率和所述第二概率以及所述预测结果得出所述确定目标跟随策略的装置的目标函数;以及
    根据所述目标函数更新所述确定目标跟随策略的装置。
  38. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,所述从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    通过用户操作,选择所述待跟随目标。
  39. 根据权利要求38所述的确定目标跟随策略的装置,其特征在于,所述通过用户操作,选择所述待跟随目标,包括:
    响应于用户的点击操作,在点击位置附近的图像区域内识别所述待跟随目标;以及
    标注所述待跟随目标的类别,和/或所述待跟随目标的所在位置。
  40. 根据权利要求38所述的确定目标跟随策略的装置,其特征在于,所述通过用户操作,选择所述待跟随目标,包括:
    响应于用户对模式选择按键的第一按压操作,在图像中央区域内识别所述待跟随目标;以及
    标注所述待跟随目标的类别,和/或标示所述待跟随目标的所在位置。
  41. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    识别所述待跟随目标的特征。
  42. 根据权利要求41所述的确定目标跟随策略的装置,其特征在于,所述识别所述待跟随目标的特征,包括:
    将所述待跟随目标和模型图像中的第一目标对象进行比较;
    若所述待跟随目标的第一特征和所述第一目标对象的第一目标特征相似,则将所述待跟随目标标记为第一类别;以及
    所述根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略,包括:
    若所述待跟随目标被标记为第一类别,则所述待跟随目标对应的跟随策略为第一跟随策略。
  43. 根据权利要求42所述的确定目标跟随策略的装置,其特征在于,若所述待跟随目标的第一特征和所述第一目标对象的第一目标特征不相似;则将所述待跟随目标和模型图像中的第二目标对象进行比较;
    其中,所述第一目标对象和所述第二目标对象不同。
  44. 根据权利要求42所述的确定目标跟随策略的装置,其特征在于,所述第一特征和所述第一目标特征包括面部特征、形貌轮廓、运动属性中的至少一个。
  45. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,所述识别所述待跟随目标的特征,包括:
    将所述待跟随目标和模型图像中的多个目标对象进行比较,以从所述多个目标对象中确定第一目标对象,其中,所述第一目标对象的特征与所述待跟随目标的特征相似;以及
    所述根据所述待跟随目标的特征,确定所述待跟随目标对应的跟随策略,包括:
    根据所述第一目标对象的类别,确定所述待跟随目标对应的跟随策略;
    其中,所述第一目标对象的类别位于预设类别库;所述预设类别库包括多个目标对象各自对应的类别。
  46. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    从所述拍摄设备采集到的图像中确定第一目标和第二目标,其中所述第一目标和所述第二目标不同;
    将所述第一目标与模型图像中的多个目标对象进行比较,以确定第一识别结果;
    将所述第二目标与所述多个目标对象进行比较,以确定第二识别结果;以及
    根据所述第一识别结果和所述第二识别结果,从所述第一目标和所述第二目标中确定所述待跟随目标。
  47. 根据权利要求46所述的确定目标跟随策略的装置,其特征在于,根据所述第一识别结果和所述第二识别结果,确定所述第一目标和所述第二目标的优先级。
  48. 根据权利要求46所述的确定目标跟随策略的装置,其特征在于,
    所述第一识别结果指示所述第一目标属于第一类别;以及
    所述第一识别结果指示所述第一目标占所述图像的画幅比例和/或在所述图像中的位置;以及
    所述第二识别结果指示所述第二目标属于第二类别;
    所述第二识别结果指示所述第二目标占所述图像的画幅比例和/或在所述图像中的位置,
    其中,所述第一类别和所述第二类别不同。
  49. 根据权利要求48所述的确定目标跟随策略的装置,其特征在于,所述第一类别为婴儿,所述第二类别为成人。
  50. 根据权利要求48所述的确定目标跟随策略的装置,其特征在于,当所述拍摄模式为第一模式时,所述第一类别的优先级高于所述第二类别的优先级;
    确定所述第一类别对应的第一目标为所述待跟随目标。
  51. 根据权利要求46所述的确定目标跟随策略的装置,其特征在于,若所述第一识别结果对应的优先级高于所述第二识别结果对应的优先级,则将所述第一目标作为所述待跟随目标。
  52. 根据权利要求46所述的确定目标跟随策略的装置,其特征在于,若所述第一识别结果为所述拍摄设备采集到的图像中不存在所述第一目标对象,则根据所述第二识别结果,将所述拍摄设备采集到的图像中的所述第二目标确定为所述待跟随目标。
  53. 根据权利要求52所述的确定目标跟随策略的装置,其特征在于,将所述图像中的所述第二目标确定为所述待跟随目标,包括:
    若所述图像中存在多个所述第二目标,则根据多个所述第二目标中的每一个在所述拍摄设备采集到的图像中的显著程度,确定每个所述第二目标的第二跟随优先级;
    根据所述图像中的每个所述第二目标的所述第二跟随优先级,从多个所述第二目标中确定所述待跟随目标。
  54. 根据权利要求46所述的确定目标跟随策略的装置,其特征在于,目标跟随包括:
    若所述第一目标为所述待跟随目标,则采用第一跟随模式进行跟随;
    若所述第二目标为所述待跟随目标,则采用第二跟随模式进行跟随;
    所述第一目标的类别和所述第二目标的类别不同,所述第一跟随模式和所述第二跟随模式不同。
  55. 根据权利要求54所述的确定目标跟随策略的装置,其特征在于,在所述第一跟随模式下所述拍摄设备的跟随速度比在所述第二跟随模式下所述拍摄设备的跟随速度慢。
  56. 根据权利要求54所述的确定目标跟随策略的装置,其特征在于,所述拍摄设备采集到的所述图像包括拍摄设备当前采集到的图像、拍摄设备先前采集到的图像和拍摄设备后续采集到的图像;
    在所述第一跟随模式下,
    所述待跟随目标在所述拍摄设备先前采集到的图像中位于第一位置,所述待跟随目标在所述拍摄设备当前采集到的图像中位于第二位置,根据所述第二位置,调整所述拍摄设备的姿态;以及
    在所述第二跟随模式下,
    根据所述第一位置和所述第二位置的关系,预测所述待跟随目标在所述拍摄设备后续采集到的图像中的第三位置;
    根据所述第三位置,调整所述拍摄设备的姿态。
  57. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,所述拍摄设备采集到的所述图像包括拍摄设备当前采集到的图像和拍摄设备先前采集到的图像;以及
    所述通过检测所述拍摄设备采集到的所述图像,从所述拍摄设备采集到的图像中确定待跟随目标,包括:
    在所述拍摄设备当前采集到的图像中确定多个第三目标;
    将所述多个第三目标与所述拍摄设备先前采集到的图像中的待跟随目标进行比较,以确定第三识别结果;
    若所述第三识别结果指示所述多个第三目标均不与所述待跟随目标相似,则将所述多个第三目标与模型图像中的第一目标对象进行比较,以确定所述多个第三目标中是否存在所述待跟随目标。
  58. 根据权利要求30所述的确定目标跟随策略的装置,其特征在于,进一步包括:
    将所述待跟随目标作为第一目标对象;
    提取所述第一目标对象的特征;
    根据已提取的特征,更新关于所述第一目标对象的特征库。
  59. 一种确定目标跟随策略的系统,其特征在于,所述确定目标跟随策略的系统包括云台、搭载于所述云台上的拍摄设备和如权利要求30-58中任一项所述的确定目标跟随策略的装置。
  60. 根据权利要求59所述的确定目标跟随策略的系统,其特征在于,所述云台连接于手柄部,所述确定目标跟随策略的装置设置在所述手柄部上。
  61. 根据权利要求59所述的确定目标跟随策略的系统,其特征在于,所述云台搭载在可移动平台上,所述确定目标跟随策略的装置还用于控制所述可移动平台移动。
  62. 一种手持云台,其特征在于,所述手持云台包括手柄部、连接于所述手柄部的云台和如权利要求30-58中任一项所述的确定目标跟随策略的装置,所述云台用于搭载拍摄设备,所述确定目标跟随策略的装置设置在所述手柄部上。
  63. 一种可移动平台,其特征在于,所述可移动平台包括平台本体、搭载于所述平台本体的云台和如权利要求30-58中任一项所述的确定目标跟随策略的装置,所述云台用于搭载拍摄设备,所述确定目标跟随策略的装置设置在所述平台本体上。
  64. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如权利要求1-29中任一项所述的确定目标跟随策略的方法的步骤。
PCT/CN2020/122234 2020-10-20 2020-10-20 确定目标跟随策略的方法、装置、系统、设备及存储介质 WO2022082440A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/122234 WO2022082440A1 (zh) 2020-10-20 2020-10-20 确定目标跟随策略的方法、装置、系统、设备及存储介质
CN202080035340.1A CN113841380A (zh) 2020-10-20 2020-10-20 确定目标跟随策略的方法、装置、系统、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/122234 WO2022082440A1 (zh) 2020-10-20 2020-10-20 确定目标跟随策略的方法、装置、系统、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022082440A1 (zh)

Family

ID=78963291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122234 WO2022082440A1 (zh) 2020-10-20 2020-10-20 确定目标跟随策略的方法、装置、系统、设备及存储介质

Country Status (2)

Country Link
CN (1) CN113841380A (zh)
WO (1) WO2022082440A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023123254A1 (zh) * 2021-12-30 2023-07-06 深圳市大疆创新科技有限公司 无人机的控制方法、装置、无人机及存储介质
CN115623336B (zh) * 2022-11-07 2023-06-30 北京拙河科技有限公司 一种亿级摄像设备的图像跟踪方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110169867A1 (en) * 2009-11-30 2011-07-14 Innovative Signal Analysis, Inc. Moving object detection, tracking, and displaying systems
CN108292141A (zh) * 2016-03-01 2018-07-17 深圳市大疆创新科技有限公司 用于目标跟踪的方法和系统
CN108323192A (zh) * 2018-01-05 2018-07-24 深圳市大疆创新科技有限公司 手持云台的控制方法和手持云台
CN109740462A (zh) * 2018-12-21 2019-05-10 北京智行者科技有限公司 目标的识别跟随方法
US20200065976A1 (en) * 2018-08-23 2020-02-27 Seoul National University R&Db Foundation Method and system for real-time target tracking based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102286006B1 (ko) * 2016-11-23 2021-08-04 한화디펜스 주식회사 추종 장치 및 추종 시스템
WO2021026804A1 (zh) * 2019-08-14 2021-02-18 深圳市大疆创新科技有限公司 基于云台的目标跟随方法、装置、云台和计算机存储介质


Also Published As

Publication number Publication date
CN113841380A (zh) 2021-12-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20958023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20958023

Country of ref document: EP

Kind code of ref document: A1