WO2020237565A1 - Target tracking method, apparatus, movable platform and storage medium - Google Patents

Target tracking method, apparatus, movable platform and storage medium

Info

Publication number
WO2020237565A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target object
shooting
component
position area
Prior art date
Application number
PCT/CN2019/089248
Other languages
English (en)
French (fr)
Inventor
张伟 (Zhang Wei)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201980005359.9A priority Critical patent/CN111345029B/zh
Priority to PCT/CN2019/089248 priority patent/WO2020237565A1/zh
Priority to EP19877539.7A priority patent/EP3771198B1/en
Priority to US16/880,553 priority patent/US10999519B2/en
Publication of WO2020237565A1 publication Critical patent/WO2020237565A1/zh
Priority to US17/222,627 priority patent/US20210227144A1/en

Classifications

    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 20/13: Satellite images
    • G06V 20/17: Terrestrial scenes taken from planes or by drones
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/194: Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/30196: Human being; person
    • G06T 2207/30232: Surveillance
    • G06T 2207/30241: Trajectory
    • H04N 23/45: Generating image signals from two or more image sensors of different type or operating in different modes
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/66: Remote control of cameras or camera parts
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras
    • H04N 23/11: Generating image signals from visible and infrared light wavelengths
    • H04N 5/28: Mobile studios
    • H04B 7/18506: Communications with or from aircraft, i.e. aeronautical mobile service

Description

  • This application relates to the field of image processing technology, and in particular to a target tracking method, device, movable platform and storage medium.
  • Current target tracking generally identifies the target object in the image captured by a camera and adjusts the shooting posture of the camera so that the target object always remains in the camera's shooting picture, thereby tracking the target object in the monitored environment.
  • The tracking method used in the prior art performs image recognition on the image taken by a visible light camera to determine the target object in the image, and adjusts the shooting posture of the visible light camera to keep the target object in the picture taken by that camera, thereby tracking the target object with the visible light camera. Since the image captured by a visible light imaging device has rich feature information, the target object is easy to identify, so the visible light imaging device can track it accurately. However, the images output by certain types of shooting devices cannot be used to identify the target object, or cannot identify it accurately, so those types of shooting devices cannot track the target object, or cannot track it accurately.
  • Embodiments of the present invention provide a target tracking method, device and movable platform, so that a photographing device whose output image cannot be used to recognize a target object, or cannot recognize it accurately, can still track the target object.
  • In a first aspect, an embodiment of the present invention provides a target tracking method applied to a movable platform; the movable platform includes a photographing device, and the photographing device includes a first photographing component and a second photographing component. The method includes:
  • In another aspect, an embodiment of the present application provides a target tracking device applied to a movable platform; the movable platform includes a photographing device, and the photographing device includes a first photographing component and a second photographing component. The target tracking device includes:
  • a calling unit configured to call the second photographing component to photograph the environment to obtain a second image, where the imaging modes of the first photographing component and the second photographing component are different;
  • a recognition unit configured to recognize a target object in the second image to obtain a tracking position area of the target object to be tracked in the second image; and
  • a tracking unit configured to adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.
  • In another aspect, an embodiment of the present application provides a movable platform that includes a processor, a memory, and a photographing device.
  • The photographing device includes a first photographing component and a second photographing component, wherein the photographing device is used to photograph the environment, the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method of the first aspect described above.
  • In another aspect, an embodiment of the present invention provides a computer-readable storage medium; the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions are executed by a processor to perform the method of the first aspect described above.
  • In another aspect, an embodiment of the present application provides a control device communicatively connected with a shooting device, where the shooting device includes a first shooting component and a second shooting component.
  • The control device includes a memory and a processor, wherein:
  • the memory is used to store a computer program, and the computer program includes program instructions;
  • the processor is configured to call the program instructions for:
  • the shooting posture of the shooting device is adjusted according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first shooting component.
  • In the embodiments of the present invention, the target object is recognized from the second image output by the second photographing component of the photographing device to obtain the tracking position area of the target object in the second image, and the shooting posture of the photographing device is adjusted according to that tracking position area so that the target object is located in the shooting picture of the first photographing component of the photographing device. In this way, even if the image output by the first photographing component cannot be used to recognize the target object, or cannot recognize it accurately, the first photographing component can still track the target object.
  • FIG. 1 is an application scenario diagram of a target tracking method provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an imaging process of a photographing device provided by an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a target tracking method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a target tracking method provided by another embodiment of the present application.
  • FIG. 5 is a schematic block diagram of a target tracking device provided by an embodiment of the present invention.
  • Fig. 6 is a structural block diagram of a movable platform provided by an embodiment of the present invention.
  • the embodiment of the present invention provides a target tracking method applied to a movable platform.
  • The movable platform may be any device that moves by relying on an external force or through its own power system.
  • the movable platform may include an aircraft.
  • the movable platform includes a photographing device for photographing the environment, and the photographing device can be carried on the body of the movable platform directly or through a movable part (such as a pan-tilt device).
  • The photographing device includes a first photographing component and a second photographing component with different imaging modes. The movable platform can change the shooting posture of the photographing device either by adjusting the posture of its own body, or by adjusting the posture of the movable part (for example, the pan-tilt device described above) that connects the photographing device to the movable platform, so that the photographing device rotates and/or translates up, down, left and right.
  • the first photographing component and the second photographing component may be fixedly connected.
  • During target tracking, the second photographing component of the photographing device is first called to photograph the environment containing the target object (such as a human or an animal) to obtain a second image, where the second image may be any type of image that is convenient for target object recognition. The shooting posture of the photographing device is then controlled according to the position area of the target object in the second image, so that the target object is always in the shooting picture of the first photographing component, that is, always within the shooting range of the first photographing component. In this way, the first photographing component tracks the target object.
  • In the embodiments of the present invention, the movable platform first calls the second photographing component to photograph the environment to obtain a second image, then performs target object recognition on the second image to obtain the tracking position area of the target object to be tracked in the second image, and finally adjusts the shooting posture of the photographing device according to that tracking position area so that the target object is located in the shooting picture of the first photographing component.
  • In other words, this application uses the second image to identify the target object and its tracking position area in the second image, and then adjusts the shooting posture of the photographing device according to that tracking position area so that the target object is located in the shooting picture of the first photographing component, thereby realizing indirect target tracking of the target object by the first photographing component.
  • This application thus provides a target tracking method that enables the first photographing component to track the target object: even if the image output by the first photographing component cannot be used to identify the target object, or cannot identify it accurately, the first photographing component can still track the target object.
  • The first adjustment method is to directly adjust the shooting posture of the photographing device based on the tracking position area of the target object in the second image, so that after the adjustment the target object is in the preset position area of the shooting picture of the second photographing component. When the target object is in the preset position area of the shooting picture of the second photographing component, the target object is in the target position area corresponding to the preset position area in the shooting picture of the first photographing component; the target position area may be a central position area.
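The first adjustment method above amounts to a pixel-error feedback loop: measure how far the centre of the tracking position area is from the centre of the preset position area in the second component's picture, and turn that pixel error into pan/tilt commands. The sketch below illustrates this under assumed conventions (small-angle pinhole conversion, image v axis pointing down); the function name and all numeric values are hypothetical, not taken from the patent:

```python
import math

def gimbal_adjustment(track_box, preset_center, focal_length):
    """Return (yaw, pitch) in degrees that move the target toward the
    preset position area.

    track_box:     (x1, y1, x2, y2) tracking position area in the second image
    preset_center: (u, v) centre of the preset position area, in pixels
    focal_length:  focal length of the second photographing component, in pixels
    """
    cx = (track_box[0] + track_box[2]) / 2.0
    cy = (track_box[1] + track_box[3]) / 2.0
    du = cx - preset_center[0]          # horizontal pixel error
    dv = cy - preset_center[1]          # vertical pixel error (v axis down)
    # Pinhole model: a pixel error maps to an angle via atan(error / f).
    yaw = math.degrees(math.atan2(du, focal_length))
    pitch = math.degrees(math.atan2(dv, focal_length))
    return yaw, pitch

# A target centred at (350, 250) with the preset area centred at (320, 240)
# needs roughly a 2.1 degree yaw and 0.7 degree pitch correction (f = 800 px).
yaw, pitch = gimbal_adjustment((300, 200, 400, 300), (320, 240), 800.0)
```

Whether these angles are applied by rotating the pan-tilt device or by changing the posture of the movable platform's body is a platform-specific choice.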
  • The second adjustment method is to determine the tracking position area of the target object in the shooting picture of the first photographing component according to the tracking position area of the target object in the second image, and then adjust the shooting posture of the photographing device according to the tracking position area in the shooting picture of the first photographing component, so that the target object is in the shooting picture of the first photographing component and, further, in the target position area of that shooting picture. Specifically, the tracking position area of the target object in the shooting picture of the first photographing component is determined according to the tracking position area of the target object in the second image and the relative positional relationship between the first photographing component and the second photographing component; the shooting posture of the photographing device is then adjusted based on that tracking position area so that the target object is located in the shooting picture of the first photographing component, for example in the target position area described above.
  • To ensure that the target object remains in the image captured by the first photographing component, the position range within which the target object must lie in the image taken by the second photographing component can be determined in advance, and that position range is taken as the preset position area; the target position area is then the area in the shooting picture of the first photographing component that corresponds to this preset position area.
  • That is, the shooting posture of the photographing device is adjusted based on the tracking position area of the target object in the shooting picture of the first photographing component.
  • Since the relative position between the first photographing component and the second photographing component is known, the relative positional relationship between the two components can be determined, namely the position conversion relationship between corresponding pixel points of the first image and the second image respectively captured by the two components. With this relationship, the tracking position area of the target object in the shooting picture of the first photographing component can be quickly converted from the tracking position area of the target object in the second image.
  • In the following embodiments, this application takes as an example the case where the relative position between the first photographing component and the second photographing component is fixed, the first photographing component is a thermal infrared photographing device, the first image captured by the first photographing component is a thermal infrared image (the image corresponding to the shooting picture of the first photographing component), the second photographing component is a visible light photographing device, and the second image captured by the second photographing component is an optical image.
  • For example, the photographing device on the above-mentioned movable platform is a dual-lens camera, and the relative position between the first photographing component and the second photographing component contained in the dual-lens camera is fixed: the first photographing component is a thermal infrared photographing device, the first image is a thermal infrared image, the second photographing component is a visible light photographing device, and the second image is an optical image.
  • The second photographing component B photographs the environment in the visible light imaging mode, and the photographed second image B1B2B3B4 obtained is an optical image.
  • The first pixel on the first image A1A2A3A4 corresponds to the second pixel on the second image B1B2B3B4; that is, the first pixel and the second pixel are images of the same target point on the target object.
  • Since the relative position between the first photographing component and the second photographing component in the dual-lens camera is fixed, the external parameters of the two photographing components are known; the external parameters indicate the relative position of the two photographing components and are determined according to the relative position and relative angle of their installation. The internal parameters of the two photographing components (determined according to the focal length and the position of the optical center of each component) are also known, so the second pixel in the second image can be projected to the first pixel in the first image.
  • Suppose the coordinates of the target point on the target object in the camera coordinate system of the first photographing component are (X1, Y1, Z1), where X1, Y1 and Z1 are the horizontal, vertical and depth coordinate values respectively, and the relative position offset between the second photographing component and the first photographing component is (ΔX, ΔY, ΔZ); the coordinates of the same target point in the camera coordinate system of the second photographing component are then (X1 + ΔX, Y1 + ΔY, Z1 + ΔZ). Let the coordinates of the first pixel of the target point in the first image captured by the first photographing component be (u1, v1), and the coordinates of the second pixel in the second image captured by the second photographing component be (u2, v2). Under the pinhole model, (u1, v1) and (u2, v2) satisfy the following relationship:

        u1 = f1 · X1 / Z1,    v1 = f1 · Y1 / Z1
        u2 = f2 · (X1 + ΔX) / (Z1 + ΔZ),    v2 = f2 · (Y1 + ΔY) / (Z1 + ΔZ)

    where f1 and f2 are the focal lengths of the first photographing component and the second photographing component respectively. In practice, the depth at which the target object is observed is generally above 5 m, that is, Z1 > 5 m, while the two photographing components are mounted only a few centimetres apart, so (ΔX, ΔY, ΔZ) is negligible relative to Z1 and the mapping between corresponding pixels of the two images reduces approximately to the fixed rescaling u2 ≈ (f2 / f1) · u1, v2 ≈ (f2 / f1) · v1.
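As a concrete illustration of this projection, the following sketch maps a pixel from the second (visible light) image into the first (thermal infrared) image under the pinhole model. All numeric values below (focal lengths in pixels, a few-centimetre offset, a 10 m depth) are illustrative assumptions, not values from the patent:

```python
def project_pixel(u2, v2, f1, f2, offset, depth):
    """Map pixel (u2, v2) of the second image to the first image.

    offset: (dX, dY, dZ), position of the second component relative to the
            first, in metres; depth: Z coordinate of the target point in the
            second component's camera frame, in metres.
    """
    dX, dY, dZ = offset
    # Back-project the pixel to a 3-D point in the second camera's frame.
    X2 = u2 * depth / f2
    Y2 = v2 * depth / f2
    # Shift into the first camera's frame using the known external parameters:
    # a point at (X1, Y1, Z1) in the first frame sits at (X1+dX, Y1+dY, Z1+dZ)
    # in the second frame, so the offset is subtracted here.
    X1, Y1, Z1 = X2 - dX, Y2 - dY, depth - dZ
    # Re-project with the first component's focal length.
    return f1 * X1 / Z1, f1 * Y1 / Z1

# With depth (10 m) much larger than the offset (3 cm), the result is close
# to a pure focal-length rescaling of (u2, v2) by f1 / f2.
u1, v1 = project_pixel(100.0, 50.0, f1=400.0, f2=800.0,
                       offset=(0.03, 0.0, 0.0), depth=10.0)
```

Because Z1 is generally above 5 m while the offset is only centimetres, dropping the offset entirely gives u1 ≈ (f1/f2) · u2, which is why a fixed position conversion between the two images works well in practice.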
  • The thermal infrared imaging mode means that the thermal infrared photographing device detects the infrared radiation emitted by an object itself and converts the temperature distribution of the object into a thermal infrared image by means of photoelectric conversion, signal processing and so on. The first image A1A2A3A4 obtained in the thermal infrared mode can therefore reflect the temperature distribution information of objects with thermal radiation, such as humans, animals and electromagnetic equipment, and thermal imaging has the advantage of being able to photograph objects well even without light or under occlusion, that is, in special environments. The visible light imaging mode, such as red-green-blue (RGB) imaging, instead uses the reflection of visible light on the surface of the object for imaging; the optical image B1B2B3B4 obtained in this way contains detailed information such as the color and shape of the object, but the imaging result is greatly affected by light and occlusion.
  • both the first photographing component and the second photographing component are aimed at shooting the same target object in the environment.
  • Because the imaging modes of the first and second photographing components are different, the sizes of the images they obtain may also be different (for example, due to technical limitations, the imaging range of infrared imaging is smaller than that of visible light imaging, so the image obtained by infrared imaging is smaller than the image obtained by visible light imaging), and the position of the target object in the first image and in the second image may therefore differ. However, since the relative positional relationship between the first photographing component and the second photographing component can be determined, once the position of the target object in one image is known, its position in the other image can easily be converted.
  • A thermal infrared image has insufficient texture information compared with an optical image, so if target recognition and tracking are performed directly on the thermal infrared image, the results are very poor. However, because a thermal infrared image is not hindered by light or occlusion and has the advantage of imaging objects with thermal radiation in the environment, a target tracking method based on thermal infrared images has very important practical significance.
  • Therefore, the present application uses the optical image output by the visible light photographing device to recognize the target object and obtain its tracking position area in the optical image, and adjusts the shooting posture of the photographing device according to that tracking position area so that the target object is located in the shooting picture of the thermal infrared photographing device. In this way, even if the thermal infrared image output by the thermal infrared photographing device cannot be used to identify the target object, or cannot identify it accurately, the thermal infrared photographing device can still track the target object.
  • the user can specify the target object to be tracked based on the first image output by the first photographing component, thereby realizing the tracking of the target object.
  • Specifically, the movable platform sends the first image to the control terminal so that the control terminal displays the first image. The user then performs a selection operation on the first image displayed on the control terminal, for example framing the area containing the target object to be tracked on the first image; the control terminal generates first area indication information according to that area and sends it to the movable platform. After receiving the first area indication information sent by the control terminal, the movable platform uses the first area indication information and the relative positional relationship between the first photographing component and the second photographing component to obtain second area indication information for the second image, performs target recognition on the area indicated by the second area indication information in the second image to determine the target object, and obtains the tracking position area of the target object in the second image.
  • Target recognition is performed on the area indicated by the second area indication information in the second image to recognize the target object in that area; the recognition may be performed through a neural network.
  • Subsequently, the determined target object can be identified from the second image output by the second photographing component to obtain the tracking position area of the target object in the second image. Recognizing the determined target object from the second image may be done through a neural network, or through image tracking.
  • the user can view the target tracking result based on the first image on the control terminal.
  • Specifically, the tracking position area of the target object in the first image is determined according to the tracking position area of the target object in the second image and the relative positional relationship between the first photographing component and the second photographing component. The target object is then marked in the first image based on its tracking position area in the first image, and all or part of the image information of the second image is added to the marked first image to enrich the contour features of the marked first image. Finally, the marked first image is sent to the control terminal so that the control terminal displays it.
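The marking and enrichment step can be sketched as follows. Images are represented here as plain 2-D greyscale lists; the fixed blend weight and the function name are illustrative assumptions (a real implementation might add only edge or contour information from the optical image rather than a uniform blend):

```python
def mark_and_enrich(thermal, optical, box, blend=0.3):
    """Blend optical detail into the thermal image, then draw `box` on it.

    thermal, optical: 2-D lists of equal size (greyscale values)
    box:              (x1, y1, x2, y2) tracking position area, inclusive
    """
    # Mix in part of the second image's information to enrich contours.
    out = [[(1 - blend) * t + blend * o for t, o in zip(trow, orow)]
           for trow, orow in zip(thermal, optical)]
    x1, y1, x2, y2 = box
    for x in range(x1, x2 + 1):      # top and bottom edges of the mark
        out[y1][x] = 255
        out[y2][x] = 255
    for y in range(y1, y2 + 1):      # left and right edges of the mark
        out[y][x1] = 255
        out[y][x2] = 255
    return out
```

The resulting image keeps the thermal reading everywhere, shows optical contours faintly, and highlights the tracked target with a bright rectangle for display on the control terminal.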
  • the embodiment of the present invention proposes a more detailed target tracking method in FIG. 3, and the target tracking method can be executed by the aforementioned movable platform.
  • The movable platform calls the second photographing component to photograph the environment to obtain a second image; at the same time, it may also call the first photographing component to photograph the environment to obtain a first image.
  • the first imaging component is a thermal infrared imaging device
  • the second imaging component is a visible light imaging device.
  • the imaging methods of the first imaging component and the second imaging component are different.
  • The first photographing component uses thermal infrared imaging to obtain a first image that is a thermal infrared image, and the second photographing component uses visible light imaging to obtain a second image that is an optical image.
  • The position of the target object in the first image and in the second image may be different, but since the relative positional relationship between the first photographing component and the second photographing component can be determined, once the position of the target object in one image is known, its position in the other image can easily be converted.
  • Target object recognition is performed on the above-mentioned second image to identify the target object in the second image, and to segment out the tracking position area of the target object to be tracked in the second image.
  • Target object recognition determines the target object in the second image and its tracking position area through the image-processing methods of target detection and target segmentation. The target detection and target segmentation methods may be traditional ones, or may be based on deep learning (for example, a neural network), which is not limited in the embodiments of the present invention.
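Whichever detector is used, the recognition step above boils down to selecting the tracking position area of the target object from detector output. A hedged sketch, where `detections` is a hypothetical stand-in for the results of either a traditional or a deep-learning detector:

```python
def tracking_position_area(detections, target_label):
    """Select the tracking position area (bounding box) of the target
    object from detector output.

    `detections` is a list of (label, confidence, (x, y, w, h)) tuples,
    an illustrative interface rather than any specific library's API.
    Returns the highest-confidence box for the target label, or None."""
    candidates = [d for d in detections if d[0] == target_label]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d[1])
    return best[2]
```

The returned box is what the later steps refer to as "the tracking position area of the target object in the second image".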
  • the shooting posture of the shooting device is adjusted according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first shooting component.
  • The movable platform can adjust the shooting posture of the photographing device by changing the overall posture of the movable platform itself (such as the body), or can control the photographing device to adjust its shooting posture through the pan-tilt device connected to it, that is, the shooting posture of the camera is adjusted by adjusting the posture of the pan-tilt device.
  • The correspondence between the first image and the second image can also be known, so that when the target object is in the preset position area of the second image, the target object is also in the shooting picture of the first photographing component and, further, may be in the target position area of that picture.
  • Adjusting the shooting posture of the photographing device according to the tracking position area of the target object in the second image may mean adjusting the shooting posture so that, after the adjustment, the target object is in the preset position area of the image captured by the second photographing component; this ensures that the target object is located in the shooting picture of the first photographing component and, further, may be in the target position area of that picture.
  • Adjusting the shooting posture of the photographing device according to the tracking position area of the target object in the second image may also mean first determining, according to that tracking position area, the tracking position area of the target object in the shooting picture of the first photographing component, and then adjusting the shooting posture of the photographing device according to the tracking position area of the target object in that picture.
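A minimal sketch of the final step of this second strategy, assuming a simple proportional pixel-to-angle model (the patent does not specify a control law); the function name and the field-of-view parameters are illustrative:

```python
def posture_adjustment(box, image_size, hfov_deg, vfov_deg):
    """Compute yaw/pitch corrections (degrees) that move the tracked
    box toward the image centre.

    The pixel offset of the box centre is converted to an angle via the
    camera's horizontal/vertical field of view -- a linear approximation
    used here only to illustrate the posture-adjustment step."""
    x, y, w, h = box
    img_w, img_h = image_size
    cx, cy = x + w / 2.0, y + h / 2.0
    # normalised offset of the box centre from the picture centre, in [-0.5, 0.5]
    dx = cx / img_w - 0.5
    dy = cy / img_h - 0.5
    yaw = dx * hfov_deg
    pitch = -dy * vfov_deg  # tilt the camera up when the target is above centre
    return yaw, pitch
```

The returned angles would then be applied to the platform body and/or the pan-tilt device, as described above.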
  • A thermal infrared image has insufficient texture information compared with an optical image; if target recognition and tracking are performed directly on a thermal infrared image, the results are very poor. However, a thermal infrared image is not hindered by light or occlusion and has the advantage of imaging objects with thermal radiation in the environment, so a target tracking method based on thermal infrared images is very important.
  • This application solves this problem well: it uses the optical image output by the visible-light camera to recognize the target object and obtain its tracking position area in the optical image, and adjusts the shooting posture of the photographing device according to that tracking position area so that the target object is located in the shooting picture of the thermal infrared camera. In this way, even if the thermal infrared image output by the thermal infrared camera cannot support target recognition, or cannot support it accurately, the thermal infrared camera can still track the target object.
  • The first image is first sent to the control terminal of the movable platform so that the control terminal displays it. The user can then perform a selection operation on the first image on the control terminal (for example, frame the area where the target object is located), after which the terminal device obtains, according to the selection operation, first area indication information indicating the area selected by the user in the first image, and sends the first area indication information to the movable platform.
  • After receiving the first area indication information, the movable platform determines second area indication information according to the first area indication information and the relative positional relationship between the first photographing component and the second photographing component; the second area indication information maps the area selected by the user in the first image to the corresponding area in the second image. Target recognition is finally performed on the area indicated by the second area indication information in the second image to determine the target object and obtain its tracking position area in the second image.
  • In this way the user can specify the target object, which improves the target tracking efficiency of this application.
  • The tracking position area of the target object in the first image is determined according to its tracking position area in the second image; the target object is then marked in the first image according to its tracking position area in the first image (for example, as shown in the first image in FIG. 2, the location of the target object is framed); finally, the marked first image is sent to the control terminal of the movable platform so that the control terminal displays it.
  • the determination of the tracking position area of the target object in the first image according to the tracking position area of the target object in the second image has been described in detail above, and will not be repeated here.
  • The embodiment of the present invention determines the tracking position area of the target object in the second image by performing target object recognition on the second image, then determines the tracking position area of the target object in the first image according to that area, marks the target object in the first image, and finally displays the result to the user through the control terminal, thereby realizing indirect target tracking based on the first image. This is especially valuable when the first image is a thermal infrared image: the embodiments of the invention realize target tracking based on thermal infrared images, which has very important practical value.
  • After the target object is marked in the first image, all or part of the image information in the second image is extracted and added to the marked first image to enrich its contour features, and the marked first image is then sent to the control terminal. The image finally presented to the user by the control terminal is therefore not only marked with the target object but also greatly enriched in detail, which to a certain extent remedies the insufficient detail of images such as thermal imaging images.
  • The embodiment of the present invention can therefore not only realize target tracking based on thermal imaging images, but also interact with users through thermal imaging images, using the details of optical images to enrich the contour details that thermal imaging images lack. This greatly improves the practicability of thermal imaging images.
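One plausible reading of "adding part of the second image information to the first image" is overlaying optical-image contours onto the marked thermal image. The sketch below assumes the two images have already been aligned to the same size (a real implementation would first apply the pixel mapping described earlier) and uses a crude gradient edge test; the names and threshold are illustrative:

```python
def enrich_with_contours(thermal, optical, threshold=50, mark=255):
    """Overlay coarse contour information extracted from the optical
    image onto the (marked) thermal image.

    Both images are same-sized 2-D grids of grey values (lists of lists);
    a pixel is treated as a contour pixel when the sum of the absolute
    horizontal and vertical differences exceeds `threshold`."""
    h, w = len(optical), len(optical[0])
    out = [row[:] for row in thermal]  # copy so the input is untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = optical[y][x + 1] - optical[y][x - 1]
            gy = optical[y + 1][x] - optical[y - 1][x]
            if abs(gx) + abs(gy) >= threshold:
                out[y][x] = mark  # draw the contour into the thermal image
    return out
```

Production code would use a proper edge detector and image library, but the principle — transplanting optical contour detail into the detail-poor thermal image — is the same.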
  • The embodiment of the present invention also proposes another target tracking method, shown in FIG. 4, which can be executed by the aforementioned movable platform.
  • the first photographing component and the second photographing component of the photographing device of the movable platform are called to photograph the environment to obtain the first image and the second image.
  • the first imaging component is a thermal infrared imaging device
  • the second imaging component is a visible light imaging device.
  • the imaging methods of the first imaging component and the second imaging component are different.
  • The first imaging component uses thermal infrared imaging to obtain a first image that is a thermal infrared image.
  • The second imaging component uses visible-light imaging to obtain a second image that is an optical image.
  • the first image is sent to the control terminal of the movable platform so that the control terminal displays the first image.
  • The target object to be tracked in the second image is determined according to the aforementioned first area indication information, and the tracking position area of the target object in the second image is obtained. Specifically, the second area indication information is determined according to the first area indication information and the relative positional relationship between the first photographing component and the second photographing component, and target recognition is then performed on the area indicated by the second area indication information in the second image to determine the target object and obtain its tracking position area in the second image.
  • the shooting posture of the shooting device is adjusted according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first shooting component.
  • The movable platform can adjust the shooting posture of the photographing device by changing its own overall posture, or can control the photographing device to adjust its shooting posture through the pan-tilt device connected to it, that is, by adjusting the posture of the pan-tilt device.
  • Adjusting the shooting posture of the photographing device according to the tracking position area of the target object in the second image may mean adjusting it so that the target object is in the preset position area of the image captured by the second photographing component, which places the target object in the shooting picture of the first photographing component and, further, possibly in the target position area of that picture.
  • Alternatively, the tracking position area of the target object in the shooting picture of the first photographing component may first be determined based on its tracking position area in the second image and the relative positional relationship between the first and second photographing components, and the shooting posture of the photographing device is then adjusted according to that area so that the target object is in the shooting picture of the first photographing component and, further, possibly in the target position area of that picture.
  • the tracking position area of the target object in the shooting picture of the first photographing component is determined. Specifically, the tracking position area of the target object in the first image is determined according to the tracking position area of the target object in the second image and the relative position relationship between the first photographing component and the second photographing component.
  • the target object is marked in the first image according to the tracking position area of the target object in the first image.
  • the target object is marked in the first image
  • Detailed information in the second image is extracted to enrich the contour features in the first image. Specifically, all or part of the image information in the second image is extracted and added to the marked first image to enrich the contour features of the marked first image.
  • the first image after the marking is sent to the control terminal of the movable platform so that the control terminal displays the first image after the marking.
  • The embodiment of the present invention also provides a schematic structural diagram of a target tracking device as shown in FIG. 5. The device is applied to a movable platform that includes a photographing device; the photographing device includes a first photographing component and a second photographing component, and the target tracking device includes:
  • the calling unit 510 is used to call the second photographing component to photograph the environment to obtain a second image.
  • the imaging modes of the first photographing component and the second photographing component are different;
  • the identifying unit 520 is used to perform target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked;
  • the tracking unit 530 is configured to adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.
  • The above-mentioned movable platform includes a pan-tilt device carrying the above-mentioned camera, and the above-mentioned tracking unit 530 is specifically configured to adjust the posture of the movable platform and/or the posture of the pan-tilt device according to the tracking position area of the target object in the second image, so as to adjust the shooting posture of the photographing device.
  • the first imaging component is a thermal infrared imaging device; the second imaging component is a visible light imaging device, and the second image is an optical image.
  • The tracking unit 530 is specifically configured to adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image such that, after the adjustment, the target object is in the preset position area of the shooting picture of the second photographing component.
  • the target tracking device further includes a determining unit 540, configured to determine the tracking position area of the target object in the shooting frame of the first photographing component according to the tracking position area of the target object in the second image;
  • the tracking unit 530 is specifically configured to adjust the shooting posture of the shooting device according to the tracking position area of the target object in the shooting frame of the first shooting component.
  • the calling unit 510 is also used to call the first photographing component to photograph the environment to obtain the first image;
  • the target tracking device further includes a sending unit 550, which is used to send the first image to the control terminal of the movable platform so that the control terminal displays the first image;
  • the target tracking device further includes an acquiring unit 560 for acquiring the first area indication information sent by the control terminal, where the first area indication information is determined by the control terminal by detecting the user's selection of the target object on the first image displayed by the control terminal;
  • the identification unit 520 is specifically configured to determine, according to the first area indication information, the target object to be tracked in the second image, and to obtain the tracking position area of the target object in the second image.
  • The identification unit 520 is specifically configured to determine the second area indication information according to the first area indication information and the relative positional relationship between the first photographing component and the second photographing component, and to perform target recognition on the area indicated by the second area indication information in the second image to determine the target object and obtain its tracking position area in the second image.
  • the calling unit 510 is further used to call the first photographing component to photograph the environment to obtain a first image;
  • the target tracking device further includes a determining unit 540, which is used to determine, according to the tracking position area of the target object in the second image, the tracking position area of the target object in the shooting picture of the first photographing component;
  • the target tracking device further includes a marking unit 570 for marking the target object in the first image according to the tracking position area of the target object in the shooting picture of the first photographing component;
  • the target tracking device further includes a sending unit 550 for sending the marked first image to the control terminal of the movable platform so that the control terminal displays the marked first image.
  • The target tracking device further includes an extracting unit 580 for extracting all or part of the image information in the second image, and an adding unit 590 for adding all or part of that image information to the marked first image to enrich the contour features of the marked first image.
  • the determining unit 540 is specifically configured to determine the target object according to the tracking position area of the target object in the second image, and the relative position relationship between the first photographing component and the second photographing component The tracking position area in the above first image.
  • the embodiment of the present invention also provides a schematic structural diagram of a movable platform as shown in FIG. 6.
  • the internal structure of the movable platform may include at least a processor 610, a memory 620, and a camera 630.
  • the aforementioned photographing device includes a first photographing component 631 and a second photographing component 632.
  • the aforementioned processor 610, memory 620, and photographing device 630 may be connected via a bus 640 or other means.
  • the memory 620 here may be used to store a computer program.
  • the above-mentioned computer program includes program instructions.
  • the processor 610 here may be used to execute the program instructions stored in the memory 620.
  • The processor 610 may be a central processing unit (CPU); it may also be another general-purpose processor such as a microprocessor or any conventional processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
  • The memory 620 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610; the specific forms of the processor 610 and the memory 620 are not limited here.
  • The processor 610 loads and executes one or more instructions stored in the computer storage medium to implement the corresponding steps of the methods in the above embodiments; in a specific implementation, at least one instruction in the computer storage medium is loaded and executed by the processor 610. Specifically:
  • the aforementioned photographing device 630 is used to photograph the environment
  • the foregoing memory 620 is used to store a computer program, and the foregoing computer program includes program instructions;
  • the processor 610 is configured to call the program instructions to: call the second photographing component to photograph the environment to obtain a second image, where the imaging modes of the first photographing component and the second photographing component are different; perform target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked; and adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.
  • the processor 610 is specifically configured to adjust the posture of the movable platform and/or the posture of the pan/tilt device according to the tracking position area of the target object in the second image to adjust the shooting posture of the camera.
  • the first photographing component 631 is a thermal infrared photographing device; the second photographing component 632 is a visible light photographing device, and the second image is an optical image.
  • The processor 610 is specifically configured to adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image such that, after the adjustment, the target object is in the preset position area of the shooting picture of the second photographing component.
  • The processor is specifically configured to determine, according to the tracking position area of the target object in the second image, the tracking position area of the target object in the shooting picture of the first photographing component, and to adjust the shooting posture of the photographing device according to the tracking position area in that picture.
  • the processor 610 is further configured to call the first photographing component to photograph the environment to obtain a first image;
  • The above-mentioned movable platform further includes a communication interface 650 used for data interaction between the movable platform and other terminal devices, and specifically for sending the first image to the control terminal of the movable platform so that the control terminal can display the first image.
  • The processor 610 is specifically configured to obtain the first area indication information sent by the control terminal, where the first area indication information is determined by the control terminal by detecting the user's selection of the target object on the first image displayed by the control terminal; and, according to the first area indication information, to determine the target object to be tracked in the second image and obtain its tracking position area in the second image.
  • The processor 610 is specifically configured to determine the second area indication information according to the first area indication information and the relative positional relationship between the first photographing component and the second photographing component, and to perform target recognition on the area indicated by the second area indication information in the second image to determine the target object and obtain its tracking position area in the second image.
  • The processor 610 is further configured to call the first photographing component to photograph the environment to obtain a first image; specifically, to determine the tracking position area of the target object in the first image based on its tracking position area in the second image; and further, to mark the target object in the first image according to its tracking position area in the first image.
  • the above-mentioned movable platform further includes a communication interface 650, which is used to send the marked first image to the control terminal of the above-mentioned movable platform so that the above-mentioned control terminal displays the marked first image.
  • the processor 610 is further configured to extract all or part of the image information in the second image information; add all or part of the image information in the second image information to the first image after the mark, To enrich the contour features of the first image after the above marking.
  • the processor 610 is specifically configured to determine that the target object is in the first image according to the tracking position area of the target object in the second image, and the relative positional relationship between the first photographing component and the second photographing component. The tracking location area in the image.
  • An embodiment of the present invention also provides a control device communicatively connected with a photographing device, where the photographing device includes a first photographing component and a second photographing component, and the control device includes a memory and a processor, where
  • the memory is used to store a computer program, and the computer program includes program instructions
  • the processor is configured to call the program instructions for:
  • the second photographing component is called to photograph the environment to obtain a second image; target object recognition is performed on the second image to obtain the tracking position area, in the second image, of the target object to be tracked; and the shooting posture of the photographing device is adjusted according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.
  • The control device may be arranged in a movable platform; the movable platform includes the control device and the photographing device, and the control device may be communicatively connected with the photographing device.
  • the processor of the control device executes the method steps described above. For details, please refer to the foregoing part, which will not be repeated here.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. On this understanding, the technical solution of this application in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, an image processing device, a network device, or the like) to execute all or part of the steps of the methods in the various embodiments of this application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • The above-mentioned program can be stored in a computer-readable storage medium; when executed, it may include the procedures of the above method embodiments.
  • the aforementioned storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

This application discloses a target tracking method, apparatus, movable platform, and storage medium. The method is applied to a movable platform that includes a photographing device, the photographing device including a first photographing component and a second photographing component, and comprises: calling the second photographing component to photograph the environment to obtain a second image; performing target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked; and adjusting the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.

Description

Target tracking method, apparatus, movable platform, and storage medium. Technical Field
This application relates to the technical field of image processing, and in particular to a target tracking method, apparatus, movable platform, and storage medium.
Background
Current target tracking generally performs target recognition on images captured by a photographing device to determine the target object in the image, and adjusts the shooting posture of the photographing device so that the target object remains in the device's shooting picture, thereby tracking the target object in the monitored environment.
The tracking approach used in the prior art performs image recognition on images captured by a visible-light photographing device to determine the target object in the image, and adjusts the shooting posture of the visible-light photographing device so that the target object remains in its shooting picture, thereby tracking the target object. Because images captured by a visible-light photographing device contain rich feature information that facilitates target object recognition, a visible-light photographing device can track a target object accurately. However, at present, the images output by certain types of photographing devices cannot support target object recognition, or cannot support it accurately, so those devices cannot track a target object, or cannot do so accurately.
Summary of the Invention
Embodiments of the present invention provide a target tracking method, apparatus, and movable platform that enable a photographing device whose output images cannot support target object recognition, or cannot support it accurately, to track a target object.
In a first aspect, an embodiment of the present invention provides a target tracking method applied to a movable platform, where the movable platform includes a photographing device and the photographing device includes a first photographing component and a second photographing component. The method includes:
calling the second photographing component to photograph the environment to obtain a second image, where the imaging modes of the first photographing component and the second photographing component are different;
performing target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked;
adjusting the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.
In a second aspect, another embodiment of this application provides a target tracking apparatus applied to a movable platform, where the movable platform includes a photographing device and the photographing device includes a first photographing component and a second photographing component. The target tracking apparatus includes:
a calling unit, configured to call the second photographing component to photograph the environment to obtain a second image, where the imaging modes of the first photographing component and the second photographing component are different;
a recognition unit, configured to perform target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked;
a tracking unit, configured to adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component.
In a third aspect, another embodiment of this application provides a movable platform including a processor, a memory, and a photographing device, where the photographing device includes a first photographing component and a second photographing component; the photographing device is configured to photograph the environment; the memory is configured to store a computer program including program instructions; and the processor is configured to call the program instructions to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that includes program instructions which, when executed by a processor, perform the method of the first aspect.
In a fifth aspect, another embodiment of this application provides a control device communicatively connected with a photographing device, where the photographing device includes a first photographing component and a second photographing component, and the control device includes a memory and a processor, where
the memory is configured to store a computer program including program instructions;
the processor is configured to call the program instructions to:
call the second photographing component to photograph the environment to obtain a second image, where the imaging modes of the first photographing component and the second photographing component are different;
perform target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked;
adjust the shooting posture of the photographing device according to the tracking position area of the target object in the second image, so that the target object is located in the shooting picture of the first photographing component. In embodiments of the present invention, the second image output by the second photographing component of the photographing device is used to recognize the target object and obtain its tracking position area in the second image, and the shooting posture of the photographing device is adjusted according to that tracking position area so that the target object is located in the shooting picture of the first photographing component of the photographing device. In this way, even if the images output by the first photographing component cannot support target object recognition, or cannot support it accurately, the first photographing component can still track the target object.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below.
FIG. 1 is an application scenario diagram of a target tracking method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the imaging process of a photographing device provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a target tracking method provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a target tracking method provided by another embodiment of this application;
FIG. 5 is a schematic block diagram of a target tracking apparatus provided by an embodiment of the present invention;
FIG. 6 is a structural block diagram of a movable platform provided by an embodiment of the present invention.
Detailed Description
An embodiment of the present invention provides a target tracking method applied to a movable platform. The movable platform may be any apparatus that moves under an external force or under its own power system; for example, the movable platform may include an aircraft. As shown in FIG. 1, the movable platform carries a photographing device for photographing the environment, and the photographing device may be mounted on the body of the movable platform directly or through a movable part (for example, a pan-tilt device). The photographing device includes a first photographing component and a second photographing component whose imaging modes are different. The movable platform can change the shooting posture of the photographing device by adjusting the posture of its own body, or by adjusting the posture of the movable part (for example, the aforementioned pan-tilt device) connecting the photographing device to the platform, so that the photographing device rotates up, down, left, or right and/or translates or rotates in any direction. The first photographing component and the second photographing component may be fixedly connected. When the movable platform tracks a target object, it first calls the second photographing component of the photographing device to photograph the environment containing the target object (for example, a person or an animal) to obtain a second image, where the second image may be any type of image that facilitates target object recognition; it then controls the shooting posture of the photographing device according to the position area of the target object in the second image, so that the target object always remains in the shooting picture of the first photographing component, that is, within the shooting range of the first photographing component, thereby realizing tracking of the target object by the first photographing component. Specifically, the movable platform first calls the second photographing component to photograph the environment to obtain the second image, then performs target object recognition on the second image to obtain the tracking position area, in the second image, of the target object to be tracked, and adjusts the shooting posture of the photographing device according to that tracking position area so that the target object is located in the shooting picture of the first photographing component. It can be seen that this application uses the second image to recognize the target object and its tracking position area in the second image, and then adjusts the shooting posture of the photographing device according to that area so that the target object is located in the shooting picture of the first photographing component, thereby realizing indirect target tracking of the target object by the first photographing component. In summary, this application provides a target tracking method that enables the first photographing component to track a target object even when the images it outputs cannot support target object recognition, or cannot support it accurately.
Specifically, there are two methods of adjusting the shooting posture of the photographing device. The first adjustment method adjusts the shooting posture of the photographing device directly based on the tracking position area of the target object in the second image, such that after the adjustment the target object is in a preset position area of the shooting picture of the second photographing component. When the target object is in that preset position area, it is also in the target position area, corresponding to the preset position area, of the shooting picture of the first photographing component; the target position area may be the central position area. The second adjustment method first determines, according to the tracking position area of the target object in the second image, the tracking position area of the target object in the shooting picture of the first photographing component, and then adjusts the shooting posture of the photographing device according to that tracking position area, so that the target object is in the shooting picture of the first photographing component and, further, in the target position area of that picture. Specifically, the tracking position area of the target object in the shooting picture of the first photographing component is determined according to the tracking position area of the target object in the second image and the relative positional relationship between the first photographing component and the second photographing component, and the shooting posture of the photographing device is then adjusted based on that area so that the target object is located in the shooting picture of the first photographing component, for example in the aforementioned target position area of that picture.
Before implementing the first adjustment method, the correspondence between the regions of the images captured by the first photographing component and the second photographing component must be determined, that is, the range of positions the target object occupies in the image captured by the second photographing component while it remains in the image captured by the first photographing component; that range is taken as the preset position area. In some embodiments, the preset position area is the range of positions the target object occupies in the second component's image while it remains in the target position area of the first component's image. When implementing the first adjustment method, the tracking position area of the target object in the second image is determined first, and the shooting posture of the photographing device is then adjusted directly based on that position so that the target object is in the preset position area of the image captured by the second photographing component; this guarantees that the target object is also located in the shooting picture of the first photographing component and, further, in the target position area corresponding to the preset position area.
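The containment check at the heart of this first adjustment strategy can be sketched as follows; the (x, y, w, h) layout of boxes and areas is an illustrative assumption:

```python
def in_preset_area(box, preset_area):
    """Check whether the tracked box lies inside the preset position
    area of the second component's picture.

    Per the first adjustment strategy, keeping the target inside this
    pre-calibrated region guarantees it also appears in the first
    component's picture. Both arguments are (x, y, w, h) rectangles."""
    bx, by, bw, bh = box
    px, py, pw, ph = preset_area
    return (px <= bx and py <= by
            and bx + bw <= px + pw
            and by + bh <= py + ph)
```

Whenever the check fails, the platform would re-adjust the shooting posture until the target box falls back inside the preset area.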
When implementing the second adjustment method, the tracking position area of the target object in the shooting picture of the first photographing component is determined first, according to the tracking position area of the target object in the second image and the relative positional relationship between the first photographing component and the second photographing component, and the photographing device is then adjusted based on that area. Specifically, because the relative position between the first photographing component and the second photographing component is fixed, or is not fixed but can be measured by sensors on the movable platform, the relative position between the two components is known or knowable. The relative positional relationship between the two components, that is, the positional conversion between corresponding pixels of the first image and the second image captured by the first and second photographing components respectively, is then determined from that relative position. Therefore, this application can quickly convert the tracking position area of the target object in the second image into its tracking position area in the shooting picture of the first photographing component.
如图2所示，本申请将以第一拍摄组件与第二拍摄组件之间的相对位置固定，且第一拍摄组件为热红外拍摄装置，第一拍摄组件拍摄得到的第一图像为热红外图像（第一图像即为第一拍摄组件的拍摄画面对应的图像），第二拍摄组件为可见光拍摄装置，第二拍摄组件拍摄得到的第二图像为光学图像为例，对本发明实施例进行详细说明。
在一个实施例中，如图2所示，上述可移动平台上的拍摄装置为双光相机，该双光相机中包含的第一拍摄组件与第二拍摄组件之间的相对位置固定，第一拍摄组件为热红外拍摄装置，第一图像为热红外图像，第二拍摄组件为可见光拍摄装置，所述第二图像为光学图像。在可移动平台调用拍摄装置011对包含有目标物体（两个人）的环境进行拍摄时，第一拍摄组件A（0111）以热红外成像的方式对环境进行拍摄，拍摄得到为热红外图像的第一图像A1A2A3A4；第二拍摄组件B（0112）以可见光成像的方式对环境进行拍摄，拍摄得到为光学图像的第二图像B1B2B3B4。假设第一图像A1A2A3A4上的第一像素点与第二图像B1B2B3B4上的第二像素点是相对应的，即第一像素点和第二像素点是针对于目标物体上的同一个目标点成像得到的。由于双光相机中的第一拍摄组件以及第二拍摄组件之间的相对位置固定，即该双光相机上的两个拍摄组件的外参是已知的（外参用于指示两个拍摄组件的相对位置，具体根据安装的相对位置与相对角度来确定），且该两个拍摄组件的内参（根据拍摄组件的焦距、光心位置确定）也是已知的，因此可以很容易将第二图像中的第二像素点投射到第一图像中的第一像素点。具体的，假设目标物体上的目标点在第一拍摄组件的相机坐标系下的坐标为(X1, Y1, Z1)，其中，X1、Y1以及Z1分别为横向坐标值、纵向坐标值以及深度坐标值，第二拍摄组件与第一拍摄组件之间的相对位置偏移为(ΔX, ΔY, ΔZ)，因此目标点在第二拍摄组件的相机坐标系下的坐标为(X1+ΔX, Y1+ΔY, Z1+ΔZ)。目标物体的目标点在第一拍摄组件拍摄得到的第一图像中的第一像素点的坐标为(u1, v1)，目标点在第二拍摄组件拍摄得到的第二图像中的第二像素点的坐标为(u2, v2)，第一像素点的坐标(u1, v1)与第二像素点的坐标(u2, v2)之间存在以下相对位置关系：
u1 = f1·X1/Z1，v1 = f1·Y1/Z1；u2 = f2·(X1+ΔX)/(Z1+ΔZ)，v2 = f2·(Y1+ΔY)/(Z1+ΔZ)
在上述公式中，f1和f2分别为第一拍摄组件和第二拍摄组件的焦距。一般来说，目标物体观测的深度都在5m以上，即Z1>5m，而两个拍摄组件之间的相对位置偏移(ΔX, ΔY, ΔZ)是非常小的，
u1/u2 = (f1/f2)·(X1/(X1+ΔX))·((Z1+ΔZ)/Z1)，v1/v2 = (f1/f2)·(Y1/(Y1+ΔY))·((Z1+ΔZ)/Z1)
因此Z1≫ΔX、ΔY、ΔZ，可以忽略相对位置偏移，得到第一图像与第二图像中相对应的第一像素点和第二像素点之间的相对位置关系为u1/u2 = f1/f2，v1/v2 = f1/f2。可见，根据第二像素点在第二图像中的位置，以及第一拍摄组件与第二拍摄组件之间的相对位置关系，可以容易换算得到第一像素点在第一图像中的位置。
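以下数值示例可以验证上述近似的合理性：当深度 Z1 大于 5m、两组件之间只有厘米级的相对位置偏移时，用 u2 ≈ u1·f2/f1 代替精确投影所带来的相对误差很小。其中坐标、偏移与焦距均为假设值：

```python
def project(f, X, Y, Z):
    """针孔模型：将相机坐标系下的点投影为像素坐标（以光心为原点）。"""
    return f * X / Z, f * Y / Z

X1, Y1, Z1 = 1.0, 0.8, 8.0            # 目标点在第一拍摄组件相机坐标系下的坐标
dX, dY, dZ = 0.03, 0.02, 0.05         # 两组件之间厘米级的相对位置偏移
f1, f2 = 800.0, 1200.0                # 两组件的焦距（像素）

u1, v1 = project(f1, X1, Y1, Z1)                   # 第一图像中的精确像素坐标
u2, v2 = project(f2, X1 + dX, Y1 + dY, Z1 + dZ)    # 第二图像中的精确像素坐标
u2_approx = u1 * f2 / f1                           # 忽略偏移后的近似：u2 ≈ u1·f2/f1
rel_err = abs(u2 - u2_approx) / abs(u2)            # 相对误差在百分之几的量级
```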
需要说明的是，热红外成像方式指的是热红外拍摄装置探测物体自身发出的红外辐射，并通过光电转换、信号处理等手段，将物体的温度分布转换成热红外图像，因此通过热红外方式得到的第一图像A1A2A3A4可以反映出具有热辐射的物体（例如人、动物和电磁设备等）的温度分布信息。热成像方式具有不受光线和遮蔽限制便能进行拍摄的优点，在夜间等特殊环境下可以很好地对物体进行拍摄。而可见光成像方式例如有红绿蓝（RGB，red-green-blue）成像方式等，其成像原理是利用可见光在物体表面上的反射进行成像，成像得到的光学图像B1B2B3B4包含了物体的颜色和形状等细节信息，但其成像结果受光线和遮挡的影响很大。
还需要说明的是,虽然第一拍摄组件和第二拍摄组件的相对位置可以固定也可以不固定,但第一拍摄组件与第二拍摄组件都是针对于环境中的同一目标物体进行拍摄,只是由于第一拍摄组件与第二拍摄组件的成像方式不同,导致第一拍摄组件与第二拍摄组件拍摄环境得到的图像的尺寸可能不同(例如由于技术限制,红外成像的范围比可见光成像的范围小,一般来说,红外成像得到的图像比可见光成像得到的图像小),且目标物体在第一图像和第二图像中的位置不同。但是由于第一拍摄组件与第二拍摄组件之间的相对位置关系可以确定,因此一旦确定了目标物体在一个图像中的位置,便可以很容易的换算出目标物体在另一个图像中的位置。
可以看出，热红外图像相对于光学图像来说纹理信息不足，如果直接在热红外图像上做目标识别与跟踪，效果很差；但热红外图像又具有不受光线和遮挡阻碍、便可以对环境中具有热辐射的物体进行成像的优点，因此基于热红外图像的目标追踪方法具有十分重要的现实意义。本申请利用可见光拍摄装置输出的光学图像实现对目标物体的识别以获取目标物体在所述光学图像中的跟踪位置区域，根据所述目标物体在所述光学图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态，以使所述目标物体位于拍摄装置中的热红外拍摄装置的拍摄画面中，这样即便热红外拍摄装置输出的热红外图像不能或者不能准确地进行目标物体的识别，依然可以实现热红外拍摄装置对目标物体的追踪。
在一个实施例中，用户可以基于第一拍摄组件输出的第一图像指定需要进行追踪的目标物体，从而实现对目标物体的追踪。具体的，如图1所示，在第一拍摄组件和第二拍摄组件分别对环境进行拍摄得到第一图像和第二图像之后，可移动平台向控制终端发送第一图像以使控制终端显示第一图像；然后用户在控制终端上显示的第一图像中进行选择操作，例如在第一图像上框出包含有待追踪的目标物体的区域；控制终端根据该包含待追踪的目标物体的区域生成第一区域指示信息，并将该第一区域指示信息发送给可移动平台。可移动平台在接收到控制终端发送的第一区域指示信息之后，根据该第一区域指示信息，以及第一拍摄组件与所述第二拍摄组件的相对位置关系，确定得到第二图像的第二区域指示信息，并对第二图像中由该第二区域指示信息所指示的区域进行目标识别，以确定目标物体，并得到目标物体在第二图像中的跟踪位置区域。例如，对第二图像中由该第二区域指示信息所指示的区域进行目标识别，识别区域内的目标物体，其中，所述识别可以是通过神经网络来识别。在确定了所述目标物体之后，即可以从第二拍摄组件输出的第二图像中识别所述确定的目标物体以获取所述目标物体在所述第二图像中的跟踪位置区域。进一步地，从第二拍摄组件输出的第二图像中识别所述确定的目标物体可以通过神经网络来识别，也可以通过图像跟踪来识别。
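上述"第一区域指示信息 → 第二区域指示信息 → 区域内目标识别"的流程可以写成如下草图。to_second_image 与 detect 为假设的注入函数，分别代表区域换算与目标识别模型（例如神经网络），并非本申请限定的实现：

```python
def determine_target(region1_info, to_second_image, detect):
    """根据第一区域指示信息确定第二图像中待追踪的目标物体。
    region1_info: 第一图像中用户选择的区域 (x, y, w, h)；
    to_second_image: 按两组件相对位置关系换算区域的函数；
    detect: 在给定区域内做目标识别的函数，返回相对该区域的
            局部位置框 (x, y, w, h)，未识别到目标时返回 None。
    返回目标物体在第二图像中的跟踪位置区域。"""
    rx, ry, rw, rh = to_second_image(region1_info)   # 第二区域指示信息
    local = detect((rx, ry, rw, rh))                 # 仅在指示区域内识别
    if local is None:
        return None
    lx, ly, lw, lh = local
    return (rx + lx, ry + ly, lw, lh)                # 换回第二图像全局坐标
```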
在一个实施例中，用户可以在控制终端查看基于第一图像的目标追踪结果。在确定得到目标物体在第二图像中的跟踪位置区域之后，根据目标物体在第二图像中的跟踪位置区域，以及第一拍摄组件和第二拍摄组件之间的相对位置关系，确定目标物体在第一图像中的跟踪位置区域，然后根据目标物体在第一图像中的跟踪位置区域，在第一图像中标记出目标物体，并将第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中，以丰富标记之后的第一图像的轮廓特征，最后向控制终端发送标记之后的第一图像以使控制终端显示标记之后的第一图像。
可以理解的是,本发明实施例描述的系统架构以及业务场景是为了更加清楚的说明本发明实施例的技术方案,并不构成对于本发明实施例提供的技术方案的限定,本领域普通技术人员可知,随着系统架构的演变和新业务场景的出现,本发明实施例提供的技术方案对于类似的技术问题,同样适用。
基于上述的描述,本发明实施例在图3中提出了一种更加详细的目标追踪方法,该目标追踪方法可以由前述的可移动平台来执行。
在S301中，可移动平台调用第二拍摄组件对环境进行拍摄，得到第二图像；与此同时，还可以调用第一拍摄组件对环境进行拍摄，得到第一图像。第一拍摄组件为热红外拍摄装置，第二拍摄组件为可见光拍摄装置，两者的成像方式不同：第一拍摄组件以热红外成像方式成像，得到为热红外图像的第一图像；第二拍摄组件以可见光成像方式成像，得到为光学图像的第二图像。
需要说明的是,由于第一拍摄组件和第二拍摄组件的成像方式的不同,可能导致目标物体在第一图像和第二图像中的位置不同,但是由于第一拍摄组件与第二拍摄组件之间的相对位置关系可以确定,因此一旦确定了目标物体在一个图像中的位置,便可以很容易的换算出目标物体在另一个图像中的位置。
在S302中，对上述第二图像进行目标物体识别，以识别得到第二图像中的目标物体，并分割得到待跟踪的目标物体在第二图像中的跟踪位置区域。其中，目标物体识别是通过目标检测和目标分割的图像处理方法来分别确定第二图像中的目标物体以及目标物体的跟踪位置区域，目标检测和目标分割可以是传统的目标检测方法和目标分割方法，也可以是基于深度学习（例如神经网络）的目标检测方法和目标分割方法，本发明实施例对此不做限定。
在S303中，根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态，以使目标物体位于第一拍摄组件的拍摄画面中。其中，可移动平台可以通过改变可移动平台自身整体（例如机身）的姿态来调整拍摄装置的拍摄姿态，还可以通过与拍摄装置连接的云台装置来控制拍摄装置调整拍摄姿态，即通过调整云台装置的姿态以调整拍摄装置的拍摄姿态。
具体的,由于第一拍摄组件和第二拍摄组件之间的相对位置可知,因此第一图像和第二图像之间的对应关系也可知,从而当目标物体在第二图像中的预设位置区域中时,目标物体也应该在第一拍摄组件的拍摄画面中,进一步地,可以在所述拍摄画面的目标位置区域。上述根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态可以指的是,根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态,使得在拍摄装置的拍摄姿态调整之后,保证目标物体处于第二拍摄组件拍摄的图像的预设位置区域中,从而使得目标物体位于第一拍摄组件的拍摄画面中,进一步地,可以在所述拍摄画面的目标位置区域。
可选的,可以从更直观的方式来保证拍摄装置的拍摄姿态调整之后,目标物体在第一拍摄组件的拍摄画面中,进一步地,可以在所述拍摄画面的目标位置区域。具体的,上述根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态还可以指的是,先根据目标物体在第二图像中的跟踪位置区域,确定目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域,然后根据目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域调整拍摄装置的拍摄姿态。
可以看出，热红外图像相对于光学图像来说纹理信息不足，如果直接在热红外图像上做目标识别与跟踪，效果很差；但热红外图像又具有不受光线和遮挡阻碍、便可以对环境中具有热辐射的物体进行成像的优点，因此基于热红外图像的目标追踪方法是十分重要的。通过本申请则可以很好地解决该问题，因为本申请利用可见光拍摄装置输出的光学图像实现对目标物体的识别以获取目标物体在所述光学图像中的跟踪位置区域，根据所述目标物体在所述光学图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态，以使所述目标物体位于拍摄装置中的热红外拍摄装置的拍摄画面中，这样即便热红外拍摄装置输出的热红外图像不能或者不能准确地进行目标物体的识别，依然可以实现热红外拍摄装置对目标物体的追踪。
在一个实施例中，在拍摄装置的两个拍摄组件拍摄环境得到第一图像和第二图像之后，先将第一图像发送给可移动平台的控制终端，以使控制终端显示该第一图像，从而使得用户可以在该控制终端上对第一图像进行选择操作（例如框出目标物体所在的区域）；然后控制终端根据该选择操作得到用于指示用户在第一图像中所选择区域的第一区域指示信息，并将该第一区域指示信息发送给可移动平台。可移动平台接收到该第一区域指示信息之后，参照以上所描述的根据目标物体在第二图像中的跟踪位置区域确定目标物体在第一图像中的位置的过程，根据该第一区域指示信息，以及第一拍摄组件与第二拍摄组件的相对位置关系，确定得到第二区域指示信息，该第二区域指示信息用于指示用户在第一图像中所选择区域映射在第二图像中的区域，最后对第二图像中由该第二区域指示信息所指示的区域进行目标识别，以确定目标物体，并得到目标物体在第二图像中的跟踪位置区域。
可见，在本发明实施例中，在环境中包含有多个物体时，用户可以对目标物体进行指定，从而提高了本申请的目标追踪效率。
在一个实施例中，在S302确定了目标物体在第二图像中的跟踪位置区域之后，根据目标物体在第二图像中的跟踪位置区域，确定目标物体在所述第一图像中的跟踪位置区域，然后根据目标物体在第一图像中的跟踪位置区域，在第一图像中标记出该目标物体（例如图2中的第一图像所示的，目标物体所在位置被框出），最后向可移动平台的控制终端发送标记之后的第一图像以使控制终端显示标记之后的第一图像。其中，根据目标物体在第二图像中的跟踪位置区域确定目标物体在第一图像中的跟踪位置区域在上文中已有详细说明，在此不再赘述。
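"在第一图像中标记出目标物体"可以示意为在换算得到的跟踪位置区域上画出矩形框。这里把第一图像简化为灰度二维列表，标记方式仅为示意：

```python
def mark_target(image, bbox, value=255):
    """在第一图像（灰度二维列表）中按跟踪位置区域 bbox=(x, y, w, h)
    画出矩形框以标记目标物体：将框四条边上的像素置为 value。"""
    x, y, w, h = bbox
    for c in range(x, x + w):
        image[y][c] = value              # 上边
        image[y + h - 1][c] = value      # 下边
    for r in range(y, y + h):
        image[r][x] = value              # 左边
        image[r][x + w - 1] = value      # 右边
    return image
```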
可见,本发明实施例通过对第二图像进行目标物体识别,从而确定目标物体在第二图像的跟踪位置区域,然后根据目标物体在第二图像中的跟踪位置区域确定目标物体在第一图像中的跟踪位置区域,并在第一图像中标记出目标物体,最后通过控制终端展示给用户,从而实现了基于第一图像的间接的目标追踪,尤其是当第一图像是热红外图像时,本发明实施例可以实现基于热红外图像的目标追踪,具有十分重要的实用价值。
在一个实施例中，在上一实施例的基础上，当在第一图像中标记出所述目标物体之后，先提取第二图像信息中的全部或者部分图像信息，并将第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中，以丰富标记之后的第一图像的轮廓特征，然后再将标记之后的第一图像发送给控制终端，使得最终在控制终端呈现给用户的第一图像不仅被标记出了目标物体，而且图像的细节也被大大丰富，在一定程度上弥补了热成像图像等图像细节不丰富的缺点。
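"将第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中"可以示意为对两幅已对齐的灰度图做加权融合。alpha 为假设的叠加权重；实际实现通常需先按相对位置关系对齐两幅图像，且叠加的也可以只是边缘等部分信息：

```python
def enrich(first_img, second_img, alpha=0.3):
    """把第二图像（光学图像）的灰度细节按权重 alpha 叠加到第一图像
    （热红外图像）上，以丰富标记之后的第一图像的轮廓特征。
    两幅图像均为同尺寸的灰度二维列表。"""
    h, w = len(first_img), len(first_img[0])
    return [[(1 - alpha) * first_img[r][c] + alpha * second_img[r][c]
             for c in range(w)] for r in range(h)]
```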
可见,本发明实施例不仅可以基于热成像图像来实现目标追踪,还可以通过热成像图像来与用户进行交互,并且利用光学图像的细节来丰富了原本细节不丰富的热成像图像的轮廓细节,使得热成像图像的实用性大大提高。
在另一种实施方式中，本发明实施例在图4中还提出了另一种目标追踪方法，该目标追踪方法可以由前述的可移动平台来执行。
在S401中，调用可移动平台的拍摄装置的第一拍摄组件和第二拍摄组件对环境进行拍摄，得到第一图像和第二图像。第一拍摄组件为热红外拍摄装置，第二拍摄组件为可见光拍摄装置，两者的成像方式不同：第一拍摄组件以热红外成像方式成像，得到为热红外图像的第一图像；第二拍摄组件以可见光成像方式成像，得到为光学图像的第二图像。
在S402中,向可移动平台的控制终端发送上述第一图像以使控制终端显示上述第一图像。
在S403中,获取控制终端发送的第一区域指示信息。
在S404中,根据上述第一区域指示信息,确定第二图像中的待追踪的目标物体,并得到目标物体在第二图像中的跟踪位置区域。具体的,根据第一区域指示信息,以及第一拍摄组件与第二拍摄组件的相对位置关系,确定得到第二区域指示信息,然后对第二图像中由第二区域指示信息所指示的区域进行目标识别,以确定目标物体,并得到目标物体在第二图像中的跟踪位置区域。
在S405中，根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态，以使目标物体位于第一拍摄组件的拍摄画面中。其中，可移动平台可以通过改变可移动平台自身整体的姿态来调整拍摄装置的拍摄姿态，还可以通过与拍摄装置连接的云台装置来控制拍摄装置调整拍摄姿态，即通过调整云台装置的姿态以调整拍摄装置的拍摄姿态。
具体的,上述根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态可以是,将目标物体调整至第二拍摄组件拍摄的图像的预设位置区域中,使得该目标物体在第一拍摄组件的拍摄画面中,进一步地,可以在所述拍摄画面的目标位置区域。
可选的,上述根据目标物体在第二图像中的跟踪位置区域调整拍摄装置的拍摄姿态还可以是,根据目标物体在第二图像中的跟踪位置区域,以及第一拍摄组件与第二拍摄组件之间的相对位置关系,确定目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域,然后根据目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域调整拍摄装置的拍摄姿态,使得该目标物体在第一拍摄组件的拍摄画面中,进一步地,可以在所述拍摄画面的目标位置区域。
在S406中,根据目标物体在第二图像中的跟踪位置区域,确定目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域。具体的,根据目标物体在第二图像中的跟踪位置区域,以及第一拍摄组件与第二拍摄组件之间的相对位置关系,确定目标物体在第一图像中的跟踪位置区域。
在S407中,根据目标物体在第一图像中的跟踪位置区域,在第一图像中标记出目标物体。
在一个实施例中,在第一图像中标记出目标物体之后,提取第二图像中的细节信息来丰富第一图像中的轮廓特征。具体的,提取第二图像信息中的全部或者部分图像信息,然后将第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中,以丰富标记之后的第一图像的轮廓特征。
在S408中,向可移动平台的控制终端发送标记之后的第一图像以使控制终端显示该标记之后的第一图像。
需要说明的是,上文对各个实施例的描述倾向于强调各个实施例之间的不同之处,其相同或相似之处可以互相参考,为了简洁,本文不再赘述。
基于上述方法实施例的描述，在一种实施方式中，本发明实施例还提供了一种如图5所示的目标追踪装置的结构示意图，该目标追踪装置应用于可移动平台，可移动平台包括拍摄装置，拍摄装置包括第一拍摄组件和第二拍摄组件，该目标追踪装置包括：
调用单元510,用于调用上述第二拍摄组件对环境进行拍摄,得到第二图像,上述第一拍摄组件和第二拍摄组件的成像方式不同;识别单元520,用于对上述第二图像进行目标物体识别,得到待跟踪的目标物体在上述第二图像中的跟踪位置区域;追踪单元530,用于根据上述目标物体在上述第二图像中的跟踪位置区域调整上述拍摄装置的拍摄姿态,以使上述目标物体位于上述第一拍摄组件的拍摄画面中。
具体的,上述可移动平台包括承载上述拍摄装置的云台装置,上述追踪单元530具体用于根据上述目标物体在上述第二图像中的跟踪位置区域调整上述可移动平台的姿态和/或上述云台装置的姿态以调整上述拍摄装置的拍摄姿态。
需要说明的是，上述第一拍摄组件为热红外拍摄装置；上述第二拍摄组件为可见光拍摄装置，上述第二图像为光学图像。
具体的，上述追踪单元530具体用于根据上述目标物体在上述第二图像中的跟踪位置区域调整上述拍摄装置的拍摄姿态，在上述拍摄装置的拍摄姿态调整之后，上述目标物体处于上述第二拍摄组件的拍摄画面的预设位置区域中。
可选的,上述目标追踪装置还包括确定单元540,用于根据上述目标物体在上述第二图像中的跟踪位置区域,确定上述目标物体在上述第一拍摄组件的拍摄画面中的跟踪位置区域;上述追踪单元530,具体用于根据上述目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域调整上述拍摄装置的拍摄姿态。
在一个实施例中,所述调用单元510,还用于调用所述第一拍摄组件对环境进行拍摄,得到第一图像;上述目标追踪装置还包括发送单元550,用于向上述可移动平台的控制终端发送上述第一图像以使上述控制终端显示上述第一图像;上述目标追踪装置还包括获取单元560,用于获取上述控制终端发送的第一区域指示信息,其中,上述第一区域指示信息是上述控制终端通过检测用户在上述控制终端显示的第一图像上的目标物体选择操作来确定的;上述识别单元520,具体用于根据上述第一区域指示信息,确定上述第二图像中的待追踪的目标物体,并得到上述目标物体在上述第二图像中的跟踪位置区域。
在一个实施例中，上述识别单元520，具体用于根据上述第一区域指示信息，以及上述第一拍摄组件与上述第二拍摄组件的相对位置关系，确定得到第二区域指示信息；对上述第二图像中由上述第二区域指示信息所指示的区域进行目标识别，以确定目标物体，并得到上述目标物体在上述第二图像中的跟踪位置区域。
在一个实施例中,所述调用单元510,还用于调用所述第一拍摄组件对环境进行拍摄,得到第一图像;上述目标追踪装置还包括确定单元540,用于根据上述目标物体在上述第二图像中的跟踪位置区域,确定上述目标物体在上述第一拍摄组件的拍摄画面中的跟踪位置区域;上述目标追踪装置还包括标记单元570,用于根据上述目标物体在上述第一拍摄组件的拍摄画面中的跟踪位置区域,在上述第一图像中标记出上述目标物体;上述目标追踪装置还包括发送单元550,用于向上述可移动平台的控制终端发送标记之后的第一图像以使上述控制终端显示上述标记之后的第一图像。
在一个实施例中,上述目标追踪装置还包括提取单元580,用于提取上述第二图像信息中的全部或者部分图像信息;上述目标追踪装置还包括添加单元590,用于将上述第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中,以丰富上述标记之后的第一图像的轮廓特征。
在一个实施例中,上述确定单元540,具体用于根据上述目标物体在上述第二图像中的跟踪位置区域,以及上述第一拍摄组件与上述第二拍摄组件的相对位置关系,确定上述目标物体在上述第一图像中的跟踪位置区域。
基于上述方法实施例的描述,本发明实施例还提供了一种如图6所示的可移动平台的结构示意图,该可移动平台的内部结构可以至少包括处理器610、存储器620和拍摄装置630,上述拍摄装置包括第一拍摄组件631和第二拍摄组件632,上述处理器610、存储器620和拍摄装置630可通过总线640或其他方式连接,在本发明实施例所示图6中以通过总线连接为例。此处的存储器620可以用于存储计算机程序,上述计算机程序包括程序指令,此处的处理器610可以用于执行存储器620中所存储的程序指令。
在一种实施方式中，该处理器610可以是中央处理单元（Central Processing Unit，CPU），还可以是其他通用处理器（即微处理器或者任何常规的处理器），也可以是数字信号处理器（Digital Signal Processor，DSP）、专用集成电路（Application Specific Integrated Circuit，ASIC）、现成可编程门阵列（Field-Programmable Gate Array，FPGA）或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。
该存储器620可以包括只读存储器和随机存取存储器，并向处理器610提供指令和数据。本发明实施例在此对处理器610和存储器620的具体形式不作限定。
在本发明实施例中,由处理器610加载并执行计算机存储介质中存放的一条或一条以上指令,以实现上述相应实施例中的方法的相应步骤;具体实现中,计算机存储介质中的至少一条指令由处理器610加载并执行。具体的:
上述拍摄装置630用于对环境进行拍摄;
上述存储器620用于存储计算机程序,上述计算机程序包括程序指令;
上述处理器610被配置用于调用上述程序指令,用于调用上述第二拍摄组件对环境进行拍摄,得到第二图像,上述第一拍摄组件和第二拍摄组件的成像方式不同;还用于对上述第二图像进行目标物体识别,得到待跟踪的目标物体在上述第二图像中的跟踪位置区域;还用于根据上述目标物体在上述第二图像中的跟踪位置区域调整上述拍摄装置的拍摄姿态,以使上述目标物体位于上述第一拍摄组件的拍摄画面中。
具体的,上述处理器610具体用于根据上述目标物体在上述第二图像中的跟踪位置区域调整上述可移动平台的姿态和/或上述云台装置的姿态以调整上述拍摄装置的拍摄姿态。
需要说明的是,上述第一拍摄组件631为热红外拍摄装置;上述第二拍摄组件632为可见光拍摄装置,上述第二图像为光学图像。
具体的,上述处理器610具体用于根据上述目标物体在上述第二图像中的跟踪位置区域调整上述拍摄装置的拍摄姿态,在上述拍摄装置的拍摄姿态调整之后,上述目标物体处于上述第二拍摄组件的拍摄画面的预设位置区域中。
可选的,上述处理器具体用于根据上述目标物体在上述第二图像中的跟踪位置区域,确定上述目标物体在上述第一拍摄组件的拍摄画面中的跟踪位置区域;根据上述目标物体在第一拍摄组件的拍摄画面中的跟踪位置区域调整上述拍摄装置的拍摄姿态。
在一个实施例中，所述处理器610，还用于调用所述第一拍摄组件对环境进行拍摄，得到第一图像；上述可移动平台还包括通信接口650，上述通信接口用于上述可移动平台与其他终端设备进行数据交互，具体用于向上述可移动平台的控制终端发送上述第一图像以使上述控制终端显示上述第一图像；上述处理器610，具体用于获取上述控制终端发送的第一区域指示信息，其中，上述第一区域指示信息是上述控制终端通过检测用户在上述控制终端显示的第一图像上的目标物体选择操作来确定的；根据上述第一区域指示信息，确定上述第二图像中的待追踪的目标物体，并得到上述目标物体在上述第二图像中的跟踪位置区域。
具体的,上述处理器610具体用于根据上述第一区域指示信息,以及上述第一拍摄组件与上述第二拍摄组件的相对位置关系,确定得到第二区域指示信息;对上述第二图像中由上述第二区域指示信息所指示的区域进行目标识别,以确定目标物体,并得到上述目标物体在上述第二图像中的跟踪位置区域。
在一个实施例中，所述处理器610，还用于调用所述第一拍摄组件对环境进行拍摄，得到第一图像；上述处理器610，具体用于根据上述目标物体在上述第二图像中的跟踪位置区域，确定上述目标物体在上述第一图像中的跟踪位置区域；上述处理器610，还用于根据上述目标物体在上述第一图像中的跟踪位置区域，在上述第一图像中标记出上述目标物体。
在一个实施例中,上述可移动平台还包括通信接口650,上述通信接口用于向上述可移动平台的控制终端发送标记之后的第一图像以使上述控制终端显示上述标记之后的第一图像。
在一个实施例中,上述处理器610还用于提取上述第二图像信息中的全部或者部分图像信息;将上述第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中,以丰富上述标记之后的第一图像的轮廓特征。
具体的,上述处理器610具体用于根据上述目标物体在上述第二图像中的跟踪位置区域,以及上述第一拍摄组件与上述第二拍摄组件的相对位置关系,确定上述目标物体在上述第一图像中的跟踪位置区域。
本发明实施例还提供一种控制设备，所述控制设备与拍摄装置通信连接，所述拍摄装置包括第一拍摄组件和第二拍摄组件，所述控制设备包括存储器和处理器，其中，
所述存储器用于存储计算机程序,所述计算机程序包括程序指令;
所述处理器被配置用于调用所述程序指令,用于:
调用所述第二拍摄组件对环境进行拍摄,得到第二图像,所述第一拍摄组件和第二拍摄组件的成像方式不同;
对所述第二图像进行目标物体识别,得到待跟踪的目标物体在所述第二图像中的跟踪位置区域;
根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,以使所述目标物体位于所述第一拍摄组件的拍摄画面中。
其中,所述控制设备设置在可移动平台中,可移动平台包括所述控制设备和所述拍摄装置,所述控制设备可以与所述拍摄装置通信连接。控制设备的处理器执行如前所述的方法步骤,请具体参见前述部分,此处不再赘述。
需要说明的是,上述描述的可移动平台的具体工作过程,可以参考前述各个实施例中的相关描述,在此不再赘述。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,图像处理设备,或者网络设备等)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,上述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,上述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
以上所揭露的仅为本发明的部分实施例而已，当然不能以此来限定本发明之权利范围。本领域普通技术人员可以理解实现上述实施例的全部或部分流程，并依本发明权利要求所作的等同变化，仍属于本发明所涵盖的范围。

Claims (32)

  1. 一种目标追踪方法,其特征在于,所述方法应用于可移动平台,所述可移动平台包括拍摄装置,所述拍摄装置包括第一拍摄组件和第二拍摄组件,所述第一拍摄组件和第二拍摄组件的成像方式不同,所述方法包括:
    调用所述第二拍摄组件对环境进行拍摄,得到第二图像;
    对所述第二图像进行目标物体识别,得到待跟踪的目标物体在所述第二图像中的跟踪位置区域;
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,以使所述目标物体位于所述第一拍摄组件的拍摄画面中。
  2. 根据权利要求1所述的方法,其特征在于,所述可移动平台包括承载所述拍摄装置的云台装置,所述根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,包括:
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述可移动平台的姿态和/或所述云台装置的姿态以调整所述拍摄装置的拍摄姿态。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一拍摄组件为热红外拍摄装置;所述第二拍摄组件为可见光拍摄装置,所述第二图像为光学图像。
  4. 根据权利要求1至3任意一项所述的方法,其特征在于,所述根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,包括:
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,在所述拍摄装置的拍摄姿态调整之后,所述目标物体处于所述第二拍摄组件的拍摄画面的预设位置区域中。
  5. 根据权利要求1至3任意一项所述的方法，其特征在于，所述根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态，包括：
    根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一拍摄组件的拍摄画面中的跟踪位置区域;
    根据所述目标物体在所述第一拍摄组件的拍摄画面中的跟踪位置区域调整所述拍摄装置的拍摄姿态。
  6. 根据权利要求1至5任意一项所述的方法,其特征在于,所述方法还包括:调用所述第一拍摄组件对环境进行拍摄,得到第一图像;向所述可移动平台的控制终端发送所述第一图像以使所述控制终端显示所述第一图像;
    所述对所述第二图像进行目标物体识别,得到待跟踪的目标物体在所述第二图像中的跟踪位置区域包括:
    获取所述控制终端发送的第一区域指示信息,其中,所述第一区域指示信息是所述控制终端通过检测用户在所述控制终端显示的第一图像上的目标物体选择操作来确定的;
    根据所述第一区域指示信息,确定所述第二图像中的待追踪的目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域。
  7. 根据权利要求6所述的方法,其特征在于,所述根据所述第一区域指示信息,确定所述第二图像中的待追踪的目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域,包括:
    根据所述第一区域指示信息,以及所述第一拍摄组件与所述第二拍摄组件的相对位置关系,确定得到第二区域指示信息;
    对所述第二图像中由所述第二区域指示信息所指示的区域进行目标识别,以确定目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域。
  8. 根据权利要求1至7任意一项所述的方法,其特征在于,所述方法还包括:
    调用所述第一拍摄组件对环境进行拍摄,得到第一图像;
    根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一图像中的跟踪位置区域;
    根据所述目标物体在所述第一图像中的跟踪位置区域,在所述第一图像中标记出所述目标物体;
    向所述可移动平台的控制终端发送标记之后的第一图像以使所述控制终端显示所述标记之后的第一图像。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述目标物体在所述第一图像中的跟踪位置区域,在所述第一图像中标记出所述目标物体之后,所述向所述可移动平台的控制终端发送标记之后的第一图像以使所述控制终端显示所述标记之后的第一图像之前,还包括:
    提取所述第二图像信息中的全部或者部分图像信息;
    将所述第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中,以丰富所述标记之后的第一图像的轮廓特征。
  10. 根据权利要求8所述的方法,其特征在于,所述根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一图像中的跟踪位置区域,包括:
    根据所述目标物体在所述第二图像中的跟踪位置区域,以及所述第一拍摄组件与所述第二拍摄组件的相对位置关系,确定所述目标物体在所述第一图像中的跟踪位置区域。
  11. 一种目标追踪装置,其特征在于,所述目标追踪装置应用于可移动平台,所述可移动平台包括拍摄装置,所述拍摄装置包括第一拍摄组件和第二拍摄组件,所述目标追踪装置包括:
    调用单元,用于调用所述第二拍摄组件对环境进行拍摄,得到第二图像,所述第一拍摄组件和第二拍摄组件的成像方式不同;
    识别单元,用于对所述第二图像进行目标物体识别,得到待跟踪的目标物体在所述第二图像中的跟踪位置区域;
    追踪单元,用于根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,以使所述目标物体位于所述第一拍摄组件的拍摄画面中。
  12. 根据权利要求11所述的装置,其特征在于,所述可移动平台包括承载所述拍摄装置的云台装置,所述追踪单元具体用于根据所述目标物体在所述第二图像中的跟踪位置区域调整所述可移动平台的姿态和/或所述云台装置的姿态以调整所述拍摄装置的拍摄姿态。
  13. 根据权利要求11或12所述的装置,其特征在于,所述第一拍摄组件为热红外拍摄装置;所述第二拍摄组件为可见光拍摄装置,所述第二图像为光学图像。
  14. 根据权利要求11至13任意一项所述的装置，其特征在于，所述追踪单元具体用于根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态，在所述拍摄装置的拍摄姿态调整之后，所述目标物体处于所述第二拍摄组件的拍摄画面的预设位置区域中。
  15. 根据权利要求11至13任意一项所述的装置,其特征在于,包括:
    所述目标追踪装置还包括确定单元,用于根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一拍摄组件的拍摄画面中的跟踪位置区域;
    所述追踪单元,具体用于根据所述目标物体在所述第一拍摄组件的拍摄画面中的跟踪位置区域调整所述拍摄装置的拍摄姿态。
  16. 根据权利要求11至15任意一项所述的装置,其特征在于,
    所述调用单元,还用于调用所述第一拍摄组件对环境进行拍摄,得到第一图像;
    所述目标追踪装置还包括发送单元,用于向所述可移动平台的控制终端发送所述第一图像以使所述控制终端显示所述第一图像;
    所述目标追踪装置还包括获取单元,用于获取所述控制终端发送的第一区域指示信息,其中,所述第一区域指示信息是所述控制终端通过检测用户在所述控制终端显示的第一图像上的目标物体选择操作来确定的;
    所述识别单元,具体用于根据所述第一区域指示信息,确定所述第二图像中的待追踪的目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域。
  17. 根据权利要求16所述的装置,其特征在于,所述识别单元,具体用于:
    根据所述第一区域指示信息,以及所述第一拍摄组件与所述第二拍摄组件的相对位置关系,确定得到第二区域指示信息;
    对所述第二图像中由所述第二区域指示信息所指示的区域进行目标识别,以确定目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域。
  18. 根据权利要求11至17任意一项所述的装置,其特征在于,
    所述调用单元,还用于调用所述第一拍摄组件对环境进行拍摄,得到第一图像;
    所述目标追踪装置还包括确定单元,用于根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一图像中的跟踪位置区域;
    所述目标追踪装置还包括标记单元,用于根据所述目标物体在所述第一图像中的跟踪位置区域,在所述第一图像中标记出所述目标物体;
    所述目标追踪装置还包括发送单元,用于向所述可移动平台的控制终端发送标记之后的第一图像以使所述控制终端显示所述标记之后的第一图像。
  19. 根据权利要求18所述的装置,其特征在于,包括:
    所述目标追踪装置还包括提取单元,用于提取所述第二图像信息中的全部或者部分图像信息;
    所述目标追踪装置还包括添加单元,用于将所述第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中,以丰富所述标记之后的第一图像的轮廓特征。
  20. 根据权利要求18所述的装置,其特征在于,所述确定单元,具体用于根据所述目标物体在所述第二图像中的跟踪位置区域,以及所述第一拍摄组件与所述第二拍摄组件的相对位置关系,确定所述目标物体在所述第一图像中的跟踪位置区域。
  21. 一种可移动平台,其特征在于,包括处理器、存储器和拍摄装置,所述拍摄装置包括第一拍摄组件和第二拍摄组件,所述第一拍摄组件和第二拍摄组件的成像方式不同,其中:
    所述拍摄装置用于对环境进行拍摄;
    所述存储器用于存储计算机程序,所述计算机程序包括程序指令;
    所述处理器被配置用于调用所述程序指令,用于:
    调用所述第二拍摄组件对环境进行拍摄,得到第二图像;
    对所述第二图像进行目标物体识别,得到待跟踪的目标物体在所述第二图像中的跟踪位置区域;
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,以使所述目标物体位于所述第一拍摄组件的拍摄画面中。
  22. 根据权利要求21所述的可移动平台,其特征在于,所述可移动平台包括承载所述拍摄装置的云台装置,所述处理器具体用于:
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述可移动平台的姿态和/或所述云台装置的姿态以调整所述拍摄装置的拍摄姿态。
  23. 根据权利要求21或22所述的可移动平台,其特征在于,所述第一拍摄组件为热红外拍摄装置;所述第二拍摄组件为可见光拍摄装置,所述第二图像为光学图像。
  24. 根据权利要求21至23任意一项所述的可移动平台,其特征在于,所述处理器具体用于:
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态，在所述拍摄装置的拍摄姿态调整之后，所述目标物体处于所述第二拍摄组件的拍摄画面的预设位置区域中。
  25. 根据权利要求21至23任意一项所述的可移动平台,其特征在于,所述处理器具体用于:
    根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一拍摄组件的拍摄画面中的跟踪位置区域;
    根据所述目标物体在所述第一拍摄组件的拍摄画面中的跟踪位置区域调整所述拍摄装置的拍摄姿态。
  26. 根据权利要求21至25任意一项所述的可移动平台,其特征在于,
    所述处理器,还用于调用所述第一拍摄组件对环境进行拍摄,得到第一图像;所述可移动平台还包括通信接口,具体用于向所述可移动平台的控制终端发送所述第一图像以使所述控制终端显示所述第一图像;
    所述处理器,具体用于:获取所述控制终端发送的第一区域指示信息,其中,所述第一区域指示信息是所述控制终端通过检测用户在所述控制终端显示的第一图像上的目标物体选择操作来确定的;根据所述第一区域指示信息,确定所述第二图像中的待追踪的目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域。
  27. 根据权利要求26所述的可移动平台,其特征在于,所述处理器具体用于:
    根据所述第一区域指示信息,以及所述第一拍摄组件与所述第二拍摄组件的相对位置关系,确定得到第二区域指示信息;
    对所述第二图像中由所述第二区域指示信息所指示的区域进行目标识别,以确定目标物体,并得到所述目标物体在所述第二图像中的跟踪位置区域。
  28. 根据权利要求21至27任意一项所述的可移动平台,其特征在于,
    所述处理器，还用于调用所述第一拍摄组件对环境进行拍摄，得到第一图像；
    所述处理器,具体用于根据所述目标物体在所述第二图像中的跟踪位置区域,确定所述目标物体在所述第一图像中的跟踪位置区域;
    所述处理器,还用于根据所述目标物体在所述第一图像中的跟踪位置区域,在所述第一图像中标记出所述目标物体;
    所述可移动平台还包括通信接口,所述通信接口用于向所述可移动平台的控制终端发送标记之后的第一图像以使所述控制终端显示所述标记之后的第一图像。
  29. 根据权利要求28所述的可移动平台,其特征在于,所述处理器还用于:
    提取所述第二图像信息中的全部或者部分图像信息;
    将所述第二图像信息中的全部或者部分图像信息添加到标记之后的第一图像中,以丰富所述标记之后的第一图像的轮廓特征。
  30. 根据权利要求28所述的可移动平台,其特征在于,所述处理器具体用于:
    根据所述目标物体在所述第二图像中的跟踪位置区域,以及所述第一拍摄组件与所述第二拍摄组件的相对位置关系,确定所述目标物体在所述第一图像中的跟踪位置区域。
  31. 一种计算机可读存储介质,其特征在于,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行,用以执行如权利要求1-10任一项所述的方法。
  32. 一种控制设备,控制设备与拍摄装置通信连接,所述拍摄装置包括第一拍摄组件和第二拍摄组件,所述第一拍摄组件和第二拍摄组件的成像方式不同,其特征在于,所述控制设备包括存储器和处理器,其中,
    所述存储器用于存储计算机程序,所述计算机程序包括程序指令;
    所述处理器被配置用于调用所述程序指令,用于:
    调用所述第二拍摄组件对环境进行拍摄,得到第二图像;
    对所述第二图像进行目标物体识别,得到待跟踪的目标物体在所述第二图像中的跟踪位置区域;
    根据所述目标物体在所述第二图像中的跟踪位置区域调整所述拍摄装置的拍摄姿态,以使所述目标物体位于所述第一拍摄组件的拍摄画面中。
PCT/CN2019/089248 2019-05-30 2019-05-30 一种目标追踪方法、装置、可移动平台及存储介质 WO2020237565A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201980005359.9A CN111345029B (zh) 2019-05-30 2019-05-30 一种目标追踪方法、装置、可移动平台及存储介质
PCT/CN2019/089248 WO2020237565A1 (zh) 2019-05-30 2019-05-30 一种目标追踪方法、装置、可移动平台及存储介质
EP19877539.7A EP3771198B1 (en) 2019-05-30 2019-05-30 Target tracking method and device, movable platform and storage medium
US16/880,553 US10999519B2 (en) 2019-05-30 2020-05-21 Target tracking method and device, movable platform, and storage medium
US17/222,627 US20210227144A1 (en) 2019-05-30 2021-04-05 Target tracking method and device, movable platform, and storage medium


Publications (1)

Publication Number Publication Date
WO2020237565A1 2020-12-03



Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112840374A (zh) * 2020-06-30 2021-05-25 深圳市大疆创新科技有限公司 图像处理方法、图像获取装置、无人机、无人机系统和存储介质
CN113973171B (zh) * 2020-07-23 2023-10-10 宁波舜宇光电信息有限公司 多摄摄像模组、摄像系统、电子设备和成像方法
CN114401371B (zh) * 2020-08-05 2024-03-26 深圳市浩瀚卓越科技有限公司 追踪控制方法、装置、对象追踪部件和存储介质
WO2022094772A1 (zh) * 2020-11-03 2022-05-12 深圳市大疆创新科技有限公司 位置估计方法、跟随控制方法、设备及存储介质
CN112601022B (zh) * 2020-12-14 2021-08-31 中标慧安信息技术股份有限公司 一种基于网络摄像机的现场监控系统和方法
CN113327271B (zh) * 2021-05-28 2022-03-22 北京理工大学重庆创新中心 基于双光孪生网络决策级目标跟踪方法、系统及存储介质
CN113409358A (zh) * 2021-06-24 2021-09-17 浙江大华技术股份有限公司 图像跟踪方法、装置、存储介质及电子设备
CN115190237B (zh) * 2022-06-20 2023-12-15 亮风台(上海)信息科技有限公司 一种确定承载设备的转动角度信息的方法与设备

Citations (3)

Publication number Priority date Publication date Assignee Title
US20050128291A1 (en) * 2002-04-17 2005-06-16 Yoshishige Murakami Video surveillance system
CN106506941A (zh) * 2016-10-20 2017-03-15 深圳市道通智能航空技术有限公司 图像处理的方法及装置、飞行器
CN108496138A (zh) * 2017-05-25 2018-09-04 深圳市大疆创新科技有限公司 一种跟踪方法及装置

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
JP4107273B2 (ja) * 2004-08-04 2008-06-25 日産自動車株式会社 移動体検出装置
IL199763B (en) * 2009-07-08 2018-07-31 Elbit Systems Ltd Automatic contractual system and method for observation
KR101172747B1 (ko) * 2010-08-16 2012-08-14 한국표준과학연구원 열화상 좌표를 이용한 보안용 카메라 추적 감시 시스템 및 방법
US8527445B2 (en) * 2010-12-02 2013-09-03 Pukoa Scientific, Llc Apparatus, system, and method for object detection and identification
US20140253737A1 (en) * 2011-09-07 2014-09-11 Yitzchak Kempinski System and method of tracking an object in an image captured by a moving device
US9769387B1 (en) * 2013-11-05 2017-09-19 Trace Live Network Inc. Action camera system for unmanned aerial vehicle
US9774797B2 (en) * 2014-04-18 2017-09-26 Flir Systems, Inc. Multi-sensor monitoring systems and methods
CN107577247B (zh) * 2014-07-30 2021-06-25 深圳市大疆创新科技有限公司 目标追踪系统及方法
US9442485B1 (en) * 2014-08-13 2016-09-13 Trace Live Network Inc. Pixel based image tracking system for unmanned aerial vehicle (UAV) action camera system
US10477157B1 (en) * 2016-03-02 2019-11-12 Meta View, Inc. Apparatuses, methods and systems for a sensor array adapted for vision computing
CN105915784A (zh) * 2016-04-01 2016-08-31 纳恩博(北京)科技有限公司 信息处理方法和装置
KR101634966B1 (ko) * 2016-04-05 2016-06-30 삼성지투비 주식회사 Vr 기반의 객체인식 정보를 이용한 영상 추적시스템, 그리고 영상 추적방법
WO2018018514A1 (en) * 2016-07-28 2018-02-01 SZ DJI Technology Co., Ltd. Target-based image exposure adjustment
EP3428884B1 (en) * 2017-05-12 2020-01-08 HTC Corporation Tracking system and tracking method thereof
JP6849272B2 (ja) * 2018-03-14 2021-03-24 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd 無人航空機を制御するための方法、無人航空機、及び無人航空機を制御するためのシステム
US11399137B2 (en) * 2018-08-10 2022-07-26 Aurora Flight Sciences Corporation Object-tracking system


Also Published As

Publication number Publication date
EP3771198B1 (en) 2022-08-24
US20210227144A1 (en) 2021-07-22
CN111345029A (zh) 2020-06-26
EP3771198A4 (en) 2021-06-16
US20200288065A1 (en) 2020-09-10
US10999519B2 (en) 2021-05-04
CN111345029B (zh) 2022-07-08
EP3771198A1 (en) 2021-01-27


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019877539

Country of ref document: EP

Effective date: 20200609

NENP Non-entry into the national phase

Ref country code: DE