WO2019084756A1 - Image processing method, apparatus and aircraft - Google Patents

Image processing method, apparatus and aircraft

Info

Publication number
WO2019084756A1
WO2019084756A1 (PCT/CN2017/108528)
Authority
WO
WIPO (PCT)
Prior art keywords
image
reference object
auxiliary
previous
aircraft
Prior art date
Application number
PCT/CN2017/108528
Other languages
English (en)
French (fr)
Inventor
张伟
刘昂
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201780012764.4A priority Critical patent/CN108780568A/zh
Priority to PCT/CN2017/108528 priority patent/WO2019084756A1/zh
Publication of WO2019084756A1 publication Critical patent/WO2019084756A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformation in the plane of the image
    • G06T3/40 — Scaling the whole image or part thereof
    • G06T3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 — Static hand or arm
    • G06V40/113 — Recognition of static hand signs
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10032 — Satellite or aerial image; Remote sensing

Definitions

  • The present application relates to the field of aerial photography, and in particular to an image processing method, an apparatus, and an aircraft.
  • The principle of clone photography is as follows: the terminal enters panoramic mode; after the terminal captures the first image of the first user, a second user slowly pans the lens until the first user no longer appears in the preview interface; the first user walks behind the terminal (i.e., on the side opposite the lens) to the next shooting scene, and the second user pans the lens to that scene and captures a second image of the first user; the captured images are then stitched by the method above.
  • The result is a stitched image that may contain clones of the first user located in different shooting scenes.
  • However, clone photography as described above requires two cooperating users, and the user must walk behind the terminal to the next shooting scene, which reduces the convenience of operation.
  • In addition, after the first user poses in each shooting scene, the second user must operate the terminal to capture an image, which is cumbersome and reduces image processing efficiency.
  • Embodiments of the invention disclose an image processing method, an apparatus, and an aircraft that can improve image processing efficiency and the convenience of operation.
  • A first aspect of the embodiments of the present invention discloses an image processing method, including:
  • identifying a reference object through an image processing module of the aircraft; acquiring, through the image processing module, a first image containing the reference object; and stitching each of the first images to obtain a stitched image.
  • A second aspect of the embodiments of the present invention discloses an image processing apparatus, including:
  • an identification module configured to identify a reference object through an image processing module of the aircraft;
  • an image acquisition module configured to acquire, through the image processing module, a first image containing the reference object; and
  • an image stitching module configured to stitch each of the first images to obtain a stitched image.
  • A third aspect of the embodiments of the present invention discloses an aircraft, including a memory, a processor, and an image processing module;
  • the memory is configured to store program instructions;
  • the processor is configured to invoke the program instructions and, when the program instructions are executed, perform the following operations:
  • identifying a reference object through the image processing module; acquiring, through the image processing module, a first image containing the reference object; and stitching each of the first images to obtain a stitched image.
  • By recognizing the reference object, clone photography and image stitching can be completed automatically, which improves image processing efficiency. Because the photos are taken automatically, operation is convenient and only one person needs to participate, which helps improve the convenience of operation and the user experience.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
  • FIG. 2A is a schematic diagram of a scene for enabling clone photography according to an embodiment of the present invention;
  • FIG. 2B is a schematic diagram of a scene for tracking and shooting a reference object according to an embodiment of the present invention;
  • FIG. 2C is a schematic diagram of a photographing gesture according to an embodiment of the present invention;
  • FIG. 2D is a schematic diagram of image stitching according to an embodiment of the present invention;
  • FIG. 2E is a schematic diagram of a stitched image according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of an image processing method according to another embodiment of the present invention;
  • FIG. 4A is a schematic diagram of a scene in which a first image does not satisfy the stitching requirement according to an embodiment of the present invention;
  • FIG. 4B is a schematic diagram of a scene in which the previous first image and an auxiliary image overlap according to an embodiment of the present invention;
  • FIG. 4C is a schematic diagram of image stitching according to another embodiment of the present invention;
  • FIG. 4D is a schematic diagram of image stitching according to another embodiment of the present invention;
  • FIG. 4E is a schematic diagram of image stitching according to another embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an aircraft according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention. Specifically, as shown in FIG. 1, the image processing method of the embodiment of the present invention may include the following steps:
  • Specifically, the aircraft can identify the reference object through its image processing module. Taking the tracking-shot scene shown in FIG. 2B as an example, the aircraft can identify the user, determine the user as the reference object, and then track and shoot the reference object. Specifically, the aircraft may use an image matching algorithm, a pedestrian tracking algorithm, or another algorithm to track the reference object.
  • The image processing module may include a camera device, which may be integrated into the aircraft or externally connected to it.
  • The camera device may be a camera module, a standalone camera, or the like.
  • Before identifying the reference object, the aircraft may determine that it is in clone photo mode.
  • The user can control the aircraft to enter clone photo mode by clicking a virtual or physical button on the drone ground control station used to control the aircraft.
  • The user can also control the aircraft to enter clone photo mode by sending a voice message (e.g., "clone photo") to the aircraft.
  • Alternatively, the aircraft enters clone photo mode when it detects the user's clone-photo gesture through the image processing module.
  • After determining that it is in clone photo mode, the aircraft may acquire a first image containing the reference object through the image processing module.
  • After identifying the reference object, the aircraft may acquire a first image containing the reference object.
  • The first image is a key frame used in subsequent image stitching and is also an image the user actually wants.
  • Before acquiring the first image, the aircraft may determine whether a photographing gesture is detected.
  • When the photographing gesture is detected, the aircraft may acquire the first image containing the reference object through the image processing module.
  • For example, when the photographing gesture is detected, the aircraft may acquire the first image through the image processing module, and then stitch the multiple first images to obtain a stitched image.
  • Alternatively, when the photographing gesture is detected, the aircraft may acquire the first image through the image processing module; starting from the acquisition of the first image, second images are acquired at a preset period; auxiliary images are obtained from the second images; and the first images and the auxiliary images are stitched to obtain a stitched image.
  • The photographing gesture may be set by the aircraft by default, or may be set by the user operating the aircraft.
  • For example, the user may capture an image containing a gesture through the aircraft's camera component and set that gesture as the photographing gesture.
  • Alternatively, the user may select the photographing gesture from candidate photographing gestures provided by the aircraft. In this way, users of different aircraft can configure each aircraft to use a photographing gesture they are familiar with, avoiding disruptions to shooting caused by forgetting an aircraft's photographing gesture, which helps improve shooting efficiency and the user experience.
  • The aircraft may stitch each of the first images to obtain a stitched image.
  • There may be two, three, or more first images. It should be noted that the number of first images used for stitching equals the number of clones in the stitched image obtained after stitching.
  • The aircraft may stitch the acquired first images sequentially in shooting order (i.e., by shooting time). For example, if a total of three first images are collected during shooting, the aircraft may first stitch the first and second first images to obtain an intermediate image, and then stitch the intermediate image with the third first image to obtain the stitched image.
  • Alternatively, the aircraft may first stitch the second and third first images to obtain an intermediate image, and then stitch the first first image with the intermediate image to obtain the stitched image.
  • When there are only two first images, the aircraft may stitch them directly.
  • Taking the image stitching shown in FIG. 2D as an example, suppose the overlap ratio between first image a and first image b is 0.2, and the overlap ratio between first image b and first image c is 0.3. The aircraft can directly stitch first image a, first image b, and first image c, where the shaded area on the left is the overlap region corresponding to the overlap ratio between first image a and first image b, and the shaded area on the right is the overlap region corresponding to the overlap ratio between first image b and first image c.
  • By ensuring that the overlap ratios between the first images used in the stitched image satisfy the stitching requirement, the aircraft performs feature point matching in these overlap regions, then performs bundle adjustment (BA) so that the relative positions of the first images are more accurate, then applies exposure compensation to the first images to be stitched and searches for stitching seams, and finally warps and projects them into a stitched image (as shown in FIG. 2E).
  • The required overlap ratio may be in the range of 0.2 to 0.3, or another range, which is not limited in the embodiments of the present invention.
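The sequential stitching order described above can be sketched as follows. This is a minimal illustration only, assuming purely horizontal panning, same-height images represented as lists of pixel rows, and overlap ratios that already satisfy the stitching requirement; feature matching, bundle adjustment, exposure compensation, and seam search are abstracted away, and the function names are illustrative rather than from the patent.

```python
def stitch_pair(left, right, overlap_ratio):
    """Stitch two same-height images (lists of pixel rows) horizontally.

    `overlap_ratio` is the fraction of `right`'s width that overlaps the
    right edge of `left`; the overlapping columns of `right` are dropped.
    """
    overlap = int(round(overlap_ratio * len(right[0])))
    return [l_row + r_row[overlap:] for l_row, r_row in zip(left, right)]


def stitch_in_order(images, ratios):
    """Stitch first images in shooting order: ((img1 + img2) + img3) + ...

    `ratios[i]` is the overlap ratio between images[i] and images[i + 1];
    each ratio is assumed to already satisfy the stitching requirement
    (e.g. 0.2 - 0.3).
    """
    result = images[0]
    for img, r in zip(images[1:], ratios):
        result = stitch_pair(result, img, r)
    return result
```

For three one-row "images" of width 10 with overlap ratios 0.2 and 0.3, the stitched row has width 10 + 8 + 7 = 25, matching the two shaded overlap regions of FIG. 2D.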
  • By recognizing and tracking the reference object, clone photography and image stitching can be completed automatically, which improves image processing efficiency. Because the photos are taken automatically, operation is convenient and only one person needs to participate, which helps improve the convenience of operation and the user experience.
  • FIG. 3 is a schematic flowchart of another image processing method according to an embodiment of the present invention. Specifically, as shown in FIG. 3, this image processing method may include the following steps:
  • The reference object is identified through the image processing module of the aircraft.
  • When the aircraft is in clone photo mode, it can determine whether a recognition gesture is detected; when the recognition gesture is detected, the reference object can be identified through the image processing module of the aircraft. The recognition gesture is used to start clone photography. Taking the scene of enabling clone photography shown in FIG. 2A as an example, when the aircraft detects the recognition gesture (raising both hands), clone photography can be started. Taking the tracking-shot scene shown in FIG. 2B as an example, when the aircraft detects the recognition gesture, it can also identify the user who made the gesture, determine that user as the reference object, and then track and shoot the reference object while waiting to receive an instruction to capture an image containing the reference object.
  • The aircraft may use an image matching algorithm, a pedestrian tracking algorithm, or another algorithm to track the reference object.
  • A clone is the subject left in the image by the reference object at different positions, that is, the reference object itself.
  • The recognition gesture shown in FIG. 2A is only an example and does not limit the present invention.
  • The recognition gesture can also be another gesture, such as raising one hand or making a heart sign.
  • The recognition gesture may be set by the aircraft by default, or may be set by the user operating the aircraft.
  • For example, the user may capture an image containing a gesture through the aircraft's camera component and set that gesture as the recognition gesture.
  • Alternatively, the user may select the recognition gesture from candidate recognition gestures provided by the aircraft. In this way, users of different aircraft can configure each aircraft to use a recognition gesture they are familiar with, avoiding disruptions to shooting caused by forgetting an aircraft's recognition gesture, which helps improve shooting efficiency and the user experience.
  • The aircraft may further collect a gesture image and compare the gesture contained in the gesture image with the recognition gesture; when the similarity between the two is greater than a preset ratio threshold, it determines that the recognition gesture is detected.
  • The aircraft may employ a gesture detection algorithm to measure the similarity between the gesture contained in the gesture image and the recognition gesture. Specifically, the aircraft transforms the edge images of the gesture contained in the gesture image and of the recognition gesture into Euclidean distance space and computes their Hausdorff distance or modified Hausdorff distance; this distance value is used to represent the similarity between the two gestures. It should be noted that the aircraft may also use other recognition algorithms to measure this similarity, which is not limited in the embodiments of the present invention. It should also be noted that the preset ratio threshold may be a fixed value or a variable value that changes with actual conditions.
  • For example, the preset ratio threshold may be 0.8, 0.9, 0.95, or another higher value; it may also be 0.6, 0.7, 0.75, or another lower value. The preset ratio threshold is not limited in the embodiments of the present invention.
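The Hausdorff-distance comparison above can be sketched in a few lines. This is a minimal illustration assuming the gestures have already been reduced to 2-D edge-point sets; since the text does not specify how the distance maps to the preset ratio threshold, this sketch thresholds the distance directly, and the function names are illustrative.

```python
import math


def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (e.g. edge
    points of a detected gesture vs. the stored recognition gesture)."""
    def directed(p, q):
        # Largest distance from any point in p to its nearest point in q.
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))


def gesture_matches(detected_edges, template_edges, max_distance):
    """Treat the gesture as recognized when the edge sets are close enough."""
    return hausdorff(detected_edges, template_edges) <= max_distance
```

A unit square matched against itself gives distance 0; shifting every point up by one unit gives distance 1, so a tight `max_distance` rejects the shifted gesture.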
  • When the photographing gesture is detected, the aircraft may acquire a first image containing the reference object; the photographing gesture is used to control image capture.
  • Specifically, when the photographing gesture is detected, the aircraft may send a shooting instruction to its image processing module, instructing the shooting component to capture the first image containing the reference object.
  • The first image is a key frame used in subsequent image stitching and is also an image the user actually wants.
  • The photographing gesture may be set by the aircraft by default, or may be set by the user operating the aircraft.
  • For example, the user may capture an image containing a gesture through the aircraft's camera component and set that gesture as the photographing gesture.
  • Alternatively, the user may select the photographing gesture from candidate photographing gestures provided by the aircraft. In this way, users of different aircraft can configure each aircraft to use a photographing gesture they are familiar with, avoiding disruptions to shooting caused by forgetting an aircraft's photographing gesture, which helps improve shooting efficiency and the user experience.
  • The photographing gesture and the recognition gesture are different gestures, so as to prevent the aircraft from misidentifying the photographing gesture as the recognition gesture and failing to photograph normally.
  • The photographing gesture shown in FIG. 2C is only an example and does not limit the present invention. In other embodiments, the photographing gesture may also be another gesture, such as a different two-handed gesture.
  • After the user triggers the start of clone photography with the recognition gesture, the camera records key frames (first images) as the aircraft flies to each shooting point at which a clone needs to be recorded. In this process, the first image acquired at the current shooting point and the previous first image may not satisfy the stitching requirement and therefore cannot be stitched.
  • For example, the scene shown in FIG. 4A does not satisfy the stitching requirement: the overlap ratio between the first image captured at the current shooting point and the previous first image is zero, that is, there is no overlap region; without a supplementary image in between, stitching is impossible.
  • Therefore, in the embodiment of the present invention, starting from the acquisition of the first image, the aircraft acquires second images at a preset period.
  • When the overlap ratio between two adjacent first images does not satisfy the stitching requirement, second images whose overlap ratios do satisfy the requirement are selected from among the second images to serve as transition images.
  • The preset period may be a fixed value or a variable value that changes with actual conditions.
  • For example, the aircraft can determine the preset period by detecting the average moving speed of the reference object. When the average moving speed of the reference object is high, that is, when two adjacent first images are likely to be far apart, the aircraft can set the preset period short, such as 0.1 s or 0.2 s, to obtain more second images as candidate transition images for subsequent stitching.
  • When the average moving speed of the reference object is low, that is, when the distance between two adjacent first images is likely to be small, the aircraft can set the preset period longer, such as 0.5 s or 0.6 s.
  • The number of second images collected between two adjacent first images may be one, two, three, or more; specifically, the number depends on the preset period.
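The speed-dependent sampling period can be summarized as a simple rule. The text only gives example periods (0.1-0.2 s for a fast-moving subject, 0.5-0.6 s for a slow one), so the 1.0 m/s boundary and the concrete return values below are hypothetical assumptions for illustration.

```python
def preset_period(avg_speed_mps: float, fast_threshold: float = 1.0) -> float:
    """Choose the acquisition period for second images from the reference
    object's average moving speed: a fast-moving subject means adjacent
    first images are likely far apart, so sample more often to buffer more
    candidate transition frames; a slow subject allows a longer period.
    The 1.0 m/s threshold and the 0.2 s / 0.5 s periods are assumptions."""
    return 0.2 if avg_speed_mps >= fast_threshold else 0.5
```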
  • The aircraft obtains a first overlap ratio between the current first image and the previous first image, so as to automatically supplement auxiliary images (auxiliary frames) according to the first overlap ratio.
  • The first overlap ratio is greater than or equal to zero and less than 1.
  • When the first overlap ratio is in the first ratio range, the aircraft obtains at least one auxiliary image from the second images, where the acquisition time of each auxiliary image is later than the acquisition time of the previous first image and earlier than the acquisition time of the current first image.
  • The first ratio range does not intersect the overlap ratio range that satisfies the stitching requirement.
  • For example, the first ratio range may be 0 to 0.2 (including 0, excluding 0.2), 0.1 to 0.15, or another range, which is not limited in the embodiments of the present invention.
  • When the first overlap ratio is in the first ratio range, the overlap ratio between the current first image and the previous first image does not meet the stitching requirement, the two cannot be stitched directly, and auxiliary images need to be added during stitching.
  • The auxiliary image is used as a transition image for completing the stitching of the current first image and the previous first image.
  • The auxiliary image is not an image the user requires, and the user is unaware that auxiliary images are used.
  • The number of auxiliary images supplemented between the current first image and the previous first image may be one, two, three, or more, but it is less than or equal to the number of second images acquired between them. Specifically, the number of auxiliary images depends on the overlap ratios between the first and second images and on the acquisition period of the second images.
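The auxiliary-frame selection can be sketched as below. This is a minimal illustration assuming the example first ratio range [0, 0.2) from the text and second images buffered as (timestamp, frame) pairs; the helper names are illustrative, not from the patent.

```python
FIRST_RATIO_RANGE = (0.0, 0.2)  # example range from the text: [0, 0.2)


def needs_auxiliary(first_overlap_ratio: float) -> bool:
    """True when the overlap between the current and previous first images
    fails the stitching requirement, so transition frames must be added."""
    lo, hi = FIRST_RATIO_RANGE
    return lo <= first_overlap_ratio < hi


def pick_auxiliary(second_images, prev_time, curr_time):
    """Select candidate auxiliary frames: each must have been captured
    strictly after the previous first image and strictly before the current
    one. `second_images` is a list of (timestamp, frame) pairs."""
    return [frame for t, frame in second_images if prev_time < t < curr_time]
```

With first images at t = 1.0 and t = 2.0, only the buffered second images captured strictly between those times are eligible as auxiliary frames.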
  • The aircraft can obtain a second overlap ratio between the previous first image and the auxiliary image.
  • When the second overlap ratio is in the second ratio interval, the position of the reference object in the previous first image is obtained, the auxiliary image is compressed according to that position to obtain a compressed auxiliary image, and the previous first image, the compressed auxiliary image, and the current first image are stitched to obtain the stitched image.
  • Take the scene shown in FIG. 4B, in which the previous first image and the auxiliary image overlap, as an example.
  • When the second overlap ratio is in the second ratio interval, the overlap area between the auxiliary image and the previous first image is too large; that is, it is very likely that the auxiliary image covers the reference object contained in the previous first image and the reference object cannot be fully displayed in the stitched image. It is therefore necessary to obtain the position of the reference object in the previous first image and then reduce the overlap area between the auxiliary image and the previous first image so that the overlap area does not contain the reference object, ensuring that the auxiliary image does not affect the reference object during stitching.
  • Take the image stitching shown in FIG. 4C as an example.
  • The aircraft can compress the auxiliary image according to the position of the reference object in the previous first image to obtain a compressed auxiliary image.
  • The previous first image, the compressed auxiliary image, and the current first image are then stitched to obtain the stitched image.
  • The single compressed auxiliary image shown in FIG. 4C is only an example and does not limit the present invention.
  • There may also be two, three, or more compressed auxiliary images.
  • Take the image stitching shown in FIG. 4D as an example.
  • The aircraft can crop the auxiliary image according to the position of the reference object in the previous first image (in FIG. 4D, the shaded area is the area cropped from the auxiliary image) to obtain a cropped auxiliary image.
  • The previous first image, the cropped auxiliary image, and the current first image are then stitched to obtain the stitched image.
  • The single cropped auxiliary image shown in FIG. 4D is only an example and does not limit the present invention.
  • There may also be two, three, or more cropped auxiliary images.
  • Take the image stitching shown in FIG. 4E as an example.
  • When the second overlap ratio is not in the second ratio interval, the overlap region between the auxiliary image and the previous first image is suitable; that is, it is very unlikely that the auxiliary image covers the reference object contained in the previous first image such that the reference object cannot be fully displayed in the stitched image.
  • The aircraft can then directly stitch the previous first image, the auxiliary image, and the current first image to obtain the stitched image.
  • The two auxiliary images shown in FIG. 4E are only an example and do not limit the present invention; the number of auxiliary images supplemented between the current first image and the previous first image may also be one, three, or more.
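The branching just described (compress or crop the auxiliary frame when it overlaps the previous first image too much, otherwise stitch directly) can be summarized in one decision function. The text does not give a concrete second ratio interval, so the [0.5, 1.0) used below is a hypothetical placeholder.

```python
def plan_auxiliary_use(second_overlap_ratio: float,
                       second_ratio_interval: tuple = (0.5, 1.0)) -> str:
    """Decide how an auxiliary frame enters the stitch: too much overlap
    with the previous first image risks covering the clone, so the frame
    must first be compressed or cropped away from the reference object's
    position; otherwise it can be used as-is.
    The (0.5, 1.0) interval is an assumption for illustration."""
    lo, hi = second_ratio_interval
    if lo <= second_overlap_ratio < hi:
        return "compress_or_crop"
    return "stitch_directly"
```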
  • In this way, image processing efficiency and the quality of the stitched image can be improved; and because the process is completed automatically, user operations are reduced and the convenience of operation is improved.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • The image processing apparatus can be applied to an aircraft, and the image processing apparatus described in this embodiment includes:
  • an identification module 501 configured to identify a reference object through an image processing module of the aircraft;
  • an image acquisition module 502 configured to acquire, through the image processing module, a first image containing the reference object; and
  • an image stitching module 503 configured to stitch each of the first images to obtain a stitched image.
  • The image stitching module 503 is specifically configured to:
  • stitch the previous first image, the auxiliary image, and the current first image to obtain the stitched image.
  • When obtaining an auxiliary image from the second images according to the first overlap ratio, the image stitching module 503 is specifically configured to:
  • When stitching the previous first image, the auxiliary image, and the current first image to obtain the stitched image, the image stitching module 503 is specifically configured to:
  • Optionally, the image processing apparatus in the embodiment of the present invention may further include:
  • a determining module 504 configured to determine that the aircraft is in clone photo mode before the identification module 501 identifies the reference object through the image processing module of the aircraft.
  • The image acquisition module 502 is specifically configured to:
  • acquire, through the image processing module, a first image containing the reference object when a photographing gesture is detected.
  • Optionally, the image processing apparatus in the embodiment of the present invention may further include:
  • a tracking module 505 configured to track and shoot the reference object after the identification module 501 identifies the reference object through the image processing module of the aircraft.
  • In the embodiment of the present invention, the identification module 501 identifies the reference object through the image processing module of the aircraft; the image acquisition module 502 acquires, through the image processing module, first images containing the reference object; and the image stitching module 503 stitches the first images to obtain a stitched image, which can improve image processing efficiency and the convenience of operation.
  • FIG. 6 is a schematic structural diagram of an aircraft according to an embodiment of the present invention.
  • The aircraft described in this embodiment includes a memory 601, a processor 602, and an image processing module 603.
  • The processor 602, the memory 601, and the image processing module 603 are connected by a bus.
  • The processor 602 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The memory 601 may include read-only memory and random access memory, and provides instructions and data to the processor 602.
  • A portion of the memory 601 may also include non-volatile random access memory. Specifically:
  • the memory 601 is configured to store program instructions; and
  • the processor 602 is configured to invoke the program instructions and, when the program instructions are executed, perform the following operations:
  • stitching each of the first images to obtain a stitched image.
  • When the processor 602 stitches each of the first images to obtain a stitched image, it is specifically configured to:
  • stitch the previous first image, the auxiliary image, and the current first image to obtain the stitched image.
  • When the processor 602 obtains an auxiliary image from the second images according to the first overlap ratio, it is specifically configured to:
  • When the processor 602 stitches the previous first image, the auxiliary image, and the current first image to obtain the stitched image, it is specifically configured to:
  • Optionally, the processor 602 is further configured to:
  • When the processor 602 acquires, through the image processing module 603, the first image containing the reference object, it is specifically configured to:
  • acquire, through the image processing module 603, the first image containing the reference object.
  • Optionally, the processor 602 is further configured to:
  • The processor 602 described in the embodiment of the present invention may implement the implementations described in the image processing methods of the embodiments corresponding to FIG. 1 and FIG. 3, and may also implement the implementation of the image processing apparatus described in FIG. 5 of the embodiment of the present invention; details are not repeated here.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

An image processing method, an apparatus, and an aircraft, where the method includes: identifying a reference object through an image processing module of the aircraft; acquiring, through the image processing module, first images containing the reference object; and stitching each of the first images to obtain a stitched image. The present application can improve image processing efficiency and the convenience of operation.

Description

Image processing method, apparatus, and aircraft

Technical Field

The present application relates to the field of aerial photography, and in particular to an image processing method, an apparatus, and an aircraft.

Background

Clone photography is popular among users because it is fun and creative. The principle of clone photography is as follows: the terminal enters panoramic mode; after the terminal captures the first image of the first user, a second user slowly pans the lens until the first user no longer appears in the preview interface; the first user walks behind the terminal (i.e., on the side opposite the lens) to the next shooting scene, and the second user pans the lens to that scene and captures a second image of the first user; the multiple captured images are stitched by the above method to obtain a stitched image, which may contain clones of the first user located in different shooting scenes.

However, clone photography as described above requires two cooperating users, and the user must walk behind the terminal to the next shooting scene, which reduces the convenience of operation. In addition, after the first user poses in each shooting scene, the second user must operate the terminal to capture an image, which is cumbersome and reduces image processing efficiency.

Summary of the Invention

Embodiments of the invention disclose an image processing method, an apparatus, and an aircraft that can improve image processing efficiency and the convenience of operation.
A first aspect of the embodiments of the present invention discloses an image processing method, including:

identifying a reference object through an image processing module of the aircraft;

acquiring, through the image processing module, a first image containing the reference object; and

stitching each of the first images to obtain a stitched image.

A second aspect of the embodiments of the present invention discloses an image processing apparatus, including:

an identification module configured to identify a reference object through an image processing module of the aircraft;

an image acquisition module configured to acquire, through the image processing module, a first image containing the reference object; and

an image stitching module configured to stitch each of the first images to obtain a stitched image.

A third aspect of the embodiments of the present invention discloses an aircraft, including a memory, a processor, and an image processing module;

the memory is configured to store program instructions;

the processor is configured to invoke the program instructions and, when the program instructions are executed, perform the following operations:

identifying a reference object through the image processing module;

acquiring, through the image processing module, a first image containing the reference object; and

stitching each of the first images to obtain a stitched image.

By recognizing the reference object, the embodiments of the invention can efficiently and automatically complete clone photography and image stitching, improving image processing efficiency. Because the photos are taken automatically, operation is convenient, and only one person needs to participate, which helps improve the convenience of operation and the user experience.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present invention;
FIG. 2A is a schematic diagram of a scene in which clone photography is started, disclosed in an embodiment of the present invention;
FIG. 2B is a schematic diagram of a scene in which a reference object is tracked and photographed, disclosed in an embodiment of the present invention;
FIG. 2C is a schematic diagram of a photographing gesture disclosed in an embodiment of the present invention;
FIG. 2D is a schematic diagram of image stitching disclosed in an embodiment of the present invention;
FIG. 2E is a schematic diagram of a stitched image disclosed in an embodiment of the present invention;
FIG. 3 is a schematic flowchart of an image processing method disclosed in another embodiment of the present invention;
FIG. 4A is a schematic diagram of a scene in which a first image does not meet the stitching requirement, disclosed in an embodiment of the present invention;
FIG. 4B is a schematic diagram of a scene in which the previous first image and an auxiliary image overlap, disclosed in an embodiment of the present invention;
FIG. 4C is a schematic diagram of image stitching disclosed in another embodiment of the present invention;
FIG. 4D is a schematic diagram of image stitching disclosed in another embodiment of the present invention;
FIG. 4E is a schematic diagram of image stitching disclosed in another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image processing device disclosed in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an aircraft disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present invention. Specifically, as shown in FIG. 1, the image processing method of this embodiment may include the following steps:
101. Recognize a reference object through an image processing module of the aircraft.
Specifically, the aircraft may recognize the reference object through its image processing module. Taking the tracking-and-shooting scene shown in FIG. 2B as an example, the aircraft may recognize a user, determine that user as the reference object, and then track and photograph the reference object. Specifically, the aircraft may track the reference object using an image matching algorithm, a pedestrian tracking algorithm, or another algorithm.
The image processing module may include a camera device, which may be integrated into the aircraft or externally connected to it. The camera device may be, for example, a camera module or a camera.
Optionally, before recognizing the reference object through the image processing module, the aircraft may determine that it is in clone photography mode. For example, the user may put the aircraft into clone photography mode by tapping a virtual key, or pressing a physical key, on the UAV ground control station that controls the aircraft; or by sending the aircraft a voice message (e.g., "clone photo"); or the aircraft may enter clone photography mode when it captures the user's clone-photography gesture through the image processing module. Further, after determining that it is in clone photography mode, the aircraft may acquire, through the image processing module, a first image containing the reference object.
102. Acquire, through the image processing module, a first image containing the reference object.
Specifically, after recognizing the reference object, the aircraft may capture a first image containing it. The first image is a key frame used in subsequent image stitching, and is the image the user actually wants.
Optionally, before acquiring a first image containing the reference object through the image processing module, the aircraft may check whether a photographing gesture is detected; when a photographing gesture is detected, the aircraft may acquire, through the image processing module, a first image containing the reference object.
For example, when a photographing gesture is detected, the aircraft may capture a first image through the image processing module, and may then stitch multiple first images together to obtain a stitched image.
As another example, when a photographing gesture is detected, the aircraft may capture a first image through the image processing module; starting from the capture of the first image, it captures second images periodically at a preset interval, obtains an auxiliary image from the second images, and stitches the first images and the auxiliary image together to obtain the stitched image.
In one implementation, the photographing gesture may be preset by the aircraft by default or set by the user operating the aircraft. For example, the user may photograph an image containing a gesture through the aircraft's camera assembly and set that gesture as the photographing gesture; alternatively, the user may pick the photographing gesture from candidate gestures provided by the aircraft. In this way, whichever aircraft a user operates, they can configure it to use a photographing gesture they are familiar with, avoiding disruptions caused by forgetting the aircraft's photographing gesture, which helps improve shooting efficiency and user experience.
103. Stitch the first images together to obtain a stitched image.
Specifically, the aircraft may stitch the first images together to obtain the stitched image.
Specifically, there may be two, three, or more first images. It should be noted that the number of first images used for stitching equals the number of clones in the resulting stitched image. In one implementation, the aircraft may stitch the captured first images in the order in which they were taken (i.e., by capture time). For example, if three first images were captured during clone photography, the aircraft may first stitch the first and second first images to obtain an intermediate image, and then stitch the intermediate image with the third first image to obtain the stitched image. Optionally, the aircraft may instead stitch the second and third first images first to obtain an intermediate image, and then stitch the first first image with the intermediate image to obtain the stitched image.
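The sequential, pairwise stitching order described above can be sketched as follows. This is a minimal illustration of the ordering only; `stitch_pair` is a hypothetical placeholder that records the order of operations rather than performing real stitching.

```python
from functools import reduce

def stitch_pair(a, b):
    # Placeholder pairwise stitch: record the order of operations as a label.
    return f"({a}+{b})"

def stitch_in_capture_order(first_images):
    # Stitch the key frames in the order they were captured, pairwise left to right:
    # the first two frames form an intermediate image, which is then stitched
    # with the next frame, and so on.
    return reduce(stitch_pair, first_images)

print(stitch_in_capture_order(["img1", "img2", "img3"]))  # ((img1+img2)+img3)
```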
In the embodiments of the present invention, when the overlap ratio between two adjacent first images falls within the required overlap-ratio range, the aircraft can stitch the two first images directly. Specifically, taking the image stitching diagram shown in FIG. 2D as an example, the overlap ratio between first image a and first image b is 0.2, and the overlap ratio between first image b and first image c is 0.3. When the overlap-ratio range is 0.2 to 0.3 (inclusive), the aircraft can stitch first image a, first image b, and first image c directly; the left shaded area is the overlap region corresponding to the overlap ratio between first image a and first image b, and the right shaded area is the overlap region corresponding to the overlap ratio between first image b and first image c. By ensuring that the overlap ratios between the first images used in the stitched image meet the stitching requirement, the aircraft matches feature points in these overlap regions, performs bundle adjustment (BA) optimization to make the relative positions of the first images more accurate, applies exposure compensation to the first images to be stitched, finds the seam lines, and finally warps and projects them into a single stitched image (as shown in FIG. 2E). This improves the image quality of the stitched image. It should be noted that the overlap-ratio range may be 0.2 to 0.3 or another range; the embodiments of the present invention do not limit this.
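The overlap-ratio check and a naive paste-based composite can be sketched as follows, assuming equal-sized grayscale frames and purely horizontal offsets. The function names, the 0.2 to 0.3 range, and the placeholder blending (the later frame simply overwrites the overlap) are illustrative assumptions, not the actual pipeline, which matches feature points, runs BA optimization, compensates exposure, and finds seams.

```python
import numpy as np

OVERLAP_MIN, OVERLAP_MAX = 0.2, 0.3  # illustrative stitching requirement

def overlap_ratio(width, offset):
    # Overlap ratio of two equal-width frames whose left edges are `offset` pixels apart.
    return max(0.0, (width - offset) / width)

def can_stitch_directly(width, offsets):
    # Direct stitching is allowed only if every adjacent pair meets the requirement.
    return all(OVERLAP_MIN <= overlap_ratio(width, o) <= OVERLAP_MAX for o in offsets)

def naive_stitch(frames, offsets):
    # Paste equal-sized frames left to right; the later frame overwrites the overlap.
    h, w = frames[0].shape[:2]
    canvas = np.zeros((h, sum(offsets) + w), dtype=frames[0].dtype)
    x = 0
    for frame, off in zip(frames, [0] + offsets):
        x += off
        canvas[:, x:x + w] = frame
    return canvas
```

With frames 100 px wide and offsets of 80 px and 70 px, the adjacent overlap ratios are 0.2 and 0.3, matching the FIG. 2D example, so direct stitching is allowed.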
By recognizing and tracking the reference object, the embodiments of the present invention can complete clone photography and image stitching automatically and efficiently, thereby improving image processing efficiency. Because clone photography is completed automatically, operation is simple and only one person needs to take part, which improves convenience and user experience.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present invention. Specifically, as shown in FIG. 3, this image processing method may include the following steps:
301. When a recognition gesture is detected, recognize a reference object through the image processing module of the aircraft.
Specifically, when the aircraft is in clone photography mode, it may check whether a recognition gesture is detected; when a recognition gesture is detected, it may recognize the reference object through the image processing module of the aircraft. The recognition gesture is used to start clone photography. Taking the scene shown in FIG. 2A as an example, when the aircraft detects the recognition gesture (both hands raised), it may start clone photography. Taking the tracking-and-shooting scene shown in FIG. 2B as an example, when the aircraft detects the recognition gesture, it may also recognize the user who made the gesture, determine that user as the reference object, track and photograph the reference object, and wait to receive an instruction to capture an image containing the reference object. Specifically, the aircraft may track the reference object using an image matching algorithm, a pedestrian tracking algorithm, or another algorithm. It should be noted that a clone is the subject in each image left by the reference object at a different position, i.e., the reference object itself. It should also be noted that the recognition gesture shown in FIG. 2A is only an example and does not limit the present invention; the recognition gesture may also be another gesture such as raising one hand or forming a heart shape.
In one implementation, the recognition gesture may be preset by the aircraft by default or set by the user operating the aircraft. For example, the user may photograph an image containing a gesture through the aircraft's camera assembly and set that gesture as the recognition gesture; alternatively, the user may pick the recognition gesture from candidate gestures provided by the aircraft. In this way, whichever aircraft a user operates, they can configure it to use a recognition gesture they are familiar with, avoiding disruptions caused by forgetting the aircraft's recognition gesture, which helps improve shooting efficiency and user experience.
Optionally, before step 301, the aircraft may also capture a gesture image and compare the gesture it contains with the recognition gesture; when the similarity between the gesture in the gesture image and the recognition gesture is greater than a preset ratio threshold, the aircraft determines that the recognition gesture is detected.
In one implementation, after capturing the gesture image, the aircraft may use a gesture detection algorithm to measure the similarity between the gesture in the gesture image and the recognition gesture. Specifically, the aircraft transforms the edge images of the gesture in the gesture image and of the recognition gesture into Euclidean distance space and computes their Hausdorff distance or modified Hausdorff distance; this distance value represents the similarity between the two gestures. It should be noted that the aircraft may also use other recognition algorithms to measure this similarity; the embodiments of the present invention do not limit this. It should also be noted that the preset ratio threshold may be a fixed value, or a variable value that changes with the actual situation. For example, when the aircraft and the reference object are in a scene favorable to gesture recognition, such as a well-lit outdoor setting, the preset ratio threshold may be 0.8, 0.9, 0.95, or another relatively high value. Conversely, when they are in a scene unfavorable to gesture recognition, such as a dim indoor setting, the preset ratio threshold may be 0.6, 0.7, 0.75, or another relatively low value. The embodiments of the present invention do not limit the preset ratio threshold.
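The modified Hausdorff comparison can be sketched as follows, assuming the gesture and template edges have already been reduced to 2-D point sets. The helper `gesture_matches`, the distance scale, and the mapping from distance to a similarity score are illustrative assumptions, not the patent's formula.

```python
import numpy as np

def modified_hausdorff(a, b):
    # Modified Hausdorff distance between two 2-D edge point sets, shapes (N, 2), (M, 2):
    # the larger of the two mean nearest-neighbor distances.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # all pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

def gesture_matches(gesture_pts, template_pts, scale=10.0, threshold=0.8):
    # Map distance to a similarity in (0, 1]; `scale` and `threshold` are illustrative,
    # and the threshold could be loosened in dim scenes as the text describes.
    similarity = 1.0 / (1.0 + modified_hausdorff(gesture_pts, template_pts) / scale)
    return similarity > threshold
```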
302. When a photographing gesture is detected, capture a first image containing the reference object.
Specifically, when the aircraft detects a photographing gesture, it may capture a first image containing the reference object; the photographing gesture is used to trigger the capture. Taking the photographing gesture shown in FIG. 2C as an example, when the aircraft detects the photographing gesture, it may send a shooting instruction to the image processing module of the aircraft, instructing the camera assembly to capture a first image containing the reference object. The first image is a key frame used in subsequent image stitching, and is the image the user actually wants.
In one implementation, the photographing gesture may be preset by the aircraft by default or set by the user operating the aircraft. For example, the user may photograph an image containing a gesture through the aircraft's camera assembly and set that gesture as the photographing gesture; alternatively, the user may pick the photographing gesture from candidate gestures provided by the aircraft. In this way, whichever aircraft a user operates, they can configure it to use a photographing gesture they are familiar with, avoiding disruptions caused by forgetting the aircraft's photographing gesture, which helps improve shooting efficiency and user experience.
It should be noted that the photographing gesture and the recognition gesture are different gestures, so that the aircraft does not mistake the photographing gesture for the recognition gesture and fail to shoot normally. It should also be noted that the photographing gesture shown in FIG. 2C is only an example and does not limit the present invention; in other embodiments, the photographing gesture may also be another gesture such as crossing both hands or holding both arms out level.
303. Starting from the capture of the first image, capture second images periodically at a preset interval.
In the embodiments of the present invention, after the user triggers clone photography with the recognition gesture, they can record a key frame (a first image) with the photographing gesture, or walk around the aircraft to the current shooting point where they want to record their next clone. During this process, the current shooting point may end up far from the shooting point of the previous first image, so that the first image captured at the current point and the previous first image do not meet the stitching requirement and cannot be stitched. Taking the scene shown in FIG. 4A as an example, the overlap ratio between the first image captured at the current shooting point and the previous first image is zero, i.e., there is no overlap region; unless images are supplemented in between, stitching will be impossible.
To solve the problem that the overlap ratio between the current first image and the previous first image (i.e., two adjacent first images) does not meet the stitching requirement, the embodiments of the present invention disclose that, starting from the capture of a first image, the aircraft may capture second images periodically at a preset interval. When the overlap ratio between the two first images adjacent to a second image does not meet the stitching requirement, a second image whose overlap ratios with those two first images do meet the requirement is selected as a transition image. It should be noted that the preset interval may be a fixed value, or a variable value that changes with the actual situation. Specifically, the aircraft may determine the preset interval by measuring the reference object's average movement speed. For example, when the reference object moves quickly, so that adjacent first images are likely to be shot far apart, the aircraft may set a short interval, such as 0.1 s or 0.2 s, to capture more second images for later use as transition images. Conversely, when the reference object moves slowly, so that adjacent first images are unlikely to be shot far apart, the aircraft may set a longer interval, such as 0.5 s or 0.6 s, to avoid capturing useless second images; this improves storage utilization and reduces resource consumption. It should also be noted that the number of second images captured between two adjacent first images may be one, two, three, or more; specifically, it depends on the preset interval.
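The speed-dependent choice of capture period described above can be sketched as follows; every numeric value (the two periods and the speed threshold) is an illustrative assumption, not a value from the patent.

```python
def capture_period(avg_speed_mps, fast_period=0.2, slow_period=0.5, speed_threshold=1.0):
    # A fast-moving subject gets a short period (more candidate transition frames);
    # a slow-moving subject gets a long period (fewer useless second images).
    return fast_period if avg_speed_mps > speed_threshold else slow_period

def second_image_times(t_start, t_end, period):
    # Timestamps at which second images would be captured strictly between two key frames.
    times, t = [], t_start + period
    while t < t_end:
        times.append(round(t, 6))
        t += period
    return times
```

For example, a 1-second gap between key frames with a 0.2 s period yields four second images, any of which could later serve as a transition frame.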
304. Obtain a first overlap ratio between the current first image and the previous first image.
Specifically, the aircraft obtains the first overlap ratio between the current first image and the previous first image, and uses it to decide automatically whether to supplement auxiliary images (auxiliary frames). It should be noted that the first overlap ratio is greater than or equal to zero and less than 1.
305. Obtain an auxiliary image from the second images according to the first overlap ratio.
Specifically, when the first overlap ratio is within a first ratio range, the aircraft obtains at least one auxiliary image from the second images, where the capture time of the at least one auxiliary image is later than the capture time of the previous first image and earlier than the capture time of the current first image. The first ratio range does not intersect the overlap-ratio range; for example, when the overlap-ratio range is 0.2 to 0.3 (inclusive), the first ratio range may be 0 (inclusive) to 0.2 (exclusive), 0.1 to 0.15, or another range, which the embodiments of the present invention do not limit. A first overlap ratio within the first ratio range indicates that the overlap between the current first image and the previous first image does not meet the stitching requirement, so they cannot be stitched directly and auxiliary images must be added to the stitching process.
In the embodiments of the present invention, the auxiliary image serves as a transition image that makes stitching the current first image and the previous first image possible. From the user's point of view, the auxiliary image is not an image they want, and the user is unaware of whether auxiliary images were used during stitching. By automatically supplementing auxiliary images as transitions between first images, image processing efficiency and stitched-image quality can be improved.
It should be noted that the number of auxiliary images supplemented between the current first image and the previous first image may be one, two, three, or more, but it is at most the number of second images captured between them; specifically, it depends on the overlap ratios between the first and second images and on the capture period of the second images.
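The selection rule above (supplement auxiliary frames only when the first overlap ratio falls in the first ratio range, with capture times strictly between the two key frames) can be sketched as follows; the range bounds and the data layout are illustrative assumptions.

```python
def select_auxiliary(second_images, t_prev, t_curr, first_overlap,
                     range_lo=0.0, range_hi=0.2):
    # Supplement auxiliary frames only when the first overlap ratio falls in the
    # "cannot stitch directly" first ratio range [range_lo, range_hi).
    if not (range_lo <= first_overlap < range_hi):
        return []
    # Candidates must be captured strictly between the previous and current key frames.
    # `second_images` is assumed to be a list of (timestamp, frame_id) pairs.
    return [frame for t, frame in second_images if t_prev < t < t_curr]
```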
306. Stitch the previous first image, the auxiliary image, and the current first image to obtain the stitched image.
Specifically, the aircraft may obtain a second overlap ratio between the previous first image and the auxiliary image; when the second overlap ratio is within a second ratio interval, obtain the position of the reference object in the previous first image, compress the auxiliary image according to that position to obtain a compressed auxiliary image, and stitch the previous first image, the compressed auxiliary image, and the current first image together to obtain the stitched image.
Specifically, take the overlapping scene shown in FIG. 4B as an example. A second overlap ratio within the second ratio interval indicates that the overlap region between the auxiliary image and the previous first image is too large: the auxiliary image would very likely cover the reference object contained in the previous first image, preventing the reference object from being fully displayed in the stitched image. It is therefore necessary to obtain the reference object's position in the previous first image and then reduce the overlap region between the auxiliary image and the previous first image so that the overlap region no longer contains the reference object; the auxiliary image then does not affect the display of the reference object in the stitched image.
Specifically, take the image stitching diagram shown in FIG. 4C as an example. The aircraft may compress the auxiliary image according to the reference object's position in the previous first image to obtain a compressed auxiliary image, and stitch the previous first image, the compressed auxiliary image, and the current first image (not shown in FIG. 4C) together to obtain the stitched image. It should be noted that using one compressed auxiliary image, as shown in FIG. 4C, is only an example and does not limit the present invention; in other embodiments, two, three, or more compressed auxiliary images may be used.
Optionally, take the image stitching diagram shown in FIG. 4D as an example. The aircraft may crop the auxiliary image according to the reference object's position in the previous first image (in FIG. 4D, the shaded area is the cropped-away region of the auxiliary image) to obtain a cropped auxiliary image, and stitch the previous first image, the cropped auxiliary image, and the current first image (not shown in FIG. 4D) together to obtain the stitched image. It should be noted that using one cropped auxiliary image, as shown in FIG. 4D, is only an example and does not limit the present invention; in other embodiments, two, three, or more cropped auxiliary images may be used.
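The amount to trim can be derived in one dimension as follows, assuming the previous first image and the auxiliary image overlap along the x axis; `crop_for_reference` is a hypothetical helper, not a function from the patent.

```python
def crop_for_reference(prev_width, overlap_px, ref_right_edge):
    # Pixels to trim from the auxiliary image's left edge so that the overlap region
    # [prev_width - overlap_px, prev_width] of the previous first image no longer
    # reaches the reference object, whose right edge sits at x = ref_right_edge.
    overlap_start = prev_width - overlap_px
    return max(0, ref_right_edge - overlap_start)
```

For a 100 px wide previous image with a 40 px overlap and a subject ending at x = 75, trimming 15 px shrinks the overlap to 25 px, leaving the subject uncovered.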
In one implementation, take the image stitching diagram shown in FIG. 4E as an example. When the second overlap ratio is not within the second ratio interval, the overlap region between the auxiliary image and the previous first image is appropriate: the auxiliary image is very unlikely to cover the reference object contained in the previous first image and prevent it from being fully displayed in the stitched image. The aircraft can then directly stitch the previous first image, the auxiliary image, and the current first image together to obtain the stitched image. It should be noted that the two auxiliary images shown in FIG. 4E are only an example and do not limit the present invention; in other embodiments, the number of auxiliary images supplemented between the current first image and the previous first image may also be one, three, or more.
By automatically supplementing auxiliary images as transitions between first images, the embodiments of the present invention can improve image processing efficiency and stitched-image quality; and because this is done automatically, it also reduces user operations and improves convenience.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an image processing device provided by an embodiment of the present invention. The image processing device may be applied to an aircraft. The image processing device described in this embodiment includes:
a recognition module 501, configured to recognize a reference object through an image processing module of the aircraft;
an image acquisition module 502, configured to acquire, through the image processing module, first images containing the reference object; and
an image stitching module 503, configured to stitch the first images together to obtain a stitched image.
Optionally, the image stitching module 503 is specifically configured to:
starting from the capture of the first image, capture second images periodically at a preset interval;
obtain a first overlap ratio between the current first image and the previous first image;
obtain an auxiliary image from the second images according to the first overlap ratio; and
stitch the previous first image, the auxiliary image, and the current first image together to obtain the stitched image.
Optionally, in obtaining an auxiliary image from the second images according to the first overlap ratio, the image stitching module 503 is specifically configured to:
when the first overlap ratio is within a first ratio range, obtain at least one auxiliary image from the second images, where the capture time of the at least one auxiliary image is later than the capture time of the previous first image and earlier than the capture time of the current first image.
Optionally, in stitching the previous first image, the auxiliary image, and the current first image to obtain the stitched image, the image stitching module 503 is specifically configured to:
obtain a second overlap ratio between the previous first image and the auxiliary image;
when the second overlap ratio is within a second ratio interval, obtain the position of the reference object in the previous first image;
compress the auxiliary image according to the position of the reference object in the previous first image, to obtain a compressed auxiliary image; and
stitch the previous first image, the compressed auxiliary image, and the current first image together to obtain the stitched image.
Optionally, the image processing device in this embodiment of the present invention may further include:
a determination module 504, configured to determine that the aircraft is in clone photography mode before the recognition module 501 recognizes the reference object through the image processing module of the aircraft.
Optionally, the image acquisition module 502 is specifically configured to:
when a photographing gesture is detected, acquire, through the image processing module, a first image containing the reference object.
Optionally, the image processing device in this embodiment of the present invention may further include:
a tracking module 505, configured to track and photograph the reference object after the recognition module 501 recognizes it through the image processing module of the aircraft.
In this embodiment of the present invention, the recognition module 501 recognizes the reference object through the image processing module of the aircraft, the image acquisition module 502 acquires first images containing the reference object through the image processing module, and the image stitching module 503 stitches the first images together to obtain a stitched image, which can improve image processing efficiency and operational convenience.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an aircraft provided by an embodiment of the present invention. The aircraft described in this embodiment includes a memory 601, a processor 602, and an image processing module 603, connected by a bus.
The processor 602 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.
The memory 601 may include read-only memory and random access memory, and provides instructions and data to the processor 602. A portion of the memory 601 may also include non-volatile random access memory. Specifically:
the memory 601 is configured to store program instructions; and
the processor 602 is configured to call the program instructions and, when they are executed, to perform the following operations:
recognizing a reference object through the image processing module 603;
acquiring, through the image processing module 603, first images containing the reference object; and
stitching the first images together to obtain a stitched image.
Optionally, in stitching the first images together to obtain a stitched image, the processor 602 is specifically configured to:
starting from the capture of the first image, capture second images periodically at a preset interval;
obtain a first overlap ratio between the current first image and the previous first image;
obtain an auxiliary image from the second images according to the first overlap ratio; and
stitch the previous first image, the auxiliary image, and the current first image together to obtain the stitched image.
Optionally, in obtaining an auxiliary image from the second images according to the first overlap ratio, the processor 602 is specifically configured to:
when the first overlap ratio is within a first ratio range, obtain at least one auxiliary image from the second images, where the capture time of the at least one auxiliary image is later than the capture time of the previous first image and earlier than the capture time of the current first image.
Optionally, in stitching the previous first image, the auxiliary image, and the current first image to obtain the stitched image, the processor 602 is specifically configured to:
obtain a second overlap ratio between the previous first image and the auxiliary image;
when the second overlap ratio is within a second ratio interval, obtain the position of the reference object in the previous first image;
compress the auxiliary image according to the position of the reference object in the previous first image, to obtain a compressed auxiliary image; and
stitch the previous first image, the compressed auxiliary image, and the current first image together to obtain the stitched image.
Optionally, before recognizing the reference object through the image processing module 603, the processor 602 is further configured to:
determine that the aircraft is in clone photography mode.
Optionally, in acquiring a first image containing the reference object through the image processing module 603, the processor 602 is specifically configured to:
when a photographing gesture is detected, acquire, through the image processing module 603, a first image containing the reference object.
Optionally, after recognizing the reference object through the image processing module 603, the processor 602 is further configured to:
track and photograph the reference object.
In specific implementations, the processor 602 described in this embodiment of the present invention may carry out the implementations of the image processing methods described with reference to FIGS. 1 and 3, and may also carry out the implementation of the image processing device described with reference to FIG. 5; details are not repeated here.
It should be noted that, for brevity of description, each of the foregoing method embodiments is presented as a series of action combinations; however, a person skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, a person skilled in the art should also understand that the embodiments described in this specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
A person of ordinary skill in the art can understand that all or part of the steps of the methods in the foregoing embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The image processing method and device and the aircraft provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (21)

  1. An image processing method, wherein the method is applied to an aircraft and comprises:
    recognizing a reference object through an image processing module of the aircraft;
    acquiring, through the image processing module, first images containing the reference object; and
    stitching the first images together to obtain a stitched image.
  2. The method according to claim 1, wherein the stitching the first images together to obtain a stitched image comprises:
    starting from the capture of the first image, capturing second images periodically at a preset interval;
    obtaining a first overlap ratio between the current first image and the previous first image;
    obtaining an auxiliary image from the second images according to the first overlap ratio; and
    stitching the previous first image, the auxiliary image, and the current first image together to obtain the stitched image.
  3. The method according to claim 2, wherein the obtaining an auxiliary image from the second images according to the first overlap ratio comprises:
    when the first overlap ratio is within a first ratio range, obtaining at least one auxiliary image from the second images, wherein the capture time of the at least one auxiliary image is later than the capture time of the previous first image and earlier than the capture time of the current first image.
  4. The method according to claim 2, wherein the stitching the previous first image, the auxiliary image, and the current first image to obtain the stitched image comprises:
    obtaining a second overlap ratio between the previous first image and the auxiliary image;
    when the second overlap ratio is within a second ratio interval, obtaining the position of the reference object in the previous first image;
    compressing the auxiliary image according to the position of the reference object in the previous first image, to obtain a compressed auxiliary image; and
    stitching the previous first image, the compressed auxiliary image, and the current first image together to obtain the stitched image.
  5. The method according to claim 1, wherein before the recognizing a reference object through an image processing module of the aircraft, the method further comprises:
    determining that the aircraft is in clone photography mode.
  6. The method according to claim 1, wherein the acquiring, through the image processing module, a first image containing the reference object comprises:
    when a photographing gesture is detected, acquiring, through the image processing module, a first image containing the reference object.
  7. The method according to claim 1, wherein after the recognizing a reference object through an image processing module of the aircraft, the method further comprises:
    tracking and photographing the reference object.
  8. An image processing device, wherein the device is applied to an aircraft and comprises:
    a recognition module, configured to recognize a reference object through an image processing module of the aircraft;
    an image acquisition module, configured to acquire, through the image processing module, first images containing the reference object; and
    an image stitching module, configured to stitch the first images together to obtain a stitched image.
  9. The device according to claim 8, wherein the image stitching module is specifically configured to:
    starting from the capture of the first image, capture second images periodically at a preset interval;
    obtain a first overlap ratio between the current first image and the previous first image;
    obtain an auxiliary image from the second images according to the first overlap ratio; and
    stitch the previous first image, the auxiliary image, and the current first image together to obtain the stitched image.
  10. The device according to claim 9, wherein in obtaining an auxiliary image from the second images according to the first overlap ratio, the image stitching module is specifically configured to:
    when the first overlap ratio is within a first ratio range, obtain at least one auxiliary image from the second images, wherein the capture time of the at least one auxiliary image is later than the capture time of the previous first image and earlier than the capture time of the current first image.
  11. The device according to claim 9, wherein in stitching the previous first image, the auxiliary image, and the current first image to obtain the stitched image, the image stitching module is specifically configured to:
    obtain a second overlap ratio between the previous first image and the auxiliary image;
    when the second overlap ratio is within a second ratio interval, obtain the position of the reference object in the previous first image;
    compress the auxiliary image according to the position of the reference object in the previous first image, to obtain a compressed auxiliary image; and
    stitch the previous first image, the compressed auxiliary image, and the current first image together to obtain the stitched image.
  12. The device according to claim 8, wherein the device further comprises:
    a determination module, configured to determine that the aircraft is in clone photography mode before the recognition module recognizes the reference object through the image processing module.
  13. The device according to claim 8, wherein the image acquisition module is specifically configured to:
    when a photographing gesture is detected, acquire, through the image processing module, a first image containing the reference object.
  14. The device according to claim 8, wherein the device further comprises:
    a tracking module, configured to track and photograph the reference object after the recognition module recognizes the reference object through the image processing module.
  15. An aircraft, comprising a memory, a processor, and an image processing module;
    the memory is configured to store program instructions; and
    the processor is configured to call the program instructions and, when they are executed, to perform the following operations:
    recognizing a reference object through the image processing module;
    acquiring, through the image processing module, first images containing the reference object; and
    stitching the first images together to obtain a stitched image.
  16. The aircraft according to claim 15, wherein in stitching the first images together to obtain a stitched image, the processor is specifically configured to:
    starting from the capture of the first image, capture second images periodically at a preset interval;
    obtain a first overlap ratio between the current first image and the previous first image;
    obtain an auxiliary image from the second images according to the first overlap ratio; and
    stitch the previous first image, the auxiliary image, and the current first image together to obtain the stitched image.
  17. The aircraft according to claim 16, wherein in obtaining an auxiliary image from the second images according to the first overlap ratio, the processor is specifically configured to:
    when the first overlap ratio is within a first ratio range, obtain at least one auxiliary image from the second images, wherein the capture time of the at least one auxiliary image is later than the capture time of the previous first image and earlier than the capture time of the current first image.
  18. The aircraft according to claim 16, wherein in stitching the previous first image, the auxiliary image, and the current first image to obtain the stitched image, the processor is specifically configured to:
    obtain a second overlap ratio between the previous first image and the auxiliary image;
    when the second overlap ratio is within a second ratio interval, obtain the position of the reference object in the previous first image;
    compress the auxiliary image according to the position of the reference object in the previous first image, to obtain a compressed auxiliary image; and
    stitch the previous first image, the compressed auxiliary image, and the current first image together to obtain the stitched image.
  19. The aircraft according to claim 15, wherein before recognizing the reference object through the image processing module, the processor is further configured to:
    determine that the aircraft is in clone photography mode.
  20. The aircraft according to claim 15, wherein in acquiring a first image containing the reference object through the image processing module, the processor is specifically configured to:
    when a photographing gesture is detected, acquire, through the image processing module, a first image containing the reference object.
  21. The aircraft according to claim 15, wherein after recognizing the reference object through the image processing module, the processor is further configured to:
    track and photograph the reference object.
PCT/CN2017/108528 2017-10-31 2017-10-31 一种图像处理方法、装置及飞行器 WO2019084756A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780012764.4A CN108780568A (zh) 2017-10-31 2017-10-31 一种图像处理方法、装置及飞行器
PCT/CN2017/108528 WO2019084756A1 (zh) 2017-10-31 2017-10-31 一种图像处理方法、装置及飞行器

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/108528 WO2019084756A1 (zh) 2017-10-31 2017-10-31 一种图像处理方法、装置及飞行器

Publications (1)

Publication Number Publication Date
WO2019084756A1 true WO2019084756A1 (zh) 2019-05-09

Family

ID=64034048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108528 WO2019084756A1 (zh) 2017-10-31 2017-10-31 一种图像处理方法、装置及飞行器

Country Status (2)

Country Link
CN (1) CN108780568A (zh)
WO (1) WO2019084756A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114554280A (zh) * 2022-01-14 2022-05-27 影石创新科技股份有限公司 影分身视频的生成方法、生成装置、电子设备及存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751215B (zh) * 2019-10-21 2020-10-27 腾讯科技(深圳)有限公司 一种图像识别方法、装置、设备、系统及介质
CN114245006B (zh) * 2021-11-30 2023-05-23 联想(北京)有限公司 一种处理方法、装置及系统
CN116610905B (zh) * 2023-07-20 2023-09-22 中国空气动力研究与发展中心计算空气动力研究所 一种基于各向异性尺度修正的反距离权重数据插值方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105763815A (zh) * 2016-05-05 2016-07-13 胡央 一种自动调整拍摄间隔的摄像设备及其控制方法
CN106029501A (zh) * 2014-12-23 2016-10-12 深圳市大疆创新科技有限公司 Uav全景成像
CN106056075A (zh) * 2016-05-27 2016-10-26 广东亿迅科技有限公司 基于无人机的社区网格化中重点人员识别及跟踪系统
CN106981048A (zh) * 2017-03-31 2017-07-25 联想(北京)有限公司 一种图片处理方法和装置
US20170300742A1 (en) * 2016-04-14 2017-10-19 Qualcomm Incorporated Systems and methods for recognizing an object in an image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203852839U (zh) * 2014-04-08 2014-10-01 宣文彬 可变形玩偶
CN105046909A (zh) * 2015-06-17 2015-11-11 中国计量学院 一种基于小型无人机的农业辅助定损方法
CN105352509B (zh) * 2015-10-27 2018-05-11 武汉大学 地理信息时空约束下的无人机运动目标跟踪与定位方法
CN105554373A (zh) * 2015-11-20 2016-05-04 宇龙计算机通信科技(深圳)有限公司 一种拍照处理的方法、装置以及终端
CN105912980B (zh) * 2016-03-31 2019-08-30 深圳奥比中光科技有限公司 无人机以及无人机系统
CN107025647B (zh) * 2017-03-09 2020-02-28 中国科学院自动化研究所 图像篡改取证方法及装置
CN106970393B (zh) * 2017-03-14 2019-12-03 南京航空航天大学 一种基于码分多址的面阵激光雷达三维成像方法
CN107295272A (zh) * 2017-05-10 2017-10-24 深圳市金立通信设备有限公司 一种图像处理的方法及终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106029501A (zh) * 2014-12-23 2016-10-12 深圳市大疆创新科技有限公司 Uav全景成像
US20170300742A1 (en) * 2016-04-14 2017-10-19 Qualcomm Incorporated Systems and methods for recognizing an object in an image
CN105763815A (zh) * 2016-05-05 2016-07-13 胡央 一种自动调整拍摄间隔的摄像设备及其控制方法
CN106056075A (zh) * 2016-05-27 2016-10-26 广东亿迅科技有限公司 基于无人机的社区网格化中重点人员识别及跟踪系统
CN106981048A (zh) * 2017-03-31 2017-07-25 联想(北京)有限公司 一种图片处理方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114554280A (zh) * 2022-01-14 2022-05-27 影石创新科技股份有限公司 影分身视频的生成方法、生成装置、电子设备及存储介质
CN114554280B (zh) * 2022-01-14 2024-03-19 影石创新科技股份有限公司 影分身视频的生成方法、生成装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN108780568A (zh) 2018-11-09

Similar Documents

Publication Publication Date Title
CN108933899B (zh) 全景拍摄方法、装置、终端及计算机可读存储介质
WO2020038109A1 (zh) 拍照方法、装置、终端及计算机可读存储介质
US8115816B2 (en) Image capturing method, control method therefor, and program
WO2019084756A1 (zh) 一种图像处理方法、装置及飞行器
US9300858B2 (en) Control device and storage medium for controlling capture of images
JP2019212312A (ja) ビデオシーケンスのフレームを選択する方法、システム及び装置
CN103685940A (zh) 一种通过表情识别拍摄照片的方法
CN107395957B (zh) 拍照方法、装置、存储介质及电子设备
JP5293206B2 (ja) 画像検索装置、画像検索方法及びプログラム
US8411159B2 (en) Method of detecting specific object region and digital camera
WO2021169686A1 (zh) 一种拍摄控制方法、装置及计算机可读存储介质
JP2010177894A (ja) 撮像装置、画像管理装置及び画像管理方法、並びにコンピューター・プログラム
WO2019214574A1 (zh) 图像拍摄方法、装置及电子终端
US9888176B2 (en) Video apparatus and photography method thereof
US20140362275A1 (en) Autofocus
CN113840070B (zh) 拍摄方法、装置、电子设备及介质
US20150138309A1 (en) Photographing device and stitching method of captured image
JP2005045600A (ja) 画像撮影装置およびプログラム
US8571404B2 (en) Digital photographing apparatus, method of controlling the same, and a computer-readable medium storing program to execute the method
JP6270578B2 (ja) 撮像装置、撮像装置の制御方法及びプログラム
JP5073602B2 (ja) 撮像装置および撮像装置の制御方法
JP2019186791A (ja) 撮像装置、撮像装置の制御方法、および制御プログラム
EP3304551B1 (en) Adjusting length of living images
JP5044472B2 (ja) 画像処理装置、撮像装置、画像処理方法及びプログラム
CN106488128B (zh) 一种自动拍照的方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17930299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17930299

Country of ref document: EP

Kind code of ref document: A1