CN117859103A - Composition processing method, device, system and storage medium - Google Patents

Composition processing method, device, system and storage medium

Info

Publication number
CN117859103A
CN117859103A
Authority
CN
China
Prior art keywords
target
contour
profile
target profile
imaging device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180101678.7A
Other languages
Chinese (zh)
Inventor
许望
蒋梦瑶
李欣宇
周梓航
翁松伟
付洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN117859103A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions

Abstract

A composition processing method, apparatus, system and storage medium. The method comprises: displaying, in a display interface, a picture acquired by an imaging device carried on an unmanned aerial vehicle; displaying a selected target contour and a preset target contour in the display interface, wherein the selected target contour is the contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is located at a fixed position on the display interface; and, according to the difference between the preset target contour and the selected target contour, controlling the unmanned aerial vehicle to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour. The embodiments achieve an automatic, precise composition effect.

Description

Composition processing method, device, system and storage medium
Technical Field
The application relates to the technical field of unmanned aerial vehicle aerial photography, in particular to a composition processing method, a composition processing device, a composition processing system and a storage medium.
Background
In an aerial photography scene, in order to shoot a target object with a desired composition, a user needs to manually operate a remote controller to control an unmanned aerial vehicle (UAV) to fly to a suitable position and to adjust the imaging device carried on the unmanned aerial vehicle until the position and angle of the target object in the picture meet the composition requirement; only then can shooting be performed. With the development of UAV control technology, an unmanned aerial vehicle can fly to a designated place automatically and shoot. However, after the designated place is reached, if the user wants the target to appear in the picture at exactly the desired position and angle, further manual adjustment is still required. Affected by many factors, manual operation often cannot achieve a very fine result, and reaching the expected composition may require the user to operate the remote controller repeatedly to control the unmanned aerial vehicle or the imaging device. This is not only cumbersome to operate, but also increases the time cost and the power consumption of the unmanned aerial vehicle, affecting its endurance. Therefore, existing UAV shooting methods cannot achieve automatic, precise composition.
Disclosure of Invention
In view of this, it is an object of the present application to provide a composition processing method, apparatus, system, and storage medium.
In a first aspect, an embodiment of the present application provides a composition processing method, including:
displaying, in a display interface, a picture acquired by an imaging device carried on the unmanned aerial vehicle; and
displaying a selected target contour and a preset target contour in the display interface, wherein the selected target contour is a contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is positioned at a fixed position on the display interface;
controlling, according to the difference between the preset target contour and the selected target contour, the unmanned aerial vehicle to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour.
In a second aspect, an embodiment of the present application provides a composition processing apparatus, including:
a memory for storing executable instructions;
one or more processors;
a display for displaying, in a display interface, a picture acquired by an imaging device carried on the unmanned aerial vehicle, and for displaying a selected target contour and a preset target contour, wherein the selected target contour is the contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is located at a fixed position on the display interface;
Wherein the one or more processors, when executing the executable instructions, are individually or collectively configured to perform the method of any one of the first aspects.
In a third aspect, an embodiment of the present application provides a composition processing system, including the composition processing apparatus according to the second aspect and an unmanned aerial vehicle on which an imaging apparatus is mounted; the composition processing device is in communication connection with the unmanned aerial vehicle.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing executable instructions that when executed by a processor implement a method according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the first aspects.
The embodiment of the application provides a composition processing method, which can display, in an interactive display interface, a picture acquired by an imaging device carried on an unmanned aerial vehicle for a user to view, and can display a selected target contour and a preset target contour in the display interface. The selected target contour is the contour of a target object selected in the picture; the preset target contour has the same shape as the selected target contour and is located at a fixed position on the display interface, where the fixed position indicates the display position in the picture at which the user expects the target object to appear. The unmanned aerial vehicle can then be controlled, according to the difference between the preset target contour and the selected target contour, to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour. An automatic, precise composition effect is thus achieved. Throughout this process the user does not need to operate the unmanned aerial vehicle manually; operation is simple, repeated operation of the unmanned aerial vehicle by the user is avoided, and time and power consumption are saved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a composition processing system according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a first composition processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of fixed positions on a display interface according to an embodiment of the present application;
FIG. 5A is a schematic diagram of indication information of fixed positions displayed superimposed on the picture according to an embodiment of the present application;
FIG. 5B is a schematic diagram showing a preset target contour and a selected target contour according to an embodiment of the present application;
FIG. 5C is a schematic diagram of the contour of the target object coinciding with the preset target contour according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another selected target contour and preset target contour according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of another composition processing method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a composition processing device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Aiming at the problem that UAV shooting methods in the related art cannot achieve automatic, precise composition, the embodiment of the application provides a composition processing method. A picture acquired by an imaging device carried on an unmanned aerial vehicle can be displayed in a display interface for a user to view, and a selected target contour and a preset target contour can be displayed in the display interface. The selected target contour is the contour of a target object selected in the picture; the preset target contour has the same shape as the selected target contour and is located at a fixed position on the display interface, where the fixed position indicates the display position in the picture at which the user expects the target object to appear. The preset target contour and the selected target contour may differ in position, in display size, in display angle within the picture, and so on. The unmanned aerial vehicle can therefore be controlled, according to the difference between the preset target contour and the selected target contour, to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour. Automatic, precise composition is thus achieved; the user does not need to operate the unmanned aerial vehicle manually or repeatedly, and complicated manual composition is avoided.
In some embodiments, the composition processing method may be performed by a composition processing device. For example, the composition processing device may include a program for executing the composition processing method. Illustratively, the composition processing device includes at least a memory storing executable instructions of the composition processing method and a processor configured to execute the executable instructions. By way of example, the composition processing device includes, but is not limited to, a remote controller, a smart phone/cell phone, a tablet computer, a personal digital assistant (PDA), a laptop computer, a desktop computer, a media content player, a video game station/system, a virtual reality system, an augmented reality system, a wearable device (e.g., a watch, glasses, helmet, or pendant), or any other type of device.
In some embodiments, referring to FIG. 1, a composition processing system is provided, comprising a composition processing device 10 and an unmanned aerial vehicle (UAV) 20; the composition processing device 10 is communicatively connected to the unmanned aerial vehicle 20. FIG. 1 illustrates the composition processing device 10 as a remote controller with a display, where the display may be detachably connected to the remote controller or fixedly disposed on it. The composition processing device 10 may communicate with the drone 20 via, for example: the Internet, a local area network (LAN), a wide area network (WAN), Bluetooth, near field communication (NFC), networks based on mobile data protocols such as General Packet Radio Service (GPRS), GSM, Enhanced Data GSM Environment (EDGE), 3G, 4G, or Long Term Evolution (LTE), infrared (IR) communication, and/or WiFi; the connection may be wireless, wired, or a combination thereof. The unmanned aerial vehicle 20 is further equipped with an imaging device 30.
For example, referring to fig. 2, the unmanned aerial vehicle 20 includes at least a fuselage 21, a power system 22, and a flight control system 23. Taking a rotorcraft as an example, the fuselage 21 may include a central frame and one or more arms coupled to the central frame, the one or more arms extending radially from the central frame.
The power system 22 may include one or more electronic speed controllers 221 (ESCs for short), one or more propellers 223, and one or more motors 222 corresponding to the one or more propellers 223, wherein each motor 222 is connected between an electronic speed controller 221 and a propeller 223, and the motors 222 and propellers 223 are arranged on the arms of the unmanned aerial vehicle. The electronic speed controller 221 is configured to receive a driving signal generated by the flight control system 23 and to provide a driving current to the motor 222 according to the driving signal, so as to control the rotation speed of the motor 222. The motors 222 drive the propellers 223 in rotation to power the flight of the drone 20, enabling movement of the drone 20 in one or more degrees of freedom.
Flight control system 23 may include a flight controller 231 and a sensing system 232. The sensing system 232 is used to measure attitude information of the unmanned aerial vehicle, that is, position information and state information of the unmanned aerial vehicle 20 in space, for example, three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, three-dimensional angular velocity, and the like. The sensing system 232 may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS). The flight controller 231 is used to control the flight of the unmanned aerial vehicle, for example according to the attitude information measured by the sensing system. It should be appreciated that the flight controller 231 may control the drone 20 in accordance with preprogrammed instructions, or may control the drone in response to one or more remote control signals from the composition processing device (e.g., a remote controller).
The unmanned aerial vehicle 20 is provided with an imaging device 30, and the composition processing device 10 can control the unmanned aerial vehicle 20 to automatically drive the imaging device 30 to move according to the difference between the preset target profile and the selected target profile. The imaging device 30 may be a physical imaging device, for example. The imaging device 30 may be configured to detect electromagnetic radiation (e.g., visible light, infrared light, and/or ultraviolet light) and generate image data based on the detected electromagnetic radiation, which may be polychromatic (e.g., RGB, CMYK, HSV) or monochromatic (e.g., grayscale, black and white, tan). The imaging means may be a device for capturing images, such as a camera or video camera.
The imaging device 30 includes at least a photosensitive element and an optical element, and can capture color images, grayscale images, infrared images, and the like. The photosensitive element may be, for example, a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor, and the optical elements may include lenses, mirrors, filters, and the like. The imaging device 30 may take images or image sequences at a specified image resolution, or may capture image sequences at a specified frame rate. The imaging device 30 may have adjustable shooting parameters such as exposure (e.g., exposure time, shutter speed, aperture, film speed), gain, gamma, region of interest, binning, pixel clock, offset, trigger, and ISO. Exposure-related parameters may control the amount of light reaching the image sensor in the imaging device: for example, the shutter speed may control the amount of time for which light reaches the image sensor, and the aperture may control the amount of light that reaches the image sensor in a given time. Gain-related parameters may control the amplification of the signal from the optical sensor, and the ISO may control the camera's level of sensitivity to available light.
For example, referring to FIG. 2, the imaging device 30 may be mounted on the unmanned aerial vehicle 20 through a gimbal 24, or may be directly fixed on the unmanned aerial vehicle 20. The gimbal 24 may include a motor, and the flight controller 231 may control the movement of the gimbal 24 via the motor. Optionally, the gimbal 24 may further include a controller for controlling its movement by controlling the motor.
Next, referring to FIG. 3, FIG. 3 shows a schematic flowchart of a composition processing method. The method may be applied to a composition processing device, which may be communicatively connected to an unmanned aerial vehicle; the unmanned aerial vehicle may be equipped with an imaging device and may collect images along the way while it moves. The composition processing device may include a display (such as a touch screen) that provides a display interface; the display may be fixedly or detachably arranged on the composition processing device, and the connection between them may be wired or wireless. The method comprises the following steps:
In step S101, a picture acquired by an imaging device mounted on the unmanned aerial vehicle is displayed in a display interface.
In step S102, a selected target contour and a preset target contour are displayed in the display interface, where the selected target contour is the contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is located at a fixed position on the display interface.
In step S103, according to the difference between the preset target contour and the selected target contour, the unmanned aerial vehicle is controlled to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour.
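Steps S101 to S103 amount to a closed control loop: measure the difference between the two contours, command a correction, and repeat until they coincide. The toy sketch below illustrates this in one translational dimension pair only; the function name, gain, and tolerance are hypothetical choices (the patent does not prescribe an algorithm), and the real system would send velocity commands to the UAV or gimbal instead of shifting points directly.

```python
def run_composition(selected, preset, gain=0.5, max_steps=100, tol=0.5):
    """Toy proportional loop for steps S101-S103: repeatedly measure the
    centroid offset between the selected and preset contours and apply a
    fractional correction until the offset falls below `tol` pixels."""
    def centroid(pts):
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

    pts = [tuple(p) for p in selected]
    for _ in range(max_steps):
        (cx, cy), (px, py) = centroid(pts), centroid(preset)
        ex, ey = px - cx, py - cy
        if (ex * ex + ey * ey) ** 0.5 <= tol:
            break                               # composition reached
        # stand-in for "control the UAV to drive the imaging device to move"
        pts = [(x + gain * ex, y + gain * ey) for x, y in pts]
    return pts
```

With a gain below 1 the offset shrinks geometrically each iteration, mirroring how the displayed contour converges onto the preset contour as the aircraft moves.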
In this embodiment, the user does not need to operate the unmanned aerial vehicle manually; the unmanned aerial vehicle automatically drives the imaging device to move under the control of the composition processing device. While the imaging device is being moved, whether the picture acquired by the imaging device has reached the precise expected composition can be determined from the degree of overlap between the selected target contour and the preset target contour displayed in the display interface; the picture is determined to have reached the precise expected composition when the contour of the target object coincides with the preset target contour.
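One simple way to formalize the coincidence test described above is a per-point tolerance check between corresponding contour points. This is a minimal sketch; the function name and the pixel tolerance are assumptions, not the patent's own criterion.

```python
import math

def contours_coincide(selected, preset, tol=2.0):
    """Declare the expected composition reached when every point of the
    selected contour lies within `tol` pixels of its counterpart on the
    preset contour (contours given as equal-length (x, y) point lists)."""
    return all(math.hypot(ax - bx, ay - by) <= tol
               for (ax, ay), (bx, by) in zip(selected, preset))
```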
In some embodiments, the composition processing device may receive the picture acquired by the imaging device and display it on the display interface, so that the user can determine whether the picture meets the user's requirements (such as achieving the composition the user desires).
Illustratively, the composition processing device is preset with a composition mode, which can be started in response to a user instruction. In one example, a composition control is provided in the display interface, and the user can operate the control (e.g., by clicking, long-pressing, or sliding) according to actual needs, so that the composition processing device enters the composition mode in response to the triggering operation on the control.
After entering the composition mode, the user can select, according to actual needs, a target object to be composed from the picture, and the display interface can then display the contour of the target object (hereinafter referred to as the selected target contour) at the position of the target object. In this embodiment, the selected target contour determined by the shape contour of the target object is displayed on the display interface; compared with simply displaying a rectangular frame containing the target object, this enables a precise composition effect with an extremely small degree of deviation.
In some embodiments, the target object may include at least one of: a target character, a target object, a target graphic, a selected point, a selected line, or a selected area.
For example, the target person or target object may be obtained by image recognition performed on the picture by the composition processing apparatus: the composition processing apparatus may prestore reference data of the target person or target object, obtain identification data of each object in the picture by image recognition, and compare the identification data of the multiple objects with the reference data, thereby determining the target person or target object, which may then be further confirmed by the user. Alternatively, the target person or target object may be selected by the user directly in the picture, for example through a frame-selection operation. The composition processing apparatus may then determine the selected target contour from the shape contour of the target person or target object.
By way of example, the target graphic may include a symmetrical graphic and/or a graphic of a designated structure, including but not limited to a quadrilateral, a heart, a flower, a pentagram, an irregular polygon, or the like. In one example, the target graphic is selected based on a selection frame in the picture: the user may perform a frame-selection operation within a certain display range of the picture, the composition processing apparatus may display the selection frame based on that operation, and the graphic in the selection frame is then the currently selected target graphic. In another example, to save user operations, the composition processing apparatus may perform shape recognition on the picture to obtain at least one candidate graphic, and may further display the at least one candidate graphic for the user to select and confirm, so as to obtain the target graphic selected from the at least one candidate graphic. The composition processing apparatus may then determine the selected target contour based on the shape contour of the target graphic.
For example, the selected point, selected line, and/or selected area may be chosen around an object displayed in the picture, such as a river or a road: the user may select a point around the object, or trace a line segment (a selected line) or a closed curve (a selected area) according to actual needs. To improve the display effect: if the target object is a selected point, the selected target contour displayed in the display interface includes a thickened display of the selected point; if the target object is a selected line, the selected target contour includes the selected line and its extension line, which on the one hand lets the user know where the selected line is, and on the other hand makes it convenient to determine the difference (such as a position difference and/or an angle difference) between the selected target contour and the preset target contour; if the target object is a selected area, the selected target contour includes the closed curve surrounding the selected area, whose shape is not limited, it being understood that the selected area may be, for example, circular or square.
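The "selected line and its extension" display can be computed by extending the traced segment to the borders of the frame. The following is a small sketch under that reading; the function name and rounding behavior are illustrative assumptions.

```python
def extend_line(p1, p2, width, height):
    """Extend the selected line segment through p1 and p2 to the borders
    of a width x height frame, returning the two boundary endpoints of
    the extended line (the selected line plus its extension)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    ts = []
    if dx != 0:                                  # hits left/right borders
        ts += [(0 - x1) / dx, (width - x1) / dx]
    if dy != 0:                                  # hits top/bottom borders
        ts += [(0 - y1) / dy, (height - y1) / dy]
    pts = []
    for t in ts:
        x, y = x1 + t * dx, y1 + t * dy
        if -1e-9 <= x <= width + 1e-9 and -1e-9 <= y <= height + 1e-9:
            pts.append((round(x, 6), round(y, 6)))
    pts.sort()                                   # extreme intersections
    return pts[0], pts[-1]
```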
It can be understood that, during selection of the target object, if the selected object does not meet the requirement, the user can reselect or adjust it so that the composition processing apparatus obtains the finally selected target object. For example, if the first selected graphic does not meet the requirement, the target graphic can be reselected; if the position of the selected line or point does not meet the requirement, that position can be adjusted; and likewise, if the size of the selected area does not meet the requirement, it can be adjusted.
After the target object is selected, a preset target contour can be obtained according to the selected target contour corresponding to the target object, and the preset target contour is displayed at a fixed position on the display interface. The preset target contour and the selected target contour have the same shape, but the display size, the display position or the display angle, etc. of the preset target contour and the selected target contour may be the same or different. Wherein the fixed position is a desired composition position of the target object.
In an exemplary embodiment, the picture is displayed full-screen in the display interface. The fixed position may include at least one of: the center point of the display interface; an intersection of a horizontal N-section line and a vertical M-section line of the display interface; or a closed curve, a diagonal, a horizontal N-section line and/or a vertical M-section line in the display interface, where N and M are integers greater than 1. For example, N=2 denotes a horizontal bisector of the display interface, and M=3 denotes a vertical trisector. It may be understood that the fixed position may be preset, or may be custom-set by the user on the composition processing device according to actual requirements; for example, the user may draw a closed curve or a point at any position of the display interface, and the composition processing device may then take the position of that closed curve or point as the fixed position.
In one example, so that the user knows where the fixed positions in the display interface are, i.e., where the preset target contour will be displayed, indication information for the fixed positions may be shown in the display interface; the indication information may be lines or points displayed in the interface that mark the fixed positions. For example, referring to FIG. 4, a plurality of fixed positions are indicated by points and lines in the display interface: points A, C, G, and I are intersections of the horizontal and vertical trisectors, points B, D, F, and H are intersections of a bisector and a trisector, and point E is the center of the picture. The preset target contour may be located at one of the fixed positions A through I according to the actual application scenario, or may be located on one of the horizontal and vertical trisectors.
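Reading FIG. 4 as a 3x3 grid over the interface, the nine candidate points fall on the trisectors and bisectors. The sketch below computes them under that assumption (the A-I labeling left-to-right, top-to-bottom is hypothetical).

```python
def thirds_grid(width, height):
    """Nine candidate fixed positions as in FIG. 4: intersections of the
    horizontal/vertical trisectors and bisectors of a width x height
    interface, labeled A..I row by row with E at the centre."""
    xs = (width / 3, width / 2, 2 * width / 3)
    ys = (height / 3, height / 2, 2 * height / 3)
    labels = "ABCDEFGHI"
    return {labels[r * 3 + c]: (xs[c], ys[r])
            for r in range(3) for c in range(3)}
```

For a 1920x1080 interface, A is the classic rule-of-thirds point (640, 360) and E the center (960, 540).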
In another exemplary embodiment, when the picture is not displayed full-screen in the display interface, the fixed position may include at least one of: the center point of the display area where the picture is located; an intersection of a horizontal N-section line and a vertical M-section line of that display area; or a closed curve, a diagonal, a horizontal N-section line and/or a vertical M-section line of that display area, where N and M are integers greater than 1. For example, N=2 denotes a horizontal bisector of the display area where the picture is located, and M=3 denotes a vertical trisector of that area.
The composition processing device may be provided with composition modes. In one example, a plurality of composition modes can be provided, each corresponding to one fixed position, with different composition modes having different fixed positions, and the user can select among them according to actual needs. In another example, a single composition mode may be set that includes a plurality of fixed positions, from which the user may select one for composition according to actual needs.
In some embodiments, the preset target contour may be obtained from the selected target contour via at least one of rotation, movement, or zooming. The difference between the preset target contour and the selected target contour accordingly comprises at least one of the following parameter differences: an angle difference, a position difference, or a display size difference.
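The three parameter differences can be estimated directly from two same-shaped contours with corresponding points: the position difference from the centroids, the size difference from the spread about the centroids, and the angle difference from a centroid-to-point reference vector. This is a hypothetical helper; the patent does not prescribe how the differences are computed.

```python
import math

def contour_difference(selected, preset):
    """Estimate (position, scale, angle) differences between two
    same-shaped contours given as corresponding ordered (x, y) lists."""
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    cs, cp = centroid(selected), centroid(preset)
    dx, dy = cp[0] - cs[0], cp[1] - cs[1]          # position difference

    def spread(pts, c):
        return sum(math.hypot(p[0] - c[0], p[1] - c[1]) for p in pts) / len(pts)

    scale = spread(preset, cp) / spread(selected, cs)  # display size ratio

    def ang(p, c):
        return math.atan2(p[1] - c[1], p[0] - c[0])

    dtheta = ang(preset[0], cp) - ang(selected[0], cs)  # angle difference
    dtheta = (dtheta + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return (dx, dy), scale, dtheta
```

Each returned quantity would feed a separate control channel: translation for the position difference, distance/zoom for the scale, and yaw or gimbal rotation for the angle.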
In one example, the position of the target object is already the desired composition position, that is, a fixed position in the display interface, but the user is not satisfied with the display angle of the target object at that fixed position. A preset target contour may then be obtained by rotating the selected target contour corresponding to the target object; the preset target contour and the selected target contour have the same display position but different display angles. The composition processing device may control the unmanned aerial vehicle to automatically drive the imaging device to move according to the angle difference between the preset target contour and the selected target contour, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour.
In one example, the position of the target object is a desired composition position, that is, a fixed position in the display interface, if the user is not satisfied with the display size of the target object on the fixed position, the preset target contour may be obtained by scaling the selected target contour corresponding to the target object, and then the composition processing device may control the unmanned aerial vehicle to automatically drive the imaging device to move according to the size difference between the preset target contour and the selected target contour, so that the contour of the target object in the image acquired by the imaging device coincides with the preset target contour.
In one example, the position of the target object is not a fixed position in the display interface, and the user is not satisfied with the display position of the target object, then a preset target contour displayed on the fixed position may be obtained by moving a selected target contour corresponding to the target object, where the display positions of the preset target contour and the selected target contour are different, and then the composition processing device may control the unmanned aerial vehicle to automatically drive the imaging device to move according to the position difference between the preset target contour and the selected target contour, so that the contour of the target object in the image acquired by the imaging device coincides with the preset target contour.
Of course, depending on the actual situation, at least two of these operations may be combined to obtain the preset target contour from the selected target contour: for example, moving and scaling, rotating and scaling, or moving, rotating, and scaling, which is not limited in this embodiment.
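The three derivation operations can be sketched as one transform of the contour's vertices. A minimal Python sketch, assuming the contour is a list of (x, y) display coordinates and that rotation and scaling are applied about the contour's center (both assumptions, not stated in the patent):

```python
import math

def transform_contour(points, dx=0.0, dy=0.0, angle_deg=0.0, scale=1.0):
    """Derive a preset target contour from a selected target contour by
    scaling/rotating about the contour center, then moving by (dx, dy)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y in points:
        rx, ry = x - cx, y - cy                     # center-relative coords
        rx, ry = rx * scale, ry * scale             # zoom
        rx, ry = rx * cos_a - ry * sin_a, rx * sin_a + ry * cos_a  # rotate
        out.append((cx + rx + dx, cy + ry + dy))    # restore center, move
    return out
```

Calling it with only `dx`/`dy` reproduces the pure-move case; combining parameters reproduces the combined operations mentioned above.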
In an exemplary embodiment, when there are a plurality of fixed positions, the composition processing device may, after acquiring the selected target object, obtain the selected target contour from the shape contour of the target object and superimpose it on the position where the target object is located, the selected target contour and the picture being displayed in different layers of the display interface. The user may drag the selected target contour; if the composition processing device detects that it has been dragged to the vicinity of a fixed position, a preset target contour may be acquired, for example by copying the selected target contour, and displayed at that fixed position. In this embodiment, the user may select the fixed position at which composition is desired according to actual needs, meeting the user's personalized needs.
As a possible implementation, for convenience of user operation, if the selected target profile is detected to have been dragged to the vicinity of the fixed position, the acquired preset target profile can be automatically adsorbed to the fixed position without further user operations, which helps reduce the user's operation steps.
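The automatic adsorption could be implemented as a snap-to-nearest check on the dragged contour's center; a hypothetical sketch (the 30-pixel threshold and function names are assumptions):

```python
def snap_to_fixed_position(drag_center, fixed_positions, threshold=30.0):
    """If the dragged contour's center is within `threshold` pixels of some
    fixed position, return that position (the contour is 'adsorbed' there);
    otherwise return None and leave the contour where the user dropped it."""
    best, best_d = None, threshold
    for fx, fy in fixed_positions:
        d = ((drag_center[0] - fx) ** 2 + (drag_center[1] - fy) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = (fx, fy), d
    return best
```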
When the user drags the selected target contour away from the position of the target object, a copy of the selected target contour may still be displayed at the position of the target object; that is, both the contour at the target object's position and the contour being dragged are displayed in the display interface. The two contours may differ in color, or one may be drawn with a solid line and the other with a dashed line, to help the user distinguish them.
In another exemplary embodiment, when there is only one fixed position, the composition processing device may, after acquiring the selected target object, obtain the selected target contour from the shape contour of the target object. If the position of the target object is detected not to be the fixed position, the selected target contour may be copied to obtain a preset target contour, which is displayed directly at the fixed position; the preset target contour may further be rotated or scaled.
For convenience of user distinction between the selected target profile and the preset target profile, the display colors of the selected target profile and the preset target profile may be different, such as one being a red line and the other being a green line; and/or one of the selected target profile and the preset target profile is a solid line, and the other is a dashed line.
In some scenes, to achieve a fully symmetrical fine composition effect, the preset target contour may be displayed point-symmetrically about the fixed position. When the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour, the target object is displayed fully symmetrically about the fixed position, which can satisfy the requirements of certain demanding photographic scenes.
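Point symmetry about the fixed position can be checked on a polygon contour whose vertices are stored in order; a small sketch under that assumption (the names are illustrative, not from the patent):

```python
def is_point_symmetric(points, center, tol=1e-6):
    """True if an ordered polygon contour is point-symmetric about `center`:
    reflecting vertex i through the center lands on vertex i + n/2."""
    n = len(points)
    if n % 2:                      # odd vertex count cannot pair up
        return False
    cx, cy = center
    for i in range(n // 2):
        x, y = points[i]
        xm, ym = points[i + n // 2]
        if abs((2 * cx - x) - xm) > tol or abs((2 * cy - y) - ym) > tol:
            return False
    return True
```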
In one example, when the position of the target object is not a fixed position in the display interface, indication information of the fixed positions (lines or points) may also be displayed superimposed on the picture of the imaging device to provide a reference for the user. For example, as shown in fig. 5A, the indication information of the fixed positions, such as the composition points and/or composition lines shown in fig. 4, may be superimposed on the picture acquired by the imaging device. When a plurality of fixed positions exist, the user can select the fixed position desired for the composition according to the actual application scene.
For example, in fig. 5A, assume the selected target object is an attic building in the picture, the picture is displayed full screen on the display interface, and the fixed position determined by the user is the center of the display interface. As shown in fig. 5B, the selected target contour 100 may be displayed at the position of the target object, and a preset target contour 200 may be displayed at the center of the display interface, the preset target contour 200 being obtained by moving the selected target contour 100: for example, if the user is detected dragging the selected target contour 100 to the vicinity of the center point E, the composition processing device may copy the selected target contour 100 to obtain the preset target contour 200 and automatically adsorb the copy to the center point E. Of course, if the only fixed position is the center of the display interface, the preset target contour 200 may be displayed directly at the center point without displaying any indication information of fixed positions. Besides moving the selected target contour 100, a rotation or scaling operation may also be performed on it.
In another example, as shown in fig. 6, the target object is a selected line, drawn around a highway. Considering that the line selected by the user may be short, an extension line is displayed in addition to the selected line, so the selected target contour displayed in the display interface includes the selected line and its extension; this makes it easier for the user to see where the selected line lies, and also to determine the position difference and/or angle difference between the selected target contour and the preset target contour. Assuming the fixed position is a diagonal of the display interface, as shown in fig. 6, the selected target contour 100 and the preset target contour 200 may both be displayed on the display interface; to distinguish them, the selected target contour 100 is drawn with a solid line and the preset target contour 200 with a dashed line.
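Displaying the extension line amounts to prolonging the user's short segment toward the display borders. A simplified sketch (assumed names; clipping of the extended endpoints to the top/bottom edges is omitted for brevity):

```python
def extend_line(p1, p2, width, height):
    """Extend the user-selected segment (p1, p2) across the display so a
    short selection still reads as a full composition line."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:                          # vertical segment: span full height
        return (x1, 0.0), (x1, float(height))
    k = (y2 - y1) / (x2 - x1)             # slope
    b = y1 - k * x1                       # y-intercept
    # Endpoints at the left and right borders (not clipped vertically here).
    return (0.0, b), (float(width), k * width + b)
```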
In some embodiments, after the preset target contour and the selected target contour are acquired, the composition processing device may control the unmanned aerial vehicle to automatically drive the imaging device to move according to a difference between the preset target contour and the selected target contour, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour. In the process that the unmanned aerial vehicle drives the imaging device to move, a user can see that the target object in the real-time picture acquired by the imaging device is gradually close to a preset target contour at a fixed position until the contour of the target object coincides with the preset target contour, so that an automatic fine composition effect is realized.
In some possible implementations, the composition processing device may send a control instruction to the unmanned aerial vehicle according to a difference between the preset target profile and the selected target profile, so as to control the unmanned aerial vehicle to automatically drive the imaging device to move.
In an example, for example, on the basis of the embodiment shown in fig. 5A and fig. 5B, the composition processing device may control the unmanned aerial vehicle to automatically drive the imaging device to move according to the difference between the preset target profile and the selected target profile, so as to obtain an effect that the profile of the target object in the image acquired by the imaging device shown in fig. 5C coincides with the preset target profile.
In some embodiments, since the preset target contour may be obtained from the selected target contour via rotation, movement, and/or scaling, there are one or more corresponding points between the preset target contour and the selected target contour, and at least one parameter difference between them can be determined from the positional relationship between these corresponding points. The positional relationship includes a two-dimensional positional relationship and/or a three-dimensional positional relationship: the two-dimensional relationship is between the display positions of the corresponding points on the display interface (a two-dimensional space), while the three-dimensional relationship is between the three-dimensional spatial position coordinates corresponding to those points.
Illustratively, the one or more corresponding points include points located within the preset target profile and within the selected target profile, respectively, and/or points located on the preset target profile and on the selected target profile, respectively.
For example, when the preset target contour is obtained by moving the selected target contour, the corresponding points may include the center point of the preset target contour and the center point of the selected target contour, and the position difference between the two contours may be determined from the positional relationship between these center points. Of course, the position difference may also be determined from other corresponding points, which this embodiment does not limit, such as a vertex of the preset target contour and the corresponding vertex of the selected target contour, or the midpoint of a side of the preset target contour and the midpoint of the corresponding side of the selected target contour.
For example, when the preset target profile is obtained by rotating the selected target profile, the corresponding points may include a plurality of points respectively located on the preset target profile and the selected target profile, and the angular difference between the preset target profile and the selected target profile may be determined according to the positional relationship of the plurality of points respectively located on the preset target profile and the selected target profile.
For example, when the preset target contour is obtained by scaling the selected target contour, the corresponding points may be a plurality of points for constructing the preset target contour and a plurality of points for constructing the selected target contour, and a display size difference between the preset target contour and the selected target contour may be determined according to a positional relationship therebetween.
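All three parameter differences can be estimated from corresponding points, e.g. contour centers for the position difference, center-to-vertex directions for the angle difference, and mean center distances for the display size difference. A hypothetical sketch along those lines (the specific estimators are assumptions, not the patent's method):

```python
import math

def contour_differences(selected, preset):
    """Estimate the position, angle, and display-size differences between
    the selected and preset target contours (lists of corresponding
    (x, y) points in matching vertex order)."""
    def center(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    (sx, sy), (px, py) = center(selected), center(preset)
    position = (px - sx, py - sy)                  # position difference
    # Angle: compare directions from each center to the first vertex.
    sa = math.atan2(selected[0][1] - sy, selected[0][0] - sx)
    pa = math.atan2(preset[0][1] - py, preset[0][0] - px)
    angle = math.degrees(pa - sa)                  # angle difference (deg)
    # Size: ratio of mean vertex distances from the center.
    def mean_r(pts, c):
        return sum(math.hypot(x - c[0], y - c[1]) for x, y in pts) / len(pts)
    scale = mean_r(preset, (px, py)) / mean_r(selected, (sx, sy))
    return position, angle, scale
```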
In an exemplary embodiment, the composition processing device may determine a movement direction of the unmanned aerial vehicle according to a difference between the preset target contour and the selected target contour, so that the unmanned aerial vehicle automatically drives the image capturing device to move in the movement direction until the contour of the target object in the real-time picture acquired by the imaging device coincides with the preset target contour. In this embodiment, the unmanned aerial vehicle is automatically controlled according to the difference between the preset target profile and the selected target profile, so as to achieve the purpose that the profile of the target object in the real-time picture acquired by the imaging device coincides with the preset target profile, and achieve an automatic fine patterning effect.
In some scenes, when the unmanned aerial vehicle needs to move in multiple directions, its motors must operate in a coordinated manner and the power consumption is relatively high. To reduce power consumption, the movement direction may be limited to a translational direction in which the flying height, the heading, and the acquisition angle of the imaging device remain unchanged: the unmanned aerial vehicle then only performs a translational movement and needs no other adjustment, so the desired composition is achieved at minimum cost and the endurance of the unmanned aerial vehicle is improved. Moreover, when only a translation is performed, with the flying height, heading, and acquisition angle of the imaging device unchanged, the differences of the same target object (such as a scene, a character, a target graphic, a selected point, line, or area) between different pictures, for example differences of display size or acquisition angle, are small enough to be ignored; the position of the target object in the picture thus changes while its display size stays essentially unchanged.
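Under these constraints, the on-screen position difference maps directly to a horizontal translation. A toy sketch assuming a pinhole camera pointing straight down (the model, names, and sign conventions are assumptions; a real implementation would use the actual camera pose):

```python
def translation_command(pixel_dx, pixel_dy, height_m, focal_px):
    """Map the target's on-screen position difference (pixels) to a
    horizontal translation of the drone (meters). Pinhole model with a
    straight-down camera; flying height, heading, and camera angle stay
    unchanged, so only a translation is commanded."""
    # Moving the camera +x in the world shifts the target -x in the frame,
    # hence the sign flip; the image y axis points downward.
    east = -pixel_dx * height_m / focal_px
    north = pixel_dy * height_m / focal_px
    return east, north
```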
Illustratively, the difference between the preset target profile and the selected target profile includes at least one of the following parameter differences: an angle difference, a position difference, or a display size difference. When the preset target profile is obtained by rotating the selected target profile, the unmanned aerial vehicle can be controlled to automatically drive the imaging device to move according to the angle difference between the two contours, so as to adjust the orientation of the imaging device; during the movement, the user sees the target object in the real-time picture gradually rotate until its contour coincides with the preset target contour.

When the preset target contour is obtained by moving the selected target contour, the unmanned aerial vehicle can be controlled to automatically drive the imaging device to move according to the position difference between the two contours, so as to adjust the position of the target object in the acquired picture; during the movement, the user sees the target object in the real-time picture gradually move until its contour coincides with the preset target contour.

When the preset target contour is obtained by scaling the selected target contour, the unmanned aerial vehicle can be controlled to automatically drive the imaging device to move according to the display size difference between the two contours, so as to adjust the distance between the imaging device and the target object; during the movement, the user sees the display size of the target object in the real-time picture gradually change until its contour coincides with the preset target contour.
In another exemplary embodiment, where the unmanned aerial vehicle is provided with a cradle head to which the imaging device is connected, and considering that moving the whole unmanned aerial vehicle consumes more power, the unmanned aerial vehicle may keep its posture unchanged, provided the rotation limits of the cradle head allow, while only the cradle head is controlled to rotate to drive the imaging device, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour. For example, an adjustment amount of the cradle head, including its movement amount in the determined movement direction, may be determined from the difference between the preset target contour and the selected target contour; the unmanned aerial vehicle keeps its posture unchanged while the cradle head is automatically adjusted by that amount, driving the imaging device to move so that the contour of the target object in the acquired picture coincides with the preset target contour. In this embodiment, controlling only the cradle head, within its rotation limits, reduces the power consumption of the unmanned aerial vehicle.
Of course, when controlling only the cradle head cannot make the contour of the target object in the picture acquired by the imaging device coincide with the preset target contour, the unmanned aerial vehicle may also be moved. For example, a first adjustment amount of the cradle head and a second adjustment amount of the unmanned aerial vehicle may be determined according to the difference between the preset target contour and the selected target contour, the first adjustment amount including the movement amount of the cradle head in the determined movement direction and the second adjustment amount including the movement amount of the unmanned aerial vehicle in that direction. The cradle head is then automatically adjusted by the first adjustment amount and the unmanned aerial vehicle by the second adjustment amount, driving the imaging device to move so that the contour of the target object in the acquired picture coincides with the preset target contour.
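Splitting the required adjustment between the cradle head and the unmanned aerial vehicle can be sketched as clamping against the cradle head's rotation limits, with the aircraft covering the remainder. A hypothetical one-axis example (the limit values and names are illustrative only):

```python
def allocate_adjustment(required_deg, gimbal_deg,
                        gimbal_min=-90.0, gimbal_max=30.0):
    """Split a required angular adjustment between the cradle head
    (preferred, lower power) and the drone body: the cradle head absorbs
    as much as its rotation limits allow, the drone covers the rest."""
    target = gimbal_deg + required_deg
    clamped = max(gimbal_min, min(gimbal_max, target))
    gimbal_part = clamped - gimbal_deg       # first adjustment amount
    drone_part = required_deg - gimbal_part  # second adjustment amount
    return gimbal_part, drone_part
```

When the cradle head can absorb the whole adjustment, the drone's share is zero and its posture stays unchanged, matching the low-power case described above.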
In an exemplary embodiment, referring to FIG. 7, FIG. 7 shows a schematic flow diagram of another composition processing method:
in step S31, the picture acquired by the imaging device mounted on the unmanned aerial vehicle is displayed on the display interface, together with indication information of the fixed positions, displayed as points or lines, for example the transverse and longitudinal bisectors, the transverse and longitudinal trisectors, the diagonals, and the intersections of these lines.
In step S321, it is determined whether the user selects the target graphic from the frame, if so, step S33 is performed; if not, step S322 is performed.
In step S322, it is identified whether the target graphic exists in the screen, if so, step S33 is executed, and if not, step S321 is executed.
In step S33, a selected target contour of the target graphic is displayed.
In step S34, if it is detected that the selected target profile is dragged to the vicinity of the fixed position, a preset target profile is acquired, and the preset target profile is displayed at the fixed position. For example, the preset target profile is symmetrically placed with the fixed position as a center.
In step S35, according to the difference between the preset target profile and the selected target profile, a translation direction of the unmanned aerial vehicle under the conditions that the flying height, the direction and the acquisition angle of the imaging device are unchanged is determined, and the unmanned aerial vehicle is controlled to automatically drive the imaging device to translate towards the translation direction.
In step S36, a frame acquired by the imaging device is received and displayed.
In step S37, it is determined whether the contour of the target object in the screen coincides with the preset target contour; if so, executing step S38; if not, step S36 is performed.
In step S38, the unmanned aerial vehicle is controlled to stop moving, and a target picture in which the outline of the target object coincides with the preset target outline is displayed.
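Steps S35-S38 form a closed loop; a schematic sketch with placeholder helpers (nothing here reflects an actual drone API, and the center-distance coincidence test is a simplification):

```python
def composition_loop(get_frame, detect_contour, preset, send_translation,
                     tol=3.0):
    """Closed-loop sketch of steps S35-S38: keep translating the drone until
    the target's contour in the live picture coincides with the preset
    target contour (centers within `tol` pixels), then stop."""
    def center(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    px, py = center(preset)
    while True:
        frame = get_frame()                       # S36: latest live picture
        cx, cy = center(detect_contour(frame))    # track target in picture
        dx, dy = px - cx, py - cy
        if (dx * dx + dy * dy) ** 0.5 <= tol:     # S37: contours coincide?
            send_translation(0.0, 0.0)            # S38: stop moving
            return frame
        send_translation(dx, dy)                  # S35: keep translating
```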
The technical features of the above embodiments may be combined arbitrarily provided there is no conflict or contradiction between them; although not every such combination is described in detail, any such combination of technical features also falls within the scope of the disclosure of this specification.
Accordingly, referring to fig. 8, an embodiment of the present application further provides a composition processing device, including:
a memory 52 for storing executable instructions;
one or more processors 51;
a display 53 for displaying a screen acquired by an imaging device mounted in the unmanned plane on a display interface; displaying a selected target contour and a preset target contour, wherein the selected target contour is a contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is positioned at a fixed position on the display interface;
wherein the one or more processors 51, when executing the executable instructions, are individually or collectively configured to: according to the difference between the preset target contour and the selected target contour, the unmanned aerial vehicle is controlled to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour.
The processor 51 executes the executable instructions stored in the memory 52. The processor 51 may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 52 stores the executable instructions of the composition processing method. The memory 52 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. Moreover, the apparatus may cooperate with a network storage device that performs the storage function of the memory via a network connection. The memory 52 may be an internal storage unit of the apparatus 10, such as a hard disk or internal memory of the apparatus 10. The memory 52 may also be an external storage device of the apparatus 10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card (Flash Card) provided on the apparatus 10. Further, the memory 52 may include both an internal storage unit and an external storage device of the apparatus 10. The memory 52 is used to store a computer program 55 as well as other programs and data required by the apparatus, and may also be used to temporarily store data that has been output or is to be output.
The display 53 includes, but is not limited to, a touch display (e.g., an infrared-type touch screen, a resistive-type touch screen, a surface acoustic wave-type touch screen, or a capacitive-type touch screen) or a non-touch display.
In some embodiments, the preset target profile is obtained from the selected target profile via at least one of: rotation, movement or scaling; and/or the difference between the preset target profile and the selected target profile comprises at least one of the following parameter differences: angle difference, position difference, or display size difference.
In some embodiments, the preset target contour and the selected target contour have one or more corresponding points therebetween, and the parameter difference is determined according to a positional relationship between the one or more corresponding points.
In some embodiments, the one or more corresponding points include: points located within the preset target profile and within the selected target profile, respectively, and/or points located on the preset target profile and on the selected target profile, respectively.
In some embodiments, the corresponding points include a center point within the preset target profile and a center point within the selected target profile.
In some embodiments, the target object comprises at least one of: a target character, a target object, a target graphic, a selected point, a selected line, or a selected area.
In some embodiments, the target graphic comprises at least a symmetrical graphic; and/or the selected point, selected line and/or selected area is selected around an object displayed in the screen.
In some embodiments, the target graphic is selected based on a selection box in the screen and/or the target graphic is selected from at least one candidate graphic identified in the screen.
In some embodiments, if the target object is a selected line, the selected target outline displayed in the display interface includes: the selected line and its extension.
In some embodiments, the processor 51 is further configured to display, via a display, a selected target profile in the display interface; and if the selected target contour is detected to be dragged to the vicinity of the fixed position, acquiring a preset target contour, and displaying the preset target contour at the fixed position.
In some embodiments, the processor 51 is further configured to adsorb the acquired preset target profile to the fixed location if it is detected that the selected target profile is dragged to the vicinity of the fixed location.
In some embodiments, the preset target profile is obtained by copying the selected target profile.
In some embodiments, the display colors of the selected target profile and the preset target profile are different; and/or one of the selected target profile and the preset target profile is a solid line, and the other is a dashed line.
In some embodiments, the preset target profile is displayed point-symmetrically with the fixed position as a center.
In some embodiments, the screen is displayed full screen in the display interface; the fixed location includes at least one of: the center point of the display interface, the intersection point of the transverse N-bisector and the longitudinal M-bisector of the display interface, a closed curve, a diagonal line or the transverse N-bisector and/or the longitudinal M-bisector in the display interface; wherein N, M are integers greater than 1.
In some embodiments, the processor 51 is further configured to: and determining the movement direction of the unmanned aerial vehicle according to the difference between the preset target profile and the selected target profile, so that the unmanned aerial vehicle automatically drives the imaging device to move towards the movement direction until the profile of the target object in the real-time picture acquired by the imaging device coincides with the preset target profile.
In some embodiments, the direction of motion comprises a translational direction of the drone at a constant altitude, orientation, and acquisition angle of the imaging device.
In some embodiments, the cradle head is mounted on the unmanned aerial vehicle, the imaging device is connected with the cradle head, the processor 51 is further configured to determine an adjustment amount of the cradle head according to a difference between the preset target profile and the selected target profile, so that the unmanned aerial vehicle maintains a constant posture, and automatically control the cradle head to adjust according to the adjustment amount, so as to drive the imaging device to move, so that the profile of the target object in the picture acquired by the imaging device coincides with the preset target profile.
In some embodiments, the preset target profile is obtained by rotating the selected target profile, and the processor is further configured to control the unmanned aerial vehicle to automatically drive the imaging device to move according to the angle difference between the preset target profile and the selected target profile, so as to adjust the orientation of the imaging device; and/or,

the preset target profile is obtained by moving the selected target profile, and the processor is further configured to control the unmanned aerial vehicle to automatically drive the imaging device to move according to the position difference between the preset target profile and the selected target profile, so as to adjust the position of the target object in the picture acquired by the imaging device; and/or,

the preset target profile is obtained by scaling the selected target profile, and the processor is further configured to control the unmanned aerial vehicle to automatically drive the imaging device to move according to the display size difference between the preset target profile and the selected target profile, so as to adjust the distance between the imaging device and the target object.
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant details. The various embodiments described herein may be implemented using a computer readable medium, using computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or other electronic units designed to perform the functions described herein. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules, each of which performs at least one function or operation. The software code may be implemented as a software application (or program) written in any suitable programming language, and may be stored in a memory and executed by a controller.
The device may include, but is not limited to, a processor 51, a memory 52, and a display 53. Those skilled in the art will appreciate that fig. 8 is merely an example of the device 10 and does not limit the device 10, which may include more or fewer components than shown, combine certain components, or use different components; for example, the device may also include input and output devices, network access devices, buses, and the like.
Correspondingly, referring to fig. 1, an embodiment of the present application further provides a composition processing system, which includes the above composition processing device and an unmanned aerial vehicle carrying an imaging device. The composition processing device is communicatively connected to the unmanned aerial vehicle.
Accordingly, in an exemplary embodiment, a non-transitory computer readable storage medium including instructions is also provided, such as a memory whose instructions are executable by a processor of the device to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
When the instructions in the non-transitory computer readable storage medium are executed by a processor of a terminal, the terminal is enabled to perform the above-described method.
Accordingly, embodiments of the present application also provide a computer program product comprising a computer program which, when executed, implements any of the methods described above.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The method and device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to help in understanding the method and core ideas of the present application. Meanwhile, those skilled in the art may make modifications to the specific implementations and the scope of application in accordance with the ideas of the present application. In summary, the contents of this description should not be construed as limiting the present application.
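To make the closed-loop idea of the description concrete — display a fixed preset contour, measure its difference from the tracked contour, command motion, and repeat until the two coincide — the following hedged Python sketch simulates that loop with caller-supplied stand-ins for the tracking stage (`get_contour`) and the flight-control stage (`move`); all names, the tolerance, and the proportional-control choice are illustrative assumptions:

```python
def composition_loop(get_contour, preset_center, move, max_steps=100, tol=1.0):
    """Iteratively drive the vehicle until the tracked contour's center
    coincides with the preset contour's center (within tol pixels).

    get_contour() returns the current selected-contour center in pixels;
    move(dx, dy) applies a correction toward the preset position.  Both
    are caller-supplied stand-ins for tracking and flight control.
    """
    for _ in range(max_steps):
        cx, cy = get_contour()
        dx, dy = preset_center[0] - cx, preset_center[1] - cy
        if abs(dx) <= tol and abs(dy) <= tol:
            return True  # contours coincide: composition achieved
        move(dx, dy)
    return False  # did not converge within max_steps
```

A caller might pass a `move` that applies half of the remaining offset per step, which converges geometrically toward the preset position.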

Claims (22)

  1. A composition processing method, comprising:
    displaying, in a display interface, a picture acquired by an imaging device carried on an unmanned aerial vehicle; and
    displaying a selected target contour and a preset target contour in the display interface, wherein the selected target contour is a contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is positioned at a fixed position on the display interface;
    according to the difference between the preset target contour and the selected target contour, the unmanned aerial vehicle is controlled to automatically drive the imaging device to move, so that the contour of the target object in the picture acquired by the imaging device coincides with the preset target contour.
  2. The method of claim 1, wherein the preset target profile is obtained from the selected target profile via at least one of: rotation, movement, or scaling; and/or,
    the differences between the preset target profile and the selected target profile include at least one of the following parameter differences: angle difference, position difference, or display size difference.
  3. The method of claim 2, wherein one or more corresponding points exist between the preset target profile and the selected target profile, and the parameter difference is determined based on a positional relationship between the one or more corresponding points.
  4. A method according to claim 3, wherein the one or more corresponding points comprise:
    points located within the preset target profile and within the selected target profile, respectively, and/or points located on the preset target profile and on the selected target profile, respectively.
  5. The method of claim 4, wherein the corresponding points comprise a center point within the preset target profile and a center point within the selected target profile.
  6. The method of claim 1, wherein the target object comprises at least one of: a target character, a target object, a target graphic, a selected point, a selected line, or a selected area.
  7. The method of claim 6, wherein the target graphic comprises at least a symmetrical graphic; and/or the selected point, selected line and/or selected area is selected around an object displayed in the screen.
  8. The method of claim 6, wherein the target graphic is selected based on a selection box in the screen and/or the target graphic is selected from at least one candidate graphic identified in the screen.
  9. The method of claim 6, wherein, if the target object is a selected line, the selected target outline displayed in the display interface comprises the selected line and its extension line.
  10. The method of claim 1, wherein displaying the selected target profile and the preset target profile in the display interface comprises:
    displaying a selected target profile in the display interface;
    and if the selected target contour is detected to be dragged to the vicinity of the fixed position, acquiring a preset target contour, and displaying the preset target contour at the fixed position.
  11. The method as recited in claim 10, further comprising:
    and if the selected target contour is detected to be dragged to the vicinity of the fixed position, the acquired preset target contour is snapped to the fixed position.
  12. The method according to claim 10 or 11, wherein the preset target contour is obtained by copying the selected target contour.
  13. The method according to any one of claims 1 to 11, wherein the display colors of the selected target profile and the preset target profile are different; and/or one of the selected target profile and the preset target profile is a solid line, and the other is a dashed line.
  14. The method of claim 1, wherein the predetermined target profile is displayed point-symmetrically centered on the fixed location.
  15. The method of claim 1, wherein the screen is displayed full screen in the display interface;
    the fixed location includes at least one of: the center point of the display interface, the intersection point of the transverse N-bisector and the longitudinal M-bisector of the display interface, a closed curve, a diagonal line or the transverse N-bisector and/or the longitudinal M-bisector in the display interface; wherein N, M are integers greater than 1.
  16. The method of claim 1, wherein controlling the drone to automatically move the imaging device based on the difference between the preset target profile and the selected target profile comprises:
    and determining the movement direction of the unmanned aerial vehicle according to the difference between the preset target profile and the selected target profile, so that the unmanned aerial vehicle automatically drives the imaging device to move towards the movement direction until the profile of the target object in the real-time picture acquired by the imaging device coincides with the preset target profile.
  17. The method of claim 16, wherein the direction of motion is a translational direction along which the drone moves while keeping its altitude, its heading, and the acquisition angle of the imaging device constant.
  18. The method of claim 1, wherein a gimbal is mounted on the unmanned aerial vehicle and the imaging device is connected to the gimbal, and the controlling the unmanned aerial vehicle to automatically drive the imaging device to move according to the difference between the preset target profile and the selected target profile comprises:
    determining an adjustment amount of the gimbal according to the difference between the preset target profile and the selected target profile, so that the unmanned aerial vehicle keeps its attitude unchanged, and automatically controlling the gimbal to adjust according to the adjustment amount, thereby driving the imaging device to move so that the profile of the target object in the picture acquired by the imaging device coincides with the preset target profile.
  19. The method of claim 2, wherein the preset target profile is obtained by rotating the selected target profile, and the method further comprises: controlling, according to the angle difference between the preset target profile and the selected target profile, the unmanned aerial vehicle to automatically drive the imaging device to move so as to adjust the orientation of the imaging device; and/or,
    the preset target profile is obtained by moving the selected target profile, and the method further comprises: controlling, according to the position difference between the preset target profile and the selected target profile, the unmanned aerial vehicle to automatically drive the imaging device to move so as to adjust the position of the target object within the picture acquired by the imaging device; and/or,
    the preset target profile is obtained by scaling the selected target profile, and the method further comprises: controlling, according to the display size difference between the preset target profile and the selected target profile, the unmanned aerial vehicle to automatically drive the imaging device to move so as to adjust the distance between the imaging device and the target object.
  20. A composition processing apparatus, comprising:
    a memory for storing executable instructions;
    one or more processors;
    the display is used for displaying, in a display interface, pictures acquired by an imaging device carried on the unmanned aerial vehicle, and for displaying a selected target contour and a preset target contour, wherein the selected target contour is a contour of a target object selected in the picture, and the preset target contour has the same shape as the selected target contour and is positioned at a fixed position on the display interface;
    Wherein the one or more processors, when executing the executable instructions, are individually or collectively configured to perform the method of any one of claims 1 to 19.
  21. A composition processing system comprising at least one composition processing apparatus according to claim 20 and an unmanned aerial vehicle on which an imaging apparatus is mounted; the composition processing device is in communication connection with the unmanned aerial vehicle.
  22. A computer readable storage medium storing executable instructions which when executed by a processor implement the method of any one of claims 1 to 19.
CN202180101678.7A 2021-11-19 2021-11-19 Composition processing method, device, system and storage medium Pending CN117859103A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/131884 WO2023087272A1 (en) 2021-11-19 2021-11-19 Image composition processing method and apparatus, system, and storage medium

Publications (1)

Publication Number Publication Date
CN117859103A true CN117859103A (en) 2024-04-09

Family

ID=86396012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180101678.7A Pending CN117859103A (en) 2021-11-19 2021-11-19 Composition processing method, device, system and storage medium

Country Status (2)

Country Link
CN (1) CN117859103A (en)
WO (1) WO2023087272A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113163119A (en) * 2017-05-24 2021-07-23 深圳市大疆创新科技有限公司 Shooting control method and device
CN107357312A (en) * 2017-07-28 2017-11-17 上海瞬动科技有限公司合肥分公司 A kind of UAV Intelligent flight control method based on target pattern
CN109241820B (en) * 2018-07-10 2020-11-27 北京二郎神科技有限公司 Unmanned aerial vehicle autonomous shooting method based on space exploration
US11755984B2 (en) * 2019-08-01 2023-09-12 Anyvision Interactive Technologies Ltd. Adaptive positioning of drones for enhanced face recognition
CN112164015B (en) * 2020-11-30 2021-04-23 中国电力科学研究院有限公司 Monocular vision autonomous inspection image acquisition method and device and power inspection unmanned aerial vehicle

Also Published As

Publication number Publication date
WO2023087272A1 (en) 2023-05-25

Similar Documents

Publication Publication Date Title
US20220078349A1 (en) Gimbal control method and apparatus, control terminal and aircraft system
US11722647B2 (en) Unmanned aerial vehicle imaging control method, unmanned aerial vehicle imaging method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle
EP3188467B1 (en) Method for image capturing using unmanned image capturing device and electronic device supporting the same
JP6803919B2 (en) Flight path generation methods, flight path generation systems, flying objects, programs, and recording media
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
WO2017075964A1 (en) Unmanned aerial vehicle photographing control method, unmanned aerial vehicle photographing method, mobile terminal and unmanned aerial vehicle
CN111344644B (en) Techniques for motion-based automatic image capture
KR20170136750A (en) Electronic apparatus and operating method thereof
WO2019227441A1 (en) Video control method and device of movable platform
US20180275659A1 (en) Route generation apparatus, route control system and route generation method
JP2018160228A (en) Route generation device, route control system, and route generation method
CN112154649A (en) Aerial survey method, shooting control method, aircraft, terminal, system and storage medium
CN106586011A (en) Aligning method of aerial shooting unmanned aerial vehicle and aerial shooting unmanned aerial vehicle thereof
WO2021035744A1 (en) Image collection method for mobile platform, device and storage medium
CN112040126A (en) Shooting method, shooting device, electronic equipment and readable storage medium
WO2017015959A1 (en) Method, control device and control system for controlling mobile device to photograph
WO2019230604A1 (en) Inspection system
WO2019183789A1 (en) Method and apparatus for controlling unmanned aerial vehicle, and unmanned aerial vehicle
WO2021251441A1 (en) Method, system, and program
JP2015118213A (en) Image processing apparatus, imaging apparatus including the same, image processing method, program, and storage medium
WO2019104684A1 (en) Unmanned aerial vehicle control method, device and system
WO2018214401A1 (en) Mobile platform, flying object, support apparatus, portable terminal, method for assisting in photography, program and recording medium
CN113906481A (en) Imaging display method, remote control terminal, device, system and storage medium
CN112514366A (en) Image processing method, image processing apparatus, and image processing system
WO2023087272A1 (en) Image composition processing method and apparatus, system, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination