CN113841381B - Visual field determining method, visual field determining device, visual field determining system and medium


Info

Publication number
CN113841381B
CN113841381B (application no. CN202080035373.6A)
Authority
CN
China
Prior art keywords
view
field
image
sub
auxiliary
Prior art date
Legal status
Active
Application number
CN202080035373.6A
Other languages
Chinese (zh)
Other versions
CN113841381A (en)
Inventor
翁松伟 (Weng Songwei)
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN113841381A publication Critical patent/CN113841381A/en
Application granted granted Critical
Publication of CN113841381B publication Critical patent/CN113841381B/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

A field of view determination method, a field of view determination device, a field of view determination system, and a medium. The field of view determination method comprises: acquiring a first user instruction; determining, in response to the first user instruction, at least one base point from within an auxiliary field of view; and determining a desired field of view based on the at least one base point so as to obtain an image that matches the desired field of view. The application allows the user to determine at least one base point within the auxiliary field of view, so that the user's desired field of view can be determined automatically from the at least one base point.

Description

Visual field determining method, visual field determining device, visual field determining system and medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to a field of view determining method, a field of view determining device, a field of view determining system, and a medium.
Background
With the development of technology, shooting with a movable platform, such as aerial photography, has become increasingly popular with users. Unmanned aerial vehicle aerial photography is lower in cost and safer than manned aerial photography, and is therefore favored by photographers. Aerial photography with an unmanned aerial vehicle generally uses an image capturing device, such as a video camera or a still camera, mounted on the aircraft.
However, in the related art, when photographing with a movable platform, the user's desired field of view often cannot be conveniently determined through the image capturing apparatus because of limitations such as available space and the camera's field of view (FOV), which makes it inconvenient for the user to capture an image of the scene within the desired field of view.
Disclosure of Invention
In view of this, the embodiments of the present application provide a field of view determining method, a field of view determining device, a field of view determining system, and a medium, so that a user can determine his or her desired field of view through an image capturing device.
In a first aspect, an embodiment of the present application provides a field of view determining method, including: first, a first user instruction is obtained. Then, in response to a first user instruction, at least one base point is determined from within the auxiliary field of view. Next, a desired field of view is determined based on the at least one base point to obtain an image that matches the desired field of view.
In a second aspect, an embodiment of the present application provides a field of view determining apparatus, including: one or more processors; and a computer readable storage medium storing one or more computer programs that, when executed by the processor, implement: first, a first user instruction is obtained. Then, in response to a first user instruction, at least one base point is determined from within the auxiliary field of view. Next, a desired field of view is determined based on the at least one base point to obtain an image that matches the desired field of view.
In a third aspect, embodiments of the present application provide a field of view determination system comprising a control terminal and a movable platform in communication connection with each other, the movable platform being provided with an image capturing device, the system being configured such that: the control terminal acquires a first user instruction; the control terminal determines, in response to the first user instruction, at least one base point from within an auxiliary field of view, the auxiliary field of view being determined based on a field of view of the image capturing device on the movable platform; and the control terminal determines a desired field of view based on the at least one base point to obtain an image that matches the desired field of view.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by one or more processors, cause the one or more processors to perform a method as above.
In a fifth aspect, embodiments of the present application provide a computer program comprising executable instructions which, when executed, implement a method as above.
In this embodiment, the user may select base points in the auxiliary field of view; the desired field of view is then determined from those base points, and an image matching the desired field of view is obtained. By selecting base points, the size and coverage of the desired field of view can be planned freely, so that the size and coverage of the final image are fully consistent with the desired field of view. This avoids capturing unwanted picture content, avoids complex post-editing work, and gives the user the experience of having obtained the image in a single, instant shot.
Additional aspects of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The above and other objects, features and advantages of embodiments of the present application will become more readily apparent from the following detailed description with reference to the accompanying drawings. Embodiments of the application will now be described, by way of example and not limitation, in the figures of the accompanying drawings, in which:
FIG. 1 is an application scenario of a field of view determining method, a field of view determining device, a field of view determining system and a medium provided in an embodiment of the present application;
FIG. 2 is an application scenario of a field of view determination method, a field of view determination apparatus, a field of view determination system, and a medium according to another embodiment of the present application;
FIG. 3 is a flow chart of a field of view determination method according to an embodiment of the present application;
FIG. 4 is a schematic view of a camera field of view and an auxiliary field of view provided by an embodiment of the present application;
FIG. 5 is a schematic view of a camera field of view and an auxiliary field of view according to another embodiment of the present application;
FIG. 6 is a schematic view of a camera field of view and an auxiliary field of view according to another embodiment of the present application;
FIG. 7 is a schematic diagram of an auxiliary field of view, a base point, and a desired field of view provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an auxiliary field of view, a base point and a desired field of view provided by another embodiment of the present application;
FIG. 9 is a schematic diagram of an auxiliary field of view, a base point and a desired field of view provided by another embodiment of the present application;
FIG. 10 is a schematic diagram of setting base point setting information according to an embodiment of the present application;
FIG. 11 is a schematic diagram of cropping an initial image according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a decomposition of a desired field of view into a plurality of sub-acquisition fields of view provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of capturing images of a plurality of sub-captured fields of view, respectively, according to an embodiment of the present application;
FIG. 14 is a schematic diagram of image fusion according to an embodiment of the present application;
FIG. 15 is a panoramic image of a tall building provided by the prior art;
FIG. 16 is a schematic diagram of an image fusion process for the tall building shown in FIG. 15 according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a field of view determining apparatus according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a movable platform according to an embodiment of the present application;
FIG. 19 is a schematic view of a movable platform according to another embodiment of the present application; and
FIG. 20 is a schematic structural diagram of a movable platform according to another embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
Taking an aerial photography scene as an example, the embodiments of the present application are applicable to cases in which, owing to the limitations of space and the camera's field of view (FOV), a user (such as a drone operator) cannot capture the required image in a single shot, or would otherwise have to rely on a spherical panorama shooting mode. The embodiments allow the operator to take several shots with, for example, an unmanned aerial vehicle according to a user-defined FOV; the required image is then obtained after the shots are fused, and it is identical to the operator's desired field of view.
In this way, during aerial photography the unmanned aerial vehicle can automatically frame the shot according to the FOV required by the operator. This reduces later software editing time, reduces the influence of the region outside the FOV on the region inside the FOV, makes it easier for the operator to edit the image on site, shortens later Photoshop (PS) editing, allows image quality to be judged on the spot, and reduces the number of reshoots.
The above scenario is merely an example; the shooting scenario may equally involve a tall building, a landscape, a cultural relic, or the like, and the present application is not limited thereto.
To facilitate a better understanding of the embodiments of the present application, a brief description of how a movable platform photographs in the related art is first provided. When the unmanned aerial vehicle cannot capture the entire desired field of view (for example, when the desired field of view exceeds the maximum field of view of the camera at the current position and focal length), the related art controls the unmanned aerial vehicle to move away from the scene to be photographed so that the camera's field of view can contain the desired field of view. However, this cannot meet the user's photographing requirements when space is limited, or when the greater lens distance causes loss of fine detail in the picture, adverse lighting, or other effects. In addition, the composition desired by the user may contain an obstruction or the like, so that the picture content does not match expectations; prior-art panoramic images, for example, may include occlusions.
In order to facilitate understanding of the technical solution of the present application, a detailed description is given below with reference to fig. 1 to fig. 20.
Fig. 1 is an application scenario of a field of view determining method, a field of view determining device, a field of view determining system and a medium provided in an embodiment of the present application.
As shown in fig. 1, after a user determines the desired field of view of a shot, the related art can adjust the camera's field of view to the user's desired field of view by controlling the drone's pan-tilt head, the camera's focal length, and so on. In some scenes, however, the related art cannot satisfy the user's photographing needs. For example, when the user's desired field of view exceeds the camera's current maximum field of view, the drone is typically simply moved away from the object to be photographed to obtain a larger field of view, which may be impossible because of space limitations. As another example, when the user's desired field of view contains an unwanted obstruction (see the tree in the middle of fig. 1), the related art can fly the drone in front of the obstruction to shoot, but the camera's field of view then no longer matches the user's desired field of view, the composition of the captured image changes, and the user's requirement is not met. As another example, because of mechanical limitations of the drone, the cradle head, and so on, the camera cannot turn through 360°, so the shooting requirement cannot be satisfied when the desired field of view extends beyond the mechanical limit angle. As yet another example, when a user wants to take several images and then obtain the desired field of view by stitching them afterwards, the user must fly the drone to several positions guessed in advance, shoot at each, and then obtain the desired image through post-processing; this is constrained by, for example, the drone's flight-time limit and the suitability of the guessed shooting positions, and if the shooting process is interrupted the user may have to repeat the entire process or may be unable to complete it at all.
With the field of view determining method, the field of view determining device, the field of view determining system, and the medium provided by the present application, the desired field of view is determined from base points selected within an auxiliary field of view, and an image matching that desired field of view is then obtained. That is, by selecting base points, the size and coverage of the desired field of view can be planned arbitrarily, so that the size and coverage of the final image are fully consistent with the desired field of view; unwanted picture content is not captured, complex post-editing is avoided, and the user enjoys the experience of obtaining the image as if in a single shot. Specifically, the user may determine at least one base point within the auxiliary field of view, so that the user's desired field of view can be determined automatically from that base point. Each region of the auxiliary field of view, and each base point, corresponds to a pose of the lens (which may include position information, such as coordinates, and attitude information, such as angles). Once the user's desired field of view is determined, it can therefore be converted into shooting-sequence information (which may include lens pose information), and the drone and the pan-tilt head can control the pose of the lens according to that information so as to shoot in sequence. For example, when the user's desired field of view exceeds the camera's current maximum field of view and space limitations prevent obtaining a larger field of view by moving away from the object, the desired field of view may be divided into several sub-fields of view, and the drone and the pan-tilt head automatically adjust the camera pose to capture an image for each sub-field of view, which makes it straightforward to synthesize the image for the desired field of view. As another example, when an obstruction lies within the desired field of view, the user may determine at least one base point from the auxiliary field of view so that a desired field of view excluding the obstruction is determined automatically, and an image for that field of view is obtained through shooting, image processing, and the like. In short, after the user determines at least one base point from the auxiliary field of view, the flight controller, the drone, or the like can automatically determine the user's desired field of view and shoot according to the corresponding pose information, without the user having to fly the drone to each guessed position in turn; this effectively reduces shooting difficulty and improves the shooting result.
Fig. 2 is an application scenario of a field of view determining method, a field of view determining device, a field of view determining system and a medium according to another embodiment of the present application. As shown in fig. 2, the image capturing apparatus 14 mounted on the movable platform 10 will be described as an example.
The movable platform 10 in fig. 2 includes a body 11, a carrier 13, and an image capturing device 14. Although the movable platform 10 is described as an aircraft, this description is not limiting, and any type of movable platform described above is applicable (e.g., an unmanned aircraft). In some embodiments, the image capturing device 14 may be located directly on the movable platform 10 without the carrier 13. The movable platform 10 may include a power mechanism 15 and a sensing system 12, and may also include a communication system.
The power mechanism 15 may include one or more rotors, propellers, blades, engines, motors, wheels, bearings, magnets, or nozzles. For example, the rotor of the power mechanism may be a self-tightening rotor, a rotor assembly, or another rotary power unit. The movable platform may have one, two, three, four, or more power mechanisms. All of the power mechanisms may be of the same type; alternatively, one or more of the power mechanisms may be of a different type. The power mechanism 15 may be mounted on the movable platform by suitable means, such as through a support member (e.g., a drive shaft), and may be mounted at any suitable location on the movable platform 10, such as the top, bottom, front, rear, sides, or any combination thereof.
In some embodiments, the power mechanism 15 enables the movable platform 10 to take off vertically from a surface, or land vertically on a surface, without any horizontal movement of the movable platform 10 (e.g., without sliding along a runway). Alternatively, the power mechanism 15 may allow the movable platform 10 to hover at a preset position and/or orientation in the air. One or more of the power mechanisms 15 may be controlled independently of the others; alternatively, one or more of the power mechanisms 15 may be controlled simultaneously. For example, the movable platform 10 may have a plurality of horizontally oriented rotors to control its lift and/or thrust. The horizontally oriented rotors may be actuated to give the movable platform 10 the ability to take off vertically, land vertically, and hover. In some embodiments, one or more of the horizontally oriented rotors may rotate clockwise while one or more of the others rotate counter-clockwise; for example, the number of clockwise rotors may equal the number of counter-clockwise rotors. The rotational rate of each horizontally oriented rotor may be varied independently to adjust the lift and/or thrust produced by that rotor, and thereby adjust the spatial orientation, speed, and/or acceleration of the movable platform 10 (e.g., rotation and translation in up to three degrees of freedom each).
Sensing system 12 may include one or more sensors to sense surrounding obstacles and the spatial orientation, velocity, and/or acceleration of the movable platform 10 (e.g., rotation and translation in up to three degrees of freedom each). The one or more sensors include any of the sensors described above, including but not limited to ranging sensors, GPS sensors, motion sensors, inertial sensors, or image sensors. The sensed data provided by the sensing system 12 may be used to control the spatial orientation, speed, and/or acceleration of the movable platform 10. Alternatively, the sensing system 12 may be used to collect data about the environment of the movable platform 10, such as climate conditions, distances to surrounding obstacles, the location of geographic features, the location of man-made structures, and so on.
Carrier 13 may be any of a variety of support structures, including but not limited to a fixed bracket, a detachable bracket, or an attitude-adjustable structure, for mounting the image capturing device 14 on the body 11. For example, the carrier 13 may be a pan-tilt head and the image capturing device 14 a camera; the carrier then allows the camera to be displaced relative to the body 11 or rotated about one or more axes, for example permitting combined translational movement of the camera along one or more of a pitch axis, a yaw axis, and a roll axis. As another example, the carrier 13 may allow the camera to rotate about one or more of a pitch axis, a yaw axis, and a roll axis. The carrier 13 and the body 11 may have a coordinate transformation relationship, so that a first movement (e.g., translation or rotation) of the body 11 can be converted into a second movement of the carrier 13, and vice versa.
The communication system enables communication between the movable platform 10 and a control terminal 20 that also has a communication system, through wireless signals 30 transmitted and received by an antenna 22 arranged on the body 21 of the control terminal. The communication system may include any number of transmitters, receivers, and/or transceivers for wireless communication. The communication may be one-way, so that data is sent in only one direction; for example, only the movable platform 10 transmits data to the control terminal 20, or vice versa, with one or more transmitters of one communication system transmitting data to one or more receivers of the other. Alternatively, the communication may be two-way, so that data is transmitted in both directions between the movable platform 10 and the control terminal 20; in two-way communication, one or more transmitters of each communication system can transmit data to one or more receivers of the other, and vice versa.
In some embodiments, the control terminal 20 may provide control instructions to one or more of the movable platform 10, the carrier 13, and the image capturing device 14, and receive information from one or more of them (e.g., position and/or motion information of an obstacle, the movable platform 10, the carrier 13, or the image capturing device 14, or data sensed by the load, such as image data captured by a camera). In some embodiments, the control data of the control terminal 20 may include instructions regarding the position, motion, braking, or control of the movable platform 10, the carrier 13, and/or the image capturing device 14. For example, the control data may cause a change in the position and/or orientation of the movable platform 10 (e.g., by controlling the power mechanism 15), or cause the carrier 13 to move relative to the movable platform 10 (e.g., by controlling the carrier 13). The control data of the control terminal 20 may also control the load, for example controlling the operation of a camera or other image capturing device (capturing still or moving images, zooming, turning on or off, switching imaging modes, changing image resolution, changing focal length, changing depth of field, changing exposure time, or changing viewing angle or field of view). In some embodiments, the communication from the movable platform 10, the carrier 13, and/or the image capturing device 14 may include information from one or more sensors (e.g., a sensor of the sensing system 12 or the image sensor of the image capturing device 14), including sensed information transmitted from one or more different types of sensors such as a GPS sensor, a motion sensor, an inertial sensor, a proximity sensor, or an image sensor. The sensed information relates to the position (e.g., direction, location), motion, or acceleration of the movable platform 10, the carrier 13, and/or the image capturing device 14. The sensed information transmitted from the image capturing device 14 includes data captured by the image capturing device 14 or the status of the image capturing device 14. The control data provided by the control terminal 20 may be used to control the state of one or more of the movable platform 10, the carrier 13, and the image capturing device 14. Optionally, one or more of the carrier 13 and the image capturing device 14 may include a communication module for communicating with the control terminal 20, so that the control terminal 20 can communicate with or control the movable platform 10, the carrier 13, and the image capturing device 14 individually. The control terminal 20 may be a remote controller of the movable platform 10, or an intelligent electronic device that can be used to control the movable platform 10, such as a mobile phone, an iPad, or a wearable electronic device.
It should be noted that, the control terminal 20 may be remote from the movable platform 10 to realize remote control of the movable platform 10, and may be fixedly or detachably disposed on the movable platform 10, and may be specifically set as required.
In some embodiments, the movable platform 10 may communicate with remote devices other than, or in addition to, the control terminal 20, and the control terminal 20 may also communicate with another remote device as well as with the movable platform 10. For example, the movable platform 10 and/or the control terminal 20 may communicate with another movable platform, or with the carrier or load of another movable platform. The additional remote device may be a second terminal or another computing device (e.g., a computer, desktop, tablet, smartphone, or other mobile device) as desired. The remote device may transmit data to the movable platform 10, receive data from the movable platform 10, transmit data to the control terminal 20, and/or receive data from the control terminal 20. Alternatively, the remote device may be connected to the Internet or another telecommunications network so that data received from the movable platform 10 and/or the control terminal 20 can be uploaded to a website or server.
The movable platform 10 may be a land robot, an unmanned vehicle, a handheld cradle head, or the like, which is not limited herein.
Fig. 3 is a flow chart of a field of view determining method according to an embodiment of the present application. As shown in fig. 3, the field of view determination method may include operations S301 to S305.
In operation S301, a first user instruction is acquired.
In this embodiment, the first user instruction may be determined based on a user operation input by the user on the control terminal. For example, the control terminal is provided with a button, a shift lever and other components, and a user can input a first user instruction by operating the components. For another example, the control terminal may include a display screen, and the user may input the first user instruction through an interaction component (e.g., virtual key, joystick, etc.) displayed on the display screen.
In operation S303, at least one base point is determined from within the auxiliary field of view in response to the first user instruction.
The auxiliary field of view may be determined by the user. For example, if the user wishes to take a panoramic image of the building in front, the unmanned aerial vehicle may be flown to a position from which the upper-left corner of the building can be photographed, and the camera field of view at that position is taken as one auxiliary field of view. The unmanned aerial vehicle may then be flown to a position from which the lower-right corner of the building can be photographed, and the camera field of view at that position is taken as another auxiliary field of view. In addition, a virtual auxiliary field of view may be constructed based on these two auxiliary fields of view; the virtual auxiliary field of view may contain the user's desired field of view, so that the user can determine the desired field of view from within the auxiliary fields of view.
For example, the first user operation may be directed at a control terminal communicatively connected to the movable platform. The user inputs at least one of the following on the control terminal: selection information, base point coordinate input information, a specified operation (e.g., photographing), a target object, and parameters of the specified operation (e.g., focal length, aperture, exposure time). The control terminal may be an integrated device, for example a remote controller on which a processor, a memory, a display screen, and so on are disposed. The control terminal may also be a split device; for example, the remote controller and another electronic device may together form the control terminal, such as a remote controller interconnected with a smartphone. An application (APP) may be installed on the smartphone, through which operation instructions can be input and operating parameters can be set.
Further, the specified state instruction may be determined and input based on gesture recognition, motion sensing, voice recognition, and the like. For example, a user may control the position, attitude, orientation, or other aspects of the movable platform by tilting the control terminal; the tilt of the control terminal may be detected by one or more inertial sensors and converted into a corresponding motion command. As another example, the user may use the touch screen to adjust an operating parameter of the load (e.g., zoom), the attitude of the load (via the carrier), or other aspects of any object on the movable platform.
In addition, the movable platform may be a handheld carrier, and the control terminal may be an electronic device arranged on that handheld carrier. Taking the handheld cradle head as an example, a camera and/or a mobile phone may be mounted on it, and the specified state instruction may be input through a key or a touch screen on its handle. The above are merely illustrative examples of how the specified state instruction may be input.
In one embodiment, determining at least one base point from within the auxiliary field of view in response to the first user instruction may include the following. In response to a first user instruction, determining an auxiliary field of view; and determining at least one base point in the auxiliary field of view.
The auxiliary field of view may be determined by the user operating the movable platform (e.g., a drone) and/or the cradle head; for example, the user manually aims the lens at a region that is then used as the auxiliary field of view, or the auxiliary field of view is determined from several aimed regions. The auxiliary field of view may also be a preset field of view; for example, the current field of view of the camera is extended outward by a specific amount (e.g., to 1 time or 1.5 times its size, or by a certain region extending outward in a certain direction) to determine the auxiliary field of view. For the movable platform, the pose of the movable platform and/or the cradle head may be adjusted in response to a control instruction from the control terminal so that the lens is aimed at a certain region, which serves as the auxiliary field of view. Such control instructions include, for example: fly upward, fly upward by 10 meters, increase the pitch angle of the cradle head, increase the pitch angle by 5 degrees, and the like.
In one embodiment, the auxiliary field of view is larger than the current field of view of the image capturing device used to capture the image, which makes it easier for the user to acquire an image with a large field of view. For example, a larger auxiliary field of view may be obtained by zooming. The desired field of view may be smaller than the minimum field of view of the image capturing device or larger than its maximum field of view.
In one embodiment, the auxiliary field of view is greater than or equal to the desired field of view. For example, when a user wishes to capture an image of a desired field of view accurately, it is difficult to make the framed scene remain exactly identical to the desired field of view simply by moving the camera or adjusting the focal length. In this case, a field of view larger than the desired field of view can be used as the auxiliary field of view to reduce the risk of missing part of the scene; the user then determines the desired field of view from within the auxiliary field of view by selecting base points, so that subsequent shooting can be performed directly on the basis of the desired field of view determined by those base points, yielding an image that corresponds to, and remains consistent with, the user's desired field of view.
In one embodiment, the first user instruction includes at least one of a position adjustment instruction, an attitude adjustment instruction, a focus adjustment instruction, and a lens switching instruction of the movable platform. Accordingly, in response to the first user instruction, determining the auxiliary field of view may include: in response to a first user instruction, the current field of view is switched to the auxiliary field of view by at least one of adjusting a position and/or attitude of the movable platform, adjusting a focal length, and switching a lens. It should be noted that the first user instruction may further include an instruction for determining a base point from the auxiliary field of view, such as an instruction for clicking on a selected base point, an instruction for inputting base point information, and the like.
Fig. 4 is a schematic diagram of a camera field of view and an auxiliary field of view according to an embodiment of the present application.
As shown in fig. 4, the camera field of view 410 is smaller than the auxiliary field of view 400, which may be achieved by moving the camera away from the object to be photographed, adjusting the focal length, switching to a wide-angle lens, and so on. To help the user determine the desired field of view, an image corresponding to the auxiliary field of view may be presented on the control terminal; this makes it easier for the user to choose the needed desired field of view on the basis of the auxiliary field of view. The center point of the camera field of view 410 may or may not coincide with that of the auxiliary field of view 400.
Fig. 5 is a schematic view of a camera field of view and an auxiliary field of view according to another embodiment of the present application.
As shown in fig. 5, the size of the camera field of view 510 may be comparable to the size of the auxiliary field of view 500. For example, the user aims the lens at the upper-left corner of the field of view in fig. 5 by controlling the pan-tilt rotation, by controlling the motion of the drone, or by controlling both the drone's motion and the pan-tilt rotation. The principle for aiming at the lower-right corner of the auxiliary field of view is similar and is not repeated here.
In one embodiment, the auxiliary field of view comprises at least two sub-auxiliary fields of view.
Fig. 6 is a schematic view of a camera field of view and an auxiliary field of view according to another embodiment of the present application.
As shown in fig. 6, the auxiliary field of view 600 may include a plurality of sub-auxiliary fields of view 610, each of which may correspond to one camera field of view. The auxiliary field of view 600 may be formed by stitching together a plurality of sub-auxiliary fields of view 610 (for example, the camera field of view at a pose in which the user keeps the camera still for longer than a time threshold, or at poses explicitly designated by the user, is taken as a sub-auxiliary field of view). The auxiliary field of view 600 may also be the rectangular area spanned by the upper-left corner of the upper-left sub-auxiliary field of view and the lower-right corner of the lower-right sub-auxiliary field of view in fig. 6, which improves ease of use: the auxiliary field of view 600 can be determined from only two sub-auxiliary fields of view 610, as illustrated by the sketch below.
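For illustration only, the following Python sketch shows one way the rectangular auxiliary field of view described above could be derived from two sub-auxiliary fields of view; the Rect type, the function name, and the image-style coordinate convention (x to the right, y downward) are assumptions made for this example, not details of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float    # x of the left edge
    top: float     # y of the top edge (y grows downward)
    right: float   # x of the right edge
    bottom: float  # y of the bottom edge

def auxiliary_from_two_sub_fields(upper_left: Rect, lower_right: Rect) -> Rect:
    """Rectangle spanned by the upper-left corner of the first sub-auxiliary
    field of view and the lower-right corner of the second (cf. fig. 6)."""
    return Rect(left=upper_left.left, top=upper_left.top,
                right=lower_right.right, bottom=lower_right.bottom)

# Example: two camera views roughly covering opposite corners of the scene.
aux = auxiliary_from_two_sub_fields(Rect(0, 0, 40, 30), Rect(60, 50, 100, 80))
# aux now covers (0, 0) to (100, 80)
```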
For example, the first user instruction includes at least one of a position adjustment instruction for the pan-tilt head, an attitude adjustment instruction for the pan-tilt head, a position adjustment instruction for the movable platform, an attitude adjustment instruction for the movable platform, a focal length adjustment instruction, and a lens switching instruction. Accordingly, determining the auxiliary field of view in response to the first user instruction may include: in response to the first user instruction, sequentially switching the current field of view to each of at least two sub-auxiliary fields of view by at least one of adjusting the position and/or attitude of the pan-tilt head, adjusting the position and/or attitude of the movable platform, adjusting the focal length, and switching the lens, so as to determine the at least two sub-auxiliary fields of view. At least one base point is then determined from each sub-auxiliary field of view.
After the auxiliary field of view is determined, a base point may be determined by the user from the auxiliary field of view, which base point may characterize the boundary information of the desired field of view. For example, the user determines the base point by clicking on a point or inputting coordinates, or the like. In addition, the base point may be determined from the auxiliary field of view based on a preset rule, for example, the base point is an auxiliary field of view vertex, a center point, an edge midpoint, or any point or points in the auxiliary field of view.
In operation S305, a desired field of view is determined based on at least one base point to obtain an image that coincides with the desired field of view.
In this embodiment, the desired field of view may be automatically determined based on the base point. In one embodiment, determining the desired field of view based on the at least one base point may include the operations of: an image acquisition field of view is determined based on the at least one base point, the image acquisition region covered by the image acquisition field of view comprising the image acquisition region covered by the desired field of view so as to acquire an image coincident with the desired field of view.
For example, when there is only one base point, the desired field of view may be determined according to preset rules (e.g., a specified radius or a specified size) with that base point as its center or as a vertex. As another example, when there are two base points, the rectangular area they cover can be determined as the desired field of view from the coordinates of the base points. When there are three or more base points, adjacent base points may be connected to form the desired field of view; in addition, the desired field of view may be divided into a plurality of sub-desired fields of view according to user setting information. The desired field of view is greater than the minimum field of view of the image capturing device used to capture the image, the image capturing device comprising a lens. A sketch of these base-point cases follows.
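The following sketch (illustrative only) restates the base-point cases above in Python; the single-base-point rule shown here (centering a rectangle of a preset size) and the simplification of the three-or-more case to the bounding rectangle of the connected base points are assumptions made for the example.

```python
def desired_field_from_base_points(points, preset_size=(100.0, 60.0)):
    """points: list of (x, y) base points.
    Returns the desired field of view as (left, top, right, bottom)."""
    if len(points) == 1:
        # One base point: used here as the center of a rectangle of preset size
        # (a different preset rule, e.g. a specified radius, could be used).
        (cx, cy), (w, h) = points[0], preset_size
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    # Two base points: the rectangle covered by their coordinates.
    # Three or more: simplified here to the bounding rectangle of the polygon
    # obtained by connecting adjacent base points.
    return (min(xs), min(ys), max(xs), max(ys))

# Example: two base points at opposite corners.
print(desired_field_from_base_points([(10, 20), (90, 70)]))  # (10, 20, 90, 70)
```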
The field of view determining method provided by the embodiments of the present application can satisfy a user's shooting requirements for scenes of any size and any field of view coverage. It can also effectively simplify the shooting steps when the user takes several pictures to form a panoramic image, shorten post-editing time, and improve the user experience. In addition, it reduces the constraints imposed on the aerial camera by the three-dimensional space of the shooting site.
The determination of the sub-auxiliary field of view, base point and desired field of view is exemplified below.
In one embodiment, determining at least one base point in the auxiliary field of view may include the following operations. First, a third user instruction is acquired, and then at least one base point is determined in the auxiliary field of view in response to the third user instruction. The determining manner of the third user instruction may refer to the first user instruction, which is not described herein.
In addition, to facilitate user selection of a base point from the auxiliary field of view, the method may further include the following operations. Before the third user instruction is acquired, an image corresponding to the auxiliary field of view is output. Accordingly, obtaining the third user instruction includes: in the process of outputting the image corresponding to the auxiliary view field, user operation for the image corresponding to the auxiliary view field is acquired, and a third user instruction is determined based on the user operation.
For example, taking unmanned aerial vehicle aerial photography as an example, the method is applied to a control terminal, an image capturing device is arranged on a movable platform, the control terminal is used for controlling the movable platform, and the control terminal can acquire image information acquired by the image capturing device; correspondingly, the method further comprises the steps of: before acquiring a third user instruction, the control terminal acquires image information corresponding to the auxiliary view field; the control terminal displays image information corresponding to the auxiliary field of view. Accordingly, obtaining the third user instruction includes: and the control terminal acquires user operation aiming at the image information corresponding to the auxiliary view field in the process of displaying the image information corresponding to the auxiliary view field, and determines a third user instruction based on the user operation.
Fig. 7 is a schematic diagram of the auxiliary field of view, the base point and the desired field of view provided by an embodiment of the present application.
As shown in fig. 7, the camera field of view 710 is the field of view when the lens is in a certain pose, such as the current pose, or the pose the camera takes after the user has adjusted the attitude of the unmanned aerial vehicle and/or the pan-tilt. The auxiliary field of view 730 is set or preset by the user; for example, the user changes the focal length of the camera, switches to the wide-angle lens, or moves the camera away from the object to be photographed, so that the field of view is enlarged to obtain the auxiliary field of view 730, and the image within this field of view is sent to the control terminal for display. The user then selects the center point of the auxiliary field of view 730 as the base point 720. The auxiliary field of view 730 may be larger than the desired field of view 700 in order to ensure that the desired field of view 700 is fully covered.
In one embodiment, determining at least one base point in the auxiliary field of view may include determining at least one base point in the auxiliary field of view based on the second field of view preset position. Accordingly, the second field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
Referring to fig. 7, after the auxiliary field of view 730 is determined, if the user has previously set the center point as the base point, the user does not need to determine the base point through a click operation or the like; instead, the center point of the auxiliary field of view 730 is automatically taken as the base point 720. The preset rule may be, for example: when the distance between the object to be photographed and the camera is smaller than a first distance threshold, a vertex or edge midpoint near the object to be photographed is used as the base point; when the distance is greater than or equal to the first distance threshold, the center point or a designated position point of the auxiliary field of view 730 is used as the base point. A sketch of this rule follows.
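A minimal sketch of the preset rule just described, assuming an axis-aligned rectangle for the auxiliary field of view; the threshold value and the choice of "the edge midpoint near the object" are placeholders for illustration, not values taken from the application.

```python
def preset_base_point(aux_field, distance_to_object, first_distance_threshold=5.0):
    """aux_field: (left, top, right, bottom); distance in meters (assumed unit)."""
    left, top, right, bottom = aux_field
    if distance_to_object < first_distance_threshold:
        # Close object: pick a point on the edge assumed to face the object,
        # here the midpoint of the left edge.
        return (left, (top + bottom) / 2)
    # Otherwise: the center point of the auxiliary field of view.
    return ((left + right) / 2, (top + bottom) / 2)
```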
In one embodiment, if the auxiliary field of view includes a plurality of sub-auxiliary fields of view, determining at least one base point from each sub-auxiliary field of view may include the following operations: first, a second user instruction is acquired; at least one base point is then determined in the sub-auxiliary field of view in response to the second user instruction. The second user instruction may be generated by a user operation such as clicking, sliding, or pressing on an interactive component, a mechanical button, or the like. Alternatively, the base point may be determined automatically from the auxiliary field of view based on preset rules. The second user instruction may be determined in the same manner as the first user instruction, which is not repeated here.
For example, determining at least one base point from each sub-auxiliary field of view separately may include: at least one base point is determined in each sub-auxiliary field of view based on the first field of view preset position.
In one embodiment, the first field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
Fig. 8 is a schematic diagram of an auxiliary field of view, a base point and a desired field of view provided by another embodiment of the present application.
As shown in fig. 8, the auxiliary field of view 830 includes a plurality of sub-auxiliary fields of view 810. The user may determine at least one base point from each sub-auxiliary field of view 810, or several base points may be determined from preset positions. For example, when two sub-auxiliary fields of view 810 are included, the preset positions may be the closest points between the two sub-auxiliary fields of view 810: the base point 821 of the upper-left sub-auxiliary field of view 810 is its lower-right vertex, and the base point 822 of the lower-right sub-auxiliary field of view 810 is its upper-left vertex. In other embodiments, the preset position may also be the center point, an edge midpoint, a designated position point, or the like of the sub-auxiliary field of view 810. One sub-auxiliary field of view 810 may include two or more base points. After the base points are determined, the user's desired field of view 800 can be determined from their coordinates.
Fig. 9 is a schematic diagram of an auxiliary field of view, a base point and a desired field of view provided by another embodiment of the present application.
As shown in fig. 9, the auxiliary field of view includes at least four sub-auxiliary fields of view 910. The user may determine at least one base point 921, 922, 923, 924 from each sub-auxiliary field of view 910, or the base points may be determined from preset positions, for example by using the center point of each sub-auxiliary field of view 910 as a base point. Adjacent base points are then connected, so that the user's desired field of view 900 can be determined from the base-point coordinates. In other embodiments, the preset position may also be the center point, an edge midpoint, a designated position point, or the like of the sub-auxiliary field of view 910, and one sub-auxiliary field of view 910 may include two or more base points.
In another embodiment, the first user instruction includes base point setting information. Accordingly, in response to the first user instruction, determining at least one base point from within the auxiliary field of view may include: at least one base point is determined from within the auxiliary field of view based on the base point setting information. For example, the base point setting information includes at least one of direction information, angle value information, coordinate information, or a reference point.
Fig. 10 is a schematic diagram of setting base point setting information according to an embodiment of the present application.
As shown in fig. 10, the left side of fig. 10 is an information setting area, and the right side is a display area.
The X value and the Y value may be coordinate information that the user wishes to set; for example, (2, 2) represents a base point at (2, 2) with the point (0, 0) as the reference point. Alternatively, direction information and angle information may be used instead; for example, (upper left, 45°) designates as the base point the point where a ray cast at 45° toward the upper left from the reference point (0, 0) meets the field of view. In addition, the reference point may be selected by the user, for example the center point or a certain vertex. The various setting manners may be displayed in the display interface so that the user can conveniently choose among them; a sketch of both input styles follows.
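For illustration, the sketch below converts both input styles of fig. 10 into a base point. The coordinate convention (x to the right, y downward, angles measured from the positive x direction), the treatment of (direction, angle) as a ray intersected with the auxiliary-field boundary, and the function names are assumptions for this example.

```python
import math

def base_point_from_coordinates(reference, x, y):
    """(X value, Y value) interpreted as an offset from the reference point."""
    rx, ry = reference
    return (rx + x, ry + y)

def base_point_from_direction(reference, angle_deg, aux_field):
    """Cast a ray from the reference point (assumed to lie inside aux_field) at
    angle_deg and return where it exits the rectangular auxiliary field.
    With y growing downward, 135 degrees corresponds to 'upper left, 45 deg'."""
    rx, ry = reference
    left, top, right, bottom = aux_field
    dx = math.cos(math.radians(angle_deg))
    dy = -math.sin(math.radians(angle_deg))   # flip sign: image y grows downward
    ts = []
    if dx:
        ts += [(left - rx) / dx, (right - rx) / dx]
    if dy:
        ts += [(top - ry) / dy, (bottom - ry) / dy]
    t = min(t for t in ts if t > 0)           # nearest boundary hit along the ray
    return (rx + t * dx, ry + t * dy)

# Examples (reference point at the center of a 100 x 80 auxiliary field):
print(base_point_from_coordinates((50, 40), 2, 2))                # (52, 42)
print(base_point_from_direction((50, 40), 135, (0, 0, 100, 80)))  # (10.0, 0.0)
```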
In one embodiment, the method may further comprise the following operation: cropping the acquired image based on at least one base point to obtain an image that matches the desired field of view. When the camera is controlled by the unmanned aerial vehicle, the cradle head, and so on, it is difficult to guarantee that the boundary of a captured image exactly matches the boundary of the desired field of view; to provide redundancy, the boundary of the actual framing may therefore be made slightly larger than the desired area during shooting. For example, the actual framing boundary may approximate the boundary of the auxiliary field of view, or it may be smaller than the boundary of the auxiliary field of view and larger than the boundary of the desired field of view. This makes it easy to obtain an image corresponding to the actual framing, which contains an image conforming to the desired field of view. The user can acquire the image corresponding to the actual framing, which makes automatic processing, sharing, and the like convenient.
In addition, to obtain an image that matches the user's desired field of view, the image may be automatically image-processed by the image capture device (e.g., the image is first fused and then cropped according to the desired field of view).
It should be noted that the processing of the image includes, but is not limited to: at least one of image synthesis, image style (such as beach style, antique style, big head style, etc.), image rotation, image clipping, etc. is performed.
For example, cropping the acquired image based on at least one base point to obtain an image that matches the desired field of view may include the following operations. First, an initial image corresponding to the image acquisition field of view is acquired. Then, the initial image is cropped to obtain an image matching the desired field of view. Any of various cropping techniques in the related art may be used, which is not limited here.
For example, since the user's desired field of view is determined from the base points, which are known information, the image may be cropped based on those base points so that the cropped image matches the desired field of view. Specifically, cropping the initial image to obtain an image that matches the desired field of view may include: cropping the initial image based on at least one base point to obtain an image matching the desired field of view.
For example, when the desired field of view is determined by two base points, with base point 1 at the upper left having coordinates (X1, Y1) and base point 2 at the lower right having coordinates (X2, Y2), the base points are first mapped onto the synthesized image during cropping, giving coordinates (X1', Y1') and (X2', Y2'). All columns of the image with coordinates smaller than X1', all rows with coordinates larger than Y1', all columns with coordinates larger than X2', and all rows with coordinates smaller than Y2' are then deleted. A sketch of this crop follows.
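As an illustration only, the following sketch performs the crop described above on a NumPy image array; it assumes the usual image convention in which the row index grows downward, so the rows discarded above the upper-left base point are those with indices smaller than y1' (the same deletion rule as in the preceding paragraph, restated for that row direction).

```python
import numpy as np

def crop_to_desired_field(image: np.ndarray, x1p: int, y1p: int,
                          x2p: int, y2p: int) -> np.ndarray:
    """Keep only the rectangle between the mapped base points.

    (x1p, y1p): upper-left base point mapped onto the synthesized image.
    (x2p, y2p): lower-right base point mapped onto the synthesized image.
    Pixel coordinates with the origin at the top-left corner are assumed.
    """
    # Discard columns left of x1p and right of x2p, and rows above y1p
    # and below y2p (the image-array equivalent of the deletion rule above).
    return image[y1p:y2p + 1, x1p:x2p + 1]

# Example: crop a 600 x 800 synthesized image to the mapped base points.
synthesized = np.zeros((600, 800, 3), dtype=np.uint8)
cropped = crop_to_desired_field(synthesized, 100, 50, 699, 549)
print(cropped.shape)  # (500, 600, 3)
```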
In some embodiments, after the desired field of view is determined, shooting may be performed directly according to it: a shooting trajectory is planned along the boundary of the desired field of view so that every exposure lies within the desired field of view and no area outside it is included, and the initial image corresponding to the image acquisition field of view is then already an image matching the desired field of view. In this case, the initial image need not be cropped to obtain an image matching the desired field of view.
Fig. 11 is a schematic diagram of cropping an initial image according to an embodiment of the present application.
As shown in fig. 11, the outer dashed frame is the image corresponding to the image acquisition field of view, which contains the desired field of view (shown by the inner dashed frame) formed by a plurality of base points. To ensure that no image content is missed and to improve redundancy, the area of the image acquisition field of view is larger than the area of the desired field of view. After the image corresponding to the image acquisition field of view is obtained, it can be cropped based on the base points to obtain an image that matches the desired field of view of the user.
In one embodiment, acquiring an initial image corresponding to the image acquisition field of view may include the following operations. First, the image acquisition field of view is decomposed based on a preset field of view to obtain a plurality of sub-acquisition fields of view. Then, the image capturing device is controlled to acquire images under the plurality of sub-acquisition fields of view, respectively, to obtain a plurality of sub-images. Finally, an initial image corresponding to the image acquisition field of view is synthesized based on the plurality of sub-images.
Decomposing the image acquisition field of view based on the preset field of view to obtain the plurality of sub-acquisition fields of view may include: first, determining a first image acquisition region corresponding to the image acquisition field of view and a second image acquisition region corresponding to the preset field of view; then, decomposing the first image acquisition region at least based on the second image acquisition region to obtain a plurality of sub-image acquisition regions. The second image acquisition region may be the image acquisition region under a specified focal length. For example, if the user desires a high-definition image, a long focal length may be selected so that high-definition sub-images are obtained and then synthesized into a high-definition image. For another example, if the user wants to complete the shooting quickly, a short focal length (larger field of view) may be selected to reduce the number of shots and increase the shooting speed.
Correspondingly, controlling the image capturing device to acquire images under the plurality of sub-acquisition fields of view, respectively, to obtain a plurality of sub-images may include: controlling the image capturing device to respectively acquire a plurality of sub-images corresponding to the plurality of sub-image acquisition regions. Each sub-image acquisition region may have corresponding pose information and shooting information. The pose information may be, for example, position information of the unmanned aerial vehicle and/or the cradle head, and attitude information of at least one of the unmanned aerial vehicle, the cradle head, and the camera. The shooting information may include, for example, focal length information, exposure duration, and sensitivity information. The camera can thus be driven to acquire the sub-image of each sub-image acquisition region in turn.
In one embodiment, to reduce the occurrence of missed shots, decomposing the first image acquisition region based at least on the second image acquisition region to obtain the plurality of sub-image acquisition regions includes: decomposing the first image acquisition region based on the second image acquisition region and a region overlap ratio to obtain the plurality of sub-image acquisition regions.
For example, the region overlap ratio is determined based on a user operation or a preset overlap ratio. The overlap ratios between different sub-image acquisition regions may be the same or different; for example, a higher overlap ratio may be used for regions of greater interest (such as regions containing more points of focus). The region overlap ratio can be decomposed into a horizontal overlap ratio and a vertical overlap ratio.
In one embodiment, to facilitate the decomposition of the first image acquisition region, the decomposition of the first image acquisition region based on the second image acquisition region and the region overlapping proportion may include the following operations.
First, a first number of sub-image acquisition regions included in a length direction is determined based on a length of the second image acquisition region, a length of the first image acquisition region, and a length overlapping ratio.
Then, a second number of sub-image capturing areas included in the width direction is determined based on the width of the second image capturing area, the width of the first image capturing area, and the width overlapping ratio.
Then, the number of the sub-image acquisition areas is determined based on the first number and the second number to decompose the first image acquisition area.
In this way, the number of images to be shot can be determined, so that the images can be fused to obtain an image that matches the desired field of view; a sketch of this calculation is given below.
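The sketch below illustrates one way the first number, the second number, and the total might be computed; the helper names, the rounding-up choice, and the example values are illustrative assumptions rather than a prescribed implementation.

```python
import math

def count_sub_regions(total_len: float, sub_len: float, overlap_ratio: float) -> int:
    """Number of sub-acquisition regions needed along one direction.

    Each adjacent pair of sub-regions overlaps by `overlap_ratio` of the sub-region
    length, so the effective step between sub-regions is sub_len * (1 - overlap_ratio).
    """
    if total_len <= sub_len:
        return 1
    step = sub_len * (1.0 - overlap_ratio)
    return 1 + math.ceil((total_len - sub_len) / step)

def decompose_acquisition_region(total_w, total_h, sub_w, sub_h, overlap_w=0.5, overlap_h=0.5):
    """Return (first number, second number, total) of sub-image acquisition regions."""
    cols = count_sub_regions(total_w, sub_w, overlap_w)   # first number (length direction)
    rows = count_sub_regions(total_h, sub_h, overlap_h)   # second number (width direction)
    return cols, rows, cols * rows

# Example: a first region of 150 x 90 covered by a 60 x 40 second region at 50% overlap.
print(decompose_acquisition_region(150, 90, 60, 40))  # -> (4, 4, 16) with these example values
```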
FIG. 12 is a schematic diagram of a decomposition of a desired field of view into a plurality of sub-acquisition fields of view provided by an embodiment of the present application.
As shown in fig. 12, it may be determined, based on the size of the sub-acquisition field of view and the size of the desired field of view 1200, that the desired field of view 1200 needs to be divided into 3×3 = 9 sub-acquisition fields of view 1210. In addition, to reduce the risk of missed image content, a certain overlap ratio needs to be set between adjacent sub-image acquisition regions 1210. Furthermore, areas outside the desired field of view 1200 need to be covered.
In a specific embodiment, first, a two-dimensional coordinate system may be displayed on the display screen of the control terminal, with the center point of the camera's current FOV as the origin; the x and y axes are laid out along the transverse and longitudinal directions, and the scales are calibrated.
Then, the user manipulates the unmanned aerial vehicle to move the camera field of view and, referring to fig. 8, selects a base point 821 anywhere in the sub auxiliary field of view 810 in the upper left corner; the program records the coordinate-system value (x1, y1) of this first base point.
Next, the user manipulates the unmanned aerial vehicle to move the camera field of view again and, referring to fig. 8, selects a base point 822 anywhere in the sub auxiliary field of view 810 in the lower right corner; the program records the coordinate-system value (x2, y2) of this second base point.
Then, the built-in program calculates the length and width of the view angle desired by the user from the coordinates of the two base points, and combines them with the sub-acquisition field of view (for example, the length and width of the camera's field of view at the specified focal length) to calculate the number of samples to be photographed. In addition, the overlap ratio of the sub-acquisition fields of view can be set, for example estimated based on a 50% overlap.
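By way of example, the length and width of the desired view angle could be derived from the two recorded base points as in the following sketch; the coordinate values used are hypothetical, and the shot count would then be planned as in the earlier sketch.

```python
def desired_view_size(p1: tuple, p2: tuple) -> tuple:
    """Length and width of the user's desired view angle from two recorded base points.

    p1 and p2 are the (x, y) coordinate-system values of the two base points, expressed
    as offset angles (degrees) from the origin at the initial camera FOV centre.
    """
    (x1, y1), (x2, y2) = p1, p2
    return abs(x2 - x1), abs(y1 - y2)

# Base point 1 in the upper-left sub auxiliary field, base point 2 in the lower-right one.
length, width = desired_view_size((-40.0, 25.0), (40.0, -25.0))
print(length, width)  # 80.0 x 50.0 degrees; shots are then planned as in the earlier sketch
```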
The image acquisition process is illustrated below after determining the sub-acquisition field of view.
In one embodiment, each of the plurality of sub-image acquisition regions has corresponding shooting pose information, which facilitates image acquisition for each sub-image acquisition region. The shooting pose information can be determined through calculation, for example based on a linkage conversion between the fuselage of the unmanned aerial vehicle and the pan-tilt.
Specifically, the shooting pose information is determined based on an image acquisition region corresponding to the current shooting pose and a plurality of sub-image acquisition regions, and is used for sequentially switching the image capturing device from the current image acquisition region to the plurality of sub-image acquisition regions so as to acquire images under the sub-image acquisition regions.
For example, shooting pose information includes: at least one of pitch angle information, yaw angle information and roll angle information. The shooting pose information can be formed by pose information of at least one of an unmanned aerial vehicle, a cradle head and a camera. Alternatively, the photographing pose information may include: pitch angle information and/or yaw angle information.
In one embodiment, when the image capturing device is disposed on a cradle head and the cradle head is disposed on a movable platform: the pitch angle information includes at least one of movable platform pitch angle information and pan-tilt pitch angle information, and/or the yaw angle information includes at least one of movable platform yaw angle information and pan-tilt yaw angle information. In this way, the lens can be aligned to a given sub-image acquisition region by adjusting the pose of the cradle head and/or the fuselage. It should be noted that there is a correspondence between the sub-image acquisition region and the sub-acquisition field of view: the sub-acquisition field of view may be the actual field of view of the lens, and the sub-image acquisition region may be the image acquisition region displayed on the control terminal.
In order to increase the shooting speed, when the pose needs to be adjusted, the component that can be adjusted faster may be adjusted preferentially. If the adjustment speeds are close, the component with lower energy consumption may be adjusted preferentially. For example, the adjustment priority of the pan-tilt pitch angle in the pitch angle information is greater than or equal to the adjustment priority of the movable platform pitch angle, and/or the adjustment priority of the pan-tilt yaw angle in the yaw angle information is greater than or equal to the adjustment priority of the movable platform yaw angle.
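One way such a priority rule might be expressed is sketched below; the "pan-tilt first, then platform" split, the limit values, and the function name are assumptions for illustration, not a prescribed control law.

```python
def split_adjustment(required_deg: float, gimbal_limit_deg: float) -> tuple:
    """Split a required angular adjustment between the pan-tilt and the movable platform.

    The pan-tilt is adjusted first, up to its mechanical limit; any remaining angle
    is assigned to the movable platform.
    """
    gimbal_part = max(-gimbal_limit_deg, min(gimbal_limit_deg, required_deg))
    platform_part = required_deg - gimbal_part
    return gimbal_part, platform_part

# Aligning the lens to the next sub-acquisition field: 35 deg of pitch is needed,
# but the pan-tilt can only pitch +/-30 deg, so the platform handles the remaining 5 deg.
print(split_adjustment(35.0, 30.0))   # -> (30.0, 5.0)
print(split_adjustment(-20.0, 30.0))  # -> (-20.0, 0.0)
```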
In addition, the shooting pose information may further include: position information. The location information may be determined from location information of the drone and/or the cradle head. The location information may include: at least one of the horizontal position, the altitude, for example, may be expressed by an X value, a Y value, and a Z value in the world coordinate system.
Specifically, the shooting pose information may further include position information, and the position information may include height information, which represents displacement information of the movable platform in the vertical direction; the image capturing device is disposed on the movable platform. For example, if the current height of the lens is 2 meters and the height corresponding to the sub-acquisition field of view is 3 meters, the unmanned aerial vehicle can be controlled to ascend by 1 meter, or the pan-tilt can be controlled to rotate upwards by a certain angle, or a combination of the two can be used, so that the lens is aligned to the sub-acquisition field of view. In addition, the position information may also include a horizontal position.
In one embodiment, when the image capturing apparatus is disposed on the cradle head and the cradle head is disposed on the movable platform, the movable platform is in any one of a hover state, a vertical lift state, or a horizontal movement state while acquiring an image conforming to the desired field of view. The hovering state means that the height and the horizontal position of the unmanned aerial vehicle remain essentially unchanged, or change only slightly within an allowable range; in the hovering state, however, the attitude of the unmanned aerial vehicle can change, and the attitude of the cradle head on the unmanned aerial vehicle can change independently or together with the attitude of the unmanned aerial vehicle.
Fig. 13 is a schematic diagram of capturing images of a plurality of sub-captured fields of view, respectively, according to an embodiment of the present application.
As shown in fig. 13, the camera 1320 needs to perform image acquisition for the 6 sub-acquisition fields of view 1310 separately. Wherein, the two sub-acquisition fields 1310 at the same height can be aligned by adjusting yaw angle information of the cradle head and/or the unmanned aerial vehicle, respectively, so as to perform image acquisition. However, in some situations, such as limited by the mechanical angle of the pan-tilt, it is inconvenient to align sub-acquisition fields of view with different heights by adjusting the pitch angle of the pan-tilt, and at this time, the embodiments of the present application may align each sub-acquisition field of view by adjusting the height of the unmanned aerial vehicle and/or the posture of the pan-tilt, so as to perform image acquisition.
The following exemplifies the manner of calculating the coordinates of the base point.
Taking an unmanned aerial vehicle carrying a cradle head as an example, for a scene in which the position of the unmanned aerial vehicle is unchanged and only the heading of the aircraft and the orientation of the cradle head are rotated, with offset angles used as the coordinate system, a base point can be expressed as: (X, Y) = (unmanned aerial vehicle horizontal offset angle + cradle head horizontal offset angle, cradle head pitch offset angle).
For a scene in which the position of the unmanned aerial vehicle changes (e.g., it translates and/or ascends or descends), the orientation offset needs to be coupled with the position offset: the two coordinate systems are coupled, and each base point then corresponds to matrix data.
For example, the matrix may be expressed as: [unmanned aerial vehicle horizontal offset, unmanned aerial vehicle height offset; unmanned aerial vehicle horizontal offset angle + cradle head horizontal offset angle, cradle head pitch offset angle].
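For illustration, such a coupled base-point record might be represented as follows; the field names, units, and example values are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class BasePointRecord:
    """Coupled position/orientation record for one base point.

    Orientation row: (UAV horizontal offset angle + pan-tilt horizontal offset angle,
                      pan-tilt pitch offset angle), in degrees.
    Position row:    (UAV horizontal offset, UAV height offset), in metres.
    """
    uav_yaw_offset: float
    gimbal_yaw_offset: float
    gimbal_pitch_offset: float
    uav_horizontal_offset: float = 0.0
    uav_height_offset: float = 0.0

    def as_matrix(self):
        # Matches the bracketed matrix form given above: position row, then orientation row.
        return [
            [self.uav_horizontal_offset, self.uav_height_offset],
            [self.uav_yaw_offset + self.gimbal_yaw_offset, self.gimbal_pitch_offset],
        ]

# Fixed-position scene: only the orientation row is non-zero.
p = BasePointRecord(uav_yaw_offset=12.0, gimbal_yaw_offset=3.0, gimbal_pitch_offset=-8.0)
print(p.as_matrix())  # [[0.0, 0.0], [15.0, -8.0]]
```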
In one embodiment, the images acquired for each sub-acquisition field of view may be processed in the following manner. For example, synthesizing an initial image corresponding to the image acquisition field of view based on the plurality of sub-images may include: synthesizing the plurality of sub-images in a feathering manner to obtain the initial image corresponding to the image acquisition field of view. Feathering softens the junction between the inside and outside of the selected region, producing a gradual transition so that the seam appears natural; a minimal sketch is given below.
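The following sketch blends the overlap of two horizontally adjacent sub-images with a linear alpha ramp; the ramp shape, the horizontal-only layout, and the array shapes are assumptions for illustration rather than the exact feathering used by the embodiment.

```python
import numpy as np

def feather_blend_horizontal(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Blend two horizontally adjacent sub-images whose last/first `overlap` columns coincide.

    A linear alpha ramp across the overlap makes the junction gradual ("feathered")
    instead of a hard seam. Both inputs are float arrays of shape (H, W, C).
    """
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]   # 1 -> 0 across the overlap
    blended = alpha * left[:, -overlap:, :] + (1.0 - alpha) * right[:, :overlap, :]
    return np.concatenate([left[:, :-overlap, :], blended, right[:, overlap:, :]], axis=1)

# Two 400x600 sub-images with a 200-column overlap produce a 1000-column strip.
a = np.random.rand(400, 600, 3)
b = np.random.rand(400, 600, 3)
print(feather_blend_horizontal(a, b, 200).shape)  # (400, 1000, 3)
```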
In order to improve image processing efficiency without noticeably reducing the quality of the synthesized image, different image fusion regions may consume different amounts of computing resources during fusion. For example, fewer computing resources may be allocated to regions such as the sky or distant mountains, and more computing resources may be allocated to objects of interest to the user (e.g., people, landmark buildings, animals).
Fig. 14 is a schematic diagram of image fusion according to an embodiment of the present application.
As shown in fig. 14, two sub-images obtained by acquiring two sub-acquisition fields of view are stitched to obtain an image matching the desired field of view. Alignment with each of the two sub-acquisition fields of view may be achieved by at least one of moving the movable platform, rotating the movable platform, or rotating the pan-tilt.
Fig. 15 is a panoramic image of a high building provided in the prior art.
As shown in fig. 15, in the related art, because the building is tall, it is difficult to obtain an image of the building alone. For example, a user may take a photograph with a wide-angle lens, but because the shot is taken from the ground, the distortion at the top of the building is severe and the image may include too much non-building content. If aerial photography is used, limited by mechanical angles and the like, the user can only set several shooting points, take photos multiple times, and then obtain a panorama of the building through image processing such as PS; however, the unmanned aerial vehicle has limited battery power, and if the user is not skilled enough or the process is interrupted, the whole shooting is difficult to complete.
Fig. 16 is a schematic diagram of an image fusion process for the high building shown in fig. 15 according to an embodiment of the present application.
As shown in fig. 16, in the embodiment of the present application, after the user sets a plurality of endpoints of the building in the auxiliary field of view as base points according to the composition (for example, a top point and a bottom point, shown as two black points in the left diagram of fig. 16), the desired field of view of the user can be generated automatically. The desired field of view is then divided based on the focal length desired by the user and the like to obtain a plurality of sub-acquisition fields of view, and shooting pose information is determined automatically based on a fuselage and pan-tilt linkage conversion algorithm, so that images of each sub-acquisition field of view are acquired respectively, yielding the three middle images shown in fig. 16. The panorama can then be synthesized from these three images and automatically cropped based on the base points to obtain a building image matching the desired field of view, as shown in the right diagram of fig. 16.
In one embodiment, to facilitate viewing of an image corresponding to a desired field of view by a user, the method may further comprise the operations of: after an image conforming to the desired field of view is acquired, an image conforming to the desired field of view is output. For example, the operation of synthesizing the image may be implemented on the control terminal, and may be displayed by a display of the control terminal. For example, the operation of synthesizing the images may be implemented on the drone, and the synthesized images may be sent by the drone to a control terminal or other terminal device communicatively coupled to the drone for viewing by a user.
In one embodiment, a first mapping relationship exists between the first user instruction and a preset mode. Accordingly, the method may further include: after the first user instruction is acquired, entering the preset mode in response to the first user instruction. This can further improve the convenience with which the user acquires an image corresponding to the desired field of view. Entering the preset mode may mean that the movable platform and the cradle head enter a specified state, or that the preset mode is entered via the control terminal so that the movable platform and the cradle head enter the specified state.
The image capturing device for capturing images is arranged on the cradle head, and the cradle head is arranged on the movable platform. Correspondingly, in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state, or a horizontal moving state, and the cradle head is in a locked state or can rotate around at least one axis. The axis may include at least one of a heading (yaw) axis, a pitch axis, and a roll axis.
In one embodiment, a certain period of time is required for the movable platform, the cradle head, and the like to reach a steady state after entering the specified state. The photographing process may therefore be started only after a specified period of time has elapsed since the first user instruction was acquired.
In one embodiment, an image capturing device for capturing images is disposed on a cradle head, and the cradle head is disposed on a movable platform.
Correspondingly, in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state, or a horizontal moving state, and the cradle head is in a locked state or can rotate around at least one axis.
In a specific embodiment, the application (APP) of the control terminal has preset modes (e.g., including a normal mode and a non-regular mode), and a preset mode is entered by operating the interaction component corresponding to it (e.g., clicking a mode button). In this mode, the 3A parameters (set automatically or manually, e.g., exposure parameters), the storage mode (e.g., storing in a memory card of the unmanned aerial vehicle or sending to the control terminal for storage), and the like may be configured.
For the normal mode, the unmanned aerial vehicle can enter a hover-locked, low-speed rotation state: it can only slowly rotate about the heading axis (YAW, i.e., the horizontal steering axis) and rotate the cradle head.
Then, the APP presents a two-dimensional dotting schematic diagram to the user; the user controls the fuselage YAW and the cradle head through the remote controller and sets nodes on the projected two-dimensional coordinate diagram, and at least one base point needs to be determined. Taking two base points as an example, the control terminal takes the position offsets farthest from the origin of the coordinate axes in the four quadrants, draws a composition quadrangle, and determines the sample points to be photographed during the movement (taking a 50% overlap as an estimate, converted based on the quadrangle and the FOV of the lens).
Then, the APP prompts the user with the number of samples to be photographed and the estimated time; after the user clicks to confirm, the unmanned aerial vehicle starts the operation.
Then, after photographing is completed, the camera module recognizes the photographed pictures and stitches them into the final panorama by recognizing the seam lines between the pictures (seam-line search methods include a point-by-point search method, a dynamic programming (DP) method, a graph-cut method, etc.). The panorama can then be output to the user.
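As an illustration of the dynamic-programming option mentioned above, the following sketch finds a minimum-cost top-to-bottom seam over the overlap of two registered pictures; the cost definition, array shapes, and function name are assumptions for the sketch, not the exact pipeline used by the embodiment.

```python
import numpy as np

def dp_seam(cost: np.ndarray) -> np.ndarray:
    """Find a top-to-bottom seam of minimum accumulated cost by dynamic programming.

    `cost` is an (H, W) array, e.g. the squared difference between two registered
    sub-images over their overlap region. Returns one column index per row.
    """
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]    # predecessor one column to the left
        right = np.r_[acc[y - 1, 1:], np.inf]    # predecessor one column to the right
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):               # backtrack within a +/-1 column window
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

# Overlap strips of two neighbouring pictures; stitch along the low-difference seam.
overlap_a = np.random.rand(300, 80, 3)
overlap_b = np.random.rand(300, 80, 3)
seam = dp_seam(((overlap_a - overlap_b) ** 2).sum(axis=2))
print(seam.shape)  # (300,)
```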
For the non-regular mode, the application (APP) of the control terminal is provided with an interaction component for the non-regular mode, and the non-regular mode is entered after the corresponding interaction component is operated (e.g., by clicking its mode button). In this mode, the 3A parameters (set automatically or manually, e.g., exposure parameters), the storage mode (e.g., storing in a memory card of the unmanned aerial vehicle or sending to the control terminal for storage), and the like may be configured.
In the non-regular mode, the user can freely set the movement mode, such as longitudinal and/or transverse movement, and whether the cradle head needs to be locked (to ensure the safety of the unmanned aerial vehicle, the cradle head, and the image capturing device, a linkage conversion scheme is needed between the fuselage and the cradle head).
Then, the APP presents a two-dimensional dotting schematic diagram to the user; the user controls the fuselage YAW and the cradle head through the remote controller and sets nodes on the projected two-dimensional coordinate diagram, and at least one base point needs to be determined (each base point may be annotated with information such as the cradle head angle and the heading of the unmanned aerial vehicle). Taking two base points as an example, the control terminal takes the position offsets farthest from the origin of the coordinate axes in the four quadrants, draws a composition quadrangle, and determines the sample points to be photographed during the movement (taking a 50% overlap as an estimate, converted based on the quadrangle and the FOV of the lens).
Then, the APP prompts the user with the number of samples to be photographed and the estimated time; after the user clicks to confirm, the unmanned aerial vehicle starts the operation.
Then, after photographing is completed, the camera module recognizes the photographed pictures and stitches them into the final panorama by recognizing the seam lines between the pictures (seam-line search methods include a point-by-point search method, a dynamic programming (DP) method, a graph-cut method, etc.). The panorama can then be output to the user.
The embodiments of the present application provide multiple modes for the user to choose from, so that the user can select the mode suitable for himself according to his own needs and his level of skill in operating the unmanned aerial vehicle.
The following describes an example of the execution subject of each operation described above, taking the unmanned aerial vehicle and its control terminal as an example.
The first user instruction may be determined based on a user operation input by a user on the control terminal.
The determining of the at least one base point from within the auxiliary field of view may be performed by the control terminal.
The determining of the desired field of view based on the at least one base point may be performed by the control terminal.
The determining of the at least one base point in the auxiliary field of view may be performed by the control terminal.
The linkage conversion between the unmanned aerial vehicle fuselage and the cradle head may be executed by at least one of the image capturing device, the control terminal, or the cradle head.
Image synthesis and image processing (e.g., cropping) may be performed by the image capture device.
It should be noted that the foregoing execution subjects of the operations are only exemplary and are not to be construed as limiting the present application; the operations may be completed independently by one of the movable platform, the control terminal, the image capturing device, and the cradle head, or implemented by several of them in cooperation. For example, in the case where the movable platform is a land robot, a man-machine interaction module (e.g., including a display for presenting a man-machine interaction interface) may be disposed on the land robot, and user operations may be obtained directly on the interaction interface presented by the movable platform to generate user instructions, determine the auxiliary field of view, determine the base points, determine the desired field of view, and so on. Here, completing an operation independently includes actively or passively, directly or indirectly, obtaining the corresponding data from other devices in order to perform the corresponding operation.
The field of view determining method provided by the embodiments of the present application can meet the user's shooting requirements for scenes of any size and any field-of-view range in various complex scenarios, so that the composition of the captured image better matches the composition expected by the user. In addition, when a user shoots a plurality of pictures and synthesizes a panoramic image, the method can effectively simplify the shooting steps, reduce the post-editing time, and improve the user experience: once the user sets the base points, an image matching the user's desired field of view can be obtained without many further operations. The method can also effectively improve the differentiated competitiveness of aerial photographing equipment and meet users' multi-scenario needs. Moreover, since one large field of view can be decomposed into a plurality of small fields of view that are shot separately, the constraints imposed by the three-dimensional space of the shooting site on the aerial camera can be effectively reduced based on the fuselage and pan-tilt linkage algorithm.
In practical applications, the method described above may also be applied to determining the sensing range of a load other than an image capturing apparatus, so that the sensing range of the load can be planned arbitrarily by selecting base points and sensing data corresponding to a desired sensing range can be obtained. Such loads include, but are not limited to, an audio acquisition device, a ranging device, and the like.
Fig. 17 is a schematic structural diagram of a field of view determining apparatus according to an embodiment of the present application.
As shown in fig. 17, the field of view determining apparatus 1600 may include one or more processors 1610, which may be integrated into one processing unit or provided separately in a plurality of processing units, and a computer-readable storage medium 1620 for storing one or more computer programs 1621 which, when executed by the processor, implement the field of view determination method described above, for example: acquiring a first user instruction; determining at least one base point from within the auxiliary field of view in response to the first user instruction; and determining a desired field of view based on the at least one base point to obtain an image that matches the desired field of view.
The field of view determining apparatus 1600 may be provided in one execution subject or distributed across a plurality of execution subjects. For example, in a scenario such as a land robot that can implement local control, the field of view determining apparatus 1600 may be disposed in the land robot, with a cradle head disposed on the land robot, a camera disposed on the cradle head, and a display screen disposed on the body of the land robot to facilitate interaction with the user. For another example, in a scenario in which a movable platform is controlled using a non-native control terminal, at least part of the field of view determining apparatus 1600 may be provided in the control terminal, for example the functions that accept user operations. At least part of the field of view determining apparatus 1600 may be provided in the movable platform, for example at least one of an information transfer function, an environmental information sensing function, and a coordinated control function. Further, at least part of the field of view determining apparatus 1600 may be provided in the image capturing apparatus, for example an image synthesizing function, an image cropping function, or the like.
For example, the processing unit may include a Field programmable gate array (Field-Programmable Gate Array, FPGA) or one or more ARM processors. The processing unit may be coupled to a non-volatile computer-readable storage medium 1620. The non-transitory computer readable storage medium 1620 may store logic, code, and/or computer instructions executed by a processing unit for performing one or more steps. The non-volatile computer-readable storage medium 1620 may include one or more storage units (removable media or external memory, such as an SD card or RAM). In some embodiments, the data sensed by the sensor may be transferred directly to and stored in the memory unit of the non-volatile computer readable storage medium 1620. The storage unit of the non-transitory computer readable storage medium 1620 may store logic, code, and/or computer instructions for execution by a processing unit to perform the various embodiments of the methods described herein. For example, a processing unit may be configured to execute instructions to cause one or more processors of the processing unit to perform the tracking functions described above. The storage unit may store the sensing module sensing data, which is processed by the processing unit. In some embodiments, the storage unit of the non-transitory computer readable storage medium 1620 may store the processing results generated by the processing unit.
In some embodiments, the processing unit may be coupled to the control module for controlling the state of the movable platform. For example, the control module may be used to control the power mechanism of the movable platform to adjust the spatial orientation, speed, and/or acceleration of the movable platform relative to six degrees of freedom. Alternatively or in combination, the control module may control one or more of the carrier, load or sensing module.
The processing unit may also be coupled to a communication module for transmitting and/or receiving data with one or more peripheral devices, such as a terminal, display device, or other remote control device. Any suitable communication method, such as wired or wireless communication, may be utilized herein. For example, the communication module may utilize one or more local area networks, wide area networks, infrared, radio, wi-Fi, point-to-point (P2P) networks, telecommunications networks, cloud networks, and the like. Alternatively, a relay station, such as a signal tower, satellite, or mobile base station, may be used.
The above components may be mutually adapted. For example, one or more components may be located on a movable platform, carrier, load, terminal, sensing system, or additional external devices in communication with the foregoing devices. In some embodiments, one or more of the processing units and/or the non-volatile computer readable medium may be located in different locations, such as on a movable platform, carrier, load, terminal, sensing system, or additional external devices in communication with the foregoing devices, as well as various combinations of the foregoing.
Furthermore, the control terminal adapted to the mobile platform may comprise an input module, a processing unit, a memory, a display module, and a communication module, all of which are connected via a bus or similar network.
The input module includes one or more input mechanisms to obtain input generated by a user by operating the input module. The input mechanisms include one or more joysticks, switches, knobs, slide switches, buttons, dials, touch screens, keypads, keyboards, mice, voice controls, gesture controls, inertial modules, and the like. The input module may be used to obtain user input for controlling the movable platform, the carrier, the load, or any aspect of the components therein. Any aspect includes pose, position, orientation, flight, tracking, etc. For example, the input mechanism may be a user manually setting one or more positions, each corresponding to a preset input, to control the movable platform.
In some embodiments, the input mechanism may be operated by a user to input control instructions to control the movement of the movable platform. For example, the user may input a movement pattern of the movable platform, such as automatic flight, autopilot, or movement according to a preset movement path, using knobs, switches, or similar input mechanisms. As another example, the user may control the position, attitude, orientation, or other aspect of the movable platform by tilting the control terminal in some way. The tilt of the control terminal may be detected by one or more inertial sensors and a corresponding motion command generated. For another example, the user may adjust an operating parameter of the load (e.g., zoom), a pose of the load (via the carrier), or other aspect of any object on the movable platform using the input mechanisms described above.
In some embodiments, the input mechanism may be operated by a user to input the aforementioned descriptive target information. For example, the user may select an appropriate tracking mode, such as a manual tracking mode or an automatic tracking mode, using a knob, switch, or similar input mechanism. The user may also use the input mechanism to select a particular target to be tracked, the type of target information to be used, or other similar information. In various embodiments, the input module may be implemented by more than one device. For example, the input module may be implemented by a standard remote controller with a joystick; such a remote controller is connected to a mobile device (such as a smart phone) running a suitable application ("app") to generate control instructions for the movable platform. The app may be used to obtain user input.
The processing unit may be coupled to the memory. The memory includes volatile or nonvolatile storage media for storing data and/or logic executable by the processing unit, code, and/or program instructions for performing one or more rules or functions. The memory may include one or more memory units (removable media or external memory, such as an SD card or RAM). In some embodiments, the data of the input module may be transferred directly and stored in a memory unit of the memory. The storage unit of the memory may store logic, code, and/or computer instructions to be executed by the processing unit to perform the various embodiments of the methods described herein. For example, the processing unit may be configured to execute instructions to cause one or more processors of the processing unit to process and display sensed data (e.g., images) obtained from the mobile platform, generate control instructions based on user input, including movement instructions and object information, and cause the communication module to transmit and/or receive data, etc. The storage unit may store sensed data or other data received from an external device (e.g., a removable platform). In some embodiments, the storage unit of the memory may store the processing results generated by the processing unit.
In some embodiments, the display module may be used to display information about the position, translational velocity, translational acceleration, direction, angular velocity, angular acceleration, or a combination thereof, of the movable platform 10, the carrier 13, and/or the image capture device 14 as in fig. 2. The display module may be used to obtain information sent by the movable platform and/or the load, such as sensed data (images recorded by a camera or other image capturing device), the tracking data described above, control feedback data, and the like. In some embodiments, the display module may be implemented by the same device as the input module. In other embodiments, the display module and the input module may be implemented by different devices.
The communication module may be used to transmit and/or receive data from one or more remote devices (e.g., a mobile platform, carrier, base station, etc.). For example, the communication module may transmit control signals (e.g., motion signals, object information, tracking control commands) to peripheral systems or devices, such as the mobile platform 10, the carrier 13, and/or the image capture device 14 of fig. 2. The communication module may include a transmitter and a receiver for receiving data from and transmitting data to the remote device, respectively. In some embodiments, the communication module may include a transceiver that combines the functions of a transmitter and a receiver. In some embodiments, the transmitter and receiver and the processing unit may communicate with each other. The communication may utilize any suitable communication means, such as wired or wireless communication.
Images captured by the movable platform during movement may be transmitted from the movable platform or imaging device back to the control terminal or another suitable device for display, playback, storage, editing, or other purposes. Such transfer may occur in real time or near real time as the imaging device captures the images. Optionally, there may be a delay between the capture and the transmission of the images. In some embodiments, the images may be stored in the memory of the movable platform without being transferred elsewhere. The user can view these images in real time and, if desired, adjust the target information or other aspects of the movable platform or its components. The adjusted target information may be provided to the movable platform, and this process may be repeated until a desired image is obtained. In some embodiments, the images may be transmitted from the movable platform, the imaging device, and/or the control terminal to a remote server. For example, images may be shared on social networking platforms such as WeChat Moments or Weibo.
In one embodiment, in response to a first user instruction, determining at least one base point from within the auxiliary field of view comprises: in response to a first user instruction, an auxiliary field of view is determined. At least one base point in the auxiliary field of view is then determined.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the auxiliary field of view is larger than a current field of view of an image capture device used to capture the image; and/or the auxiliary field of view is greater than or equal to the desired field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the first user instruction includes at least one of a position adjustment instruction of the movable platform, a posture adjustment instruction of the movable platform, a focus adjustment instruction, and a lens switching instruction. Accordingly, in response to the first user instruction, determining the auxiliary field of view includes: in response to a first user instruction, the current field of view is switched to the auxiliary field of view by at least one of adjusting a position and/or attitude of the movable platform, adjusting a focal length, and switching a lens. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
For example, the auxiliary field of view comprises at least two sub-auxiliary fields of view.
In one embodiment, the first user instruction includes at least one of a position adjustment instruction of the pan head, an attitude adjustment instruction of the pan head, a position adjustment instruction of the movable platform, an attitude adjustment instruction of the movable platform, a focus adjustment instruction, and a lens switching instruction. Accordingly, in response to the first user instruction, determining the auxiliary field of view includes: and responding to a first user instruction, and sequentially switching the current field of view to each of at least two sub auxiliary fields of view by at least one of adjusting the position and/or the posture of the cradle head, adjusting the position and/or the posture of the movable platform, adjusting the focal length and switching the lens so as to determine at least two sub auxiliary fields of view. Accordingly, determining at least one base point in the auxiliary field of view comprises: at least one base point is determined from each sub-auxiliary field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the desired field of view is greater than a minimum field of view of an image capture device for capturing images, the image capture device including a lens. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, for each sub-auxiliary field of view, determining at least one base point from each sub-auxiliary field of view, respectively, comprises: acquiring a second user instruction; and determining at least one base point in each sub-auxiliary field of view, respectively, in response to the second user instruction. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, determining at least one base point in each sub-auxiliary field of view in response to a second user instruction comprises: in response to a second user instruction, at least one base point is determined in each sub-auxiliary field of view based on the first field of view preset position. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the first field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, determining at least one base point in the auxiliary field of view comprises: acquiring a third user instruction; and determining at least one base point in the auxiliary field of view in response to a third user instruction. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, before the third user instruction is acquired, the method further includes: and outputting an image corresponding to the auxiliary view field through the display module. Accordingly, obtaining the third user instruction includes: in outputting the image corresponding to the auxiliary field of view through the display module, a user operation for the image corresponding to the auxiliary field of view is acquired, and a third user instruction is determined based on the user operation. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, determining at least one base point in the auxiliary field of view comprises: at least one base point is determined in the auxiliary field of view based on the second field of view preset position. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
For example, the second field of view preset position includes at least one of: center point, vertex, any point determined based on preset rules. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the first user instruction includes base point setting information. Accordingly, in response to the first user instruction, determining at least one base point from within the auxiliary field of view comprises: at least one base point is determined from within the auxiliary field of view based on the base point setting information. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the base point setting information includes at least one of direction information, angle value information, coordinate information, or a reference point.
In one embodiment, determining the desired field of view based on the at least one base point comprises: an image acquisition field of view is determined based on the at least one base point, the image acquisition region covered by the image acquisition field of view comprising the image acquisition region covered by the desired field of view so as to acquire an image coincident with the desired field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the method may further include: cropping the acquired image to obtain an image that matches the desired field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, cropping the acquired image to obtain an image that matches the desired field of view comprises: acquiring an initial image corresponding to the image acquisition field of view; and cropping the initial image to obtain an image that matches the desired field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, acquiring an initial image corresponding to the image acquisition field of view comprises: decomposing the image acquisition field of view based on a preset field of view to obtain a plurality of sub-acquisition fields of view; controlling the image capturing device to acquire images under the plurality of sub-acquisition fields of view, respectively, to obtain a plurality of sub-images; and synthesizing an initial image corresponding to the image acquisition field of view based on the plurality of sub-images.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, decomposing the image acquisition field of view based on the preset field of view to obtain the plurality of sub-acquisition fields of view includes: determining a first image acquisition region corresponding to the image acquisition field of view, and determining a second image acquisition region corresponding to the preset field of view; and decomposing the first image acquisition region at least based on the second image acquisition region to obtain a plurality of sub-image acquisition regions. Correspondingly, controlling the image capturing device to acquire images under the plurality of sub-acquisition fields of view, respectively, to obtain a plurality of sub-images comprises: controlling the image capturing device to respectively acquire a plurality of sub-images corresponding to the plurality of sub-image acquisition regions. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, decomposing the first image acquisition region based at least on the second image acquisition region to obtain a plurality of sub-image acquisition regions comprises: and decomposing the first image acquisition region based on the second image acquisition region and the region overlapping proportion to obtain a plurality of sub-image acquisition regions. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the region overlap ratio is determined based on a user operation or a preset overlap ratio. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, decomposing the first image acquisition region based on the second image acquisition region and the region overlapping ratio to obtain a plurality of sub-image acquisition regions includes: determining a first number of sub-image acquisition areas included in the length direction based on the length of the second image acquisition area, the length of the first image acquisition area and the length overlapping proportion; determining a second number of sub-image acquisition regions included in the width direction based on the width of the second image acquisition region, the width of the first image acquisition region, and the width overlapping ratio; and determining the number of the sub-image acquisition areas based on the first number and the second number to decompose the first image acquisition area. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, each of the plurality of sub-image acquisition regions has corresponding shooting pose information.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the shooting pose information includes: at least one of pitch angle information, yaw angle information and roll angle information. Optionally, the shooting pose information includes: pitch angle information and/or yaw angle information.
In one embodiment, when the image capturing device is disposed on the pan-tilt, and the pan-tilt is disposed on the movable platform, the pitch angle information includes at least one of the movable platform pitch angle information and the pan-tilt pitch angle information; and/or the yaw angle information includes at least one of movable platform yaw angle information and pan and tilt yaw angle information.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the adjustment priority of the pan-tilt pitch angle in the pitch angle information is greater than or equal to the adjustment priority of the movable platform pitch angle; and/or the adjustment priority of the yaw angle of the cradle head in the yaw angle information is greater than or equal to the adjustment priority of the yaw angle of the movable platform. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the shooting pose information further comprises position information; the position information comprises height information, which represents displacement information of the movable platform in the vertical direction, and the image capturing device is disposed on the movable platform.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the shooting pose information is determined based on the image acquisition region corresponding to the current shooting pose and the plurality of sub-image acquisition regions, and is used for sequentially switching the image capturing device from the current image acquisition region to the plurality of sub-image acquisition regions so as to acquire images under each sub-image acquisition region.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, when the image capturing apparatus is disposed on the cradle head and the cradle head is disposed on the movable platform, the movable platform is in any one of a hover state, a vertical lift state, or a horizontal movement state in acquiring an image conforming to a desired field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, synthesizing an initial image corresponding to the image acquisition field of view based on the plurality of sub-images includes: synthesizing the plurality of sub-images in a feathering manner to obtain the initial image corresponding to the image acquisition field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the computing resources consumed in fusing different image fusion regions are different. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, after acquiring the image that matches the desired field of view, the method further comprises: and outputting an image matched with the expected field of view through the display module.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, cropping the initial image to obtain an image that matches the desired field of view may include: the initial image is cropped based on at least one base point to obtain an image matching the desired field of view.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, a first mapping relationship exists between the first user instruction and a preset mode; correspondingly, the method further comprises: after the first user instruction is acquired, entering the preset mode in response to the first user instruction.
In one embodiment, an image capturing device for capturing images is disposed on a cradle head, and the cradle head is disposed on a movable platform; and in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state, or a horizontal moving state, and the cradle head is in a locked state or can rotate around at least one axis.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
Another aspect of the application provides a field of view determination system comprising: the control terminal and the movable platform are in communication connection with each other, and an image capturing device is arranged on the movable platform; the control terminal is used for acquiring a first user instruction; the control terminal is configured to determine, in response to a first user instruction, at least one base point from within an auxiliary field of view, the auxiliary field of view being determined based on a field of view of an image capturing device on the movable platform; the control terminal is used for determining a desired view field based on at least one base point so as to obtain an image which is matched with the desired view field.
In one embodiment, the control terminal for determining at least one base point from within the auxiliary field of view in response to the first user instruction may comprise: the control terminal is used for responding to the first user instruction and controlling the image capturing device of the movable platform to determine an auxiliary field of view; and the control terminal is used for determining at least one base point in the auxiliary field of view.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the auxiliary field of view is larger than a current field of view of an image capture device used to capture the image; and/or the auxiliary field of view is greater than or equal to the desired field of view.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the first user instruction includes at least one of a position adjustment instruction of the movable platform, a posture adjustment instruction of the movable platform, a focus adjustment instruction, and a lens switching instruction.
Accordingly, the control terminal being configured to control the image capturing device of the movable platform to determine the auxiliary field of view in response to the first user instruction comprises: the control terminal is configured to, in response to the first user instruction, switch the current field of view to the auxiliary field of view by at least one of controlling the movable platform to adjust its position and/or attitude, controlling the image capturing device to adjust the focal length, and controlling the image capturing device to switch the lens.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
For example, the auxiliary field of view comprises at least two sub-auxiliary fields of view.
In one embodiment, the first user instruction includes at least one of a position adjustment instruction of the cradle head, an attitude adjustment instruction of the cradle head, a position adjustment instruction of the movable platform, an attitude adjustment instruction of the movable platform, a focus adjustment instruction, and a lens switching instruction.
Accordingly, the control terminal controlling the image capturing device of the movable platform to determine the auxiliary field of view in response to the first user instruction comprises: the control terminal, in response to the first user instruction, switches the current field of view sequentially to each of at least two sub auxiliary fields of view by at least one of controlling the position and/or attitude of the cradle head, controlling the position and/or attitude of the movable platform, controlling the focal length of the image capturing device, and controlling the lens switching of the image capturing device, so as to determine the at least two sub auxiliary fields of view.
Accordingly, the control terminal determining at least one base point in the auxiliary field of view comprises: the control terminal determines at least one base point from each sub-auxiliary field of view respectively.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
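By way of a non-authoritative illustration only, the flow of switching to each sub auxiliary field of view in turn and recording one base point per field could be sketched as follows; the names `SubAuxiliaryField`, `switch_to`, and `pick_point` are hypothetical and are not defined by this application.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class SubAuxiliaryField:
    """One sub auxiliary field of view, described here (as an assumption) by the
    cradle head / platform pose used to reach it and the focal length."""
    yaw_deg: float
    pitch_deg: float
    focal_length_mm: float


def collect_base_points(
    sub_fields: List[SubAuxiliaryField],
    switch_to: Callable[[SubAuxiliaryField], None],
    pick_point: Callable[[], Tuple[float, float]],
) -> List[Tuple[float, float]]:
    """Switch the current field of view to each sub auxiliary field in turn and
    record one base point per field.

    switch_to  -- hypothetical callback that adjusts the cradle head, the movable
                  platform, and/or the focal length so the camera covers the field.
    pick_point -- hypothetical callback returning the user-selected (or preset,
                  e.g. centre-point) base point in a common reference frame.
    """
    base_points = []
    for field in sub_fields:
        switch_to(field)            # e.g. rotate the cradle head, then re-focus
        base_points.append(pick_point())
    return base_points
```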
In one embodiment, the desired field of view is greater than a minimum field of view of an image capture device for capturing images, the image capture device including a lens. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
For example, for each sub-auxiliary field of view, the control terminal determining at least one base point from each sub-auxiliary field of view, respectively, comprises: the control terminal acquires a second user instruction; and the control terminal determining at least one base point in each sub-auxiliary field of view in response to a second user instruction.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the control terminal for determining at least one base point in each sub-auxiliary field of view in response to a second user instruction comprises: the control terminal is used for responding to a second user instruction and respectively determining at least one base point in each sub auxiliary view field based on the preset position of the first view field.
For example, the first field of view preset position includes at least one of: center point, vertex, any point determined based on preset rules.
In one embodiment, the control terminal being configured to determine at least one base point in the auxiliary field of view may comprise the following operations: first, the control terminal acquires a third user instruction; the control terminal then determines at least one base point in the auxiliary field of view in response to the third user instruction.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the control terminal may be further configured to output, via the display module, an image corresponding to the auxiliary field of view before the control terminal is configured to obtain the third user instruction.
Accordingly, the control terminal acquiring the third user instruction may include the following operation: while the image corresponding to the auxiliary field of view is being output through the display module, the control terminal acquires a user operation on that image and determines the third user instruction based on the user operation.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the control terminal for determining at least one base point in the auxiliary field of view comprises: the control terminal is used for determining at least one base point in the auxiliary view field based on the second view field preset position.
For example, the second field of view preset position includes at least one of: center point, vertex, any point determined based on preset rules.
In one embodiment, the first user instruction includes base point setting information. Accordingly, the control terminal, in response to the first user instruction, determining at least one base point from within the auxiliary field of view may include: the control terminal determines at least one base point from within the auxiliary field of view based on the base point setting information.
Wherein the base point setting information may include at least one of direction information, angle value information, coordinate information, or a reference point.
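As a minimal sketch of how the base point setting information might be represented and resolved into a concrete base point (the names `BasePointSetting` and `resolve_base_point` and the angular representation are illustrative assumptions, not part of the application):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class BasePointSetting:
    """Hypothetical container for base point setting information: either absolute
    coordinates, or a direction/angle offset applied to a reference point."""
    coordinates: Optional[Tuple[float, float]] = None  # (yaw_deg, pitch_deg)
    reference: Optional[Tuple[float, float]] = None    # reference point (yaw, pitch)
    direction: Optional[Tuple[float, float]] = None    # unit offset direction
    angle_deg: Optional[float] = None                   # offset magnitude in degrees


def resolve_base_point(setting: BasePointSetting) -> Tuple[float, float]:
    """Turn the setting information carried by the first user instruction into one
    concrete base point, expressed here as yaw/pitch angles."""
    if setting.coordinates is not None:
        return setting.coordinates
    if setting.reference and setting.direction and setting.angle_deg is not None:
        dx, dy = setting.direction
        rx, ry = setting.reference
        return (rx + dx * setting.angle_deg, ry + dy * setting.angle_deg)
    raise ValueError("insufficient base point setting information")
```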
In one embodiment, the control terminal for determining the desired field of view based on the at least one base point may comprise the operations of: the control terminal is used for determining an image acquisition view field based on at least one base point, and an image acquisition area covered by the image acquisition view field comprises an image acquisition area covered by a desired view field so as to acquire an image matched with the desired view field.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the control terminal is configured to determine an image acquisition field of view based on the at least one base point. Correspondingly, the image capturing device is also configured to obtain the base point and to crop the acquired image to obtain an image that matches the desired field of view.
For example, the base point is determined by the control terminal in response to a user operation, and the control terminal then transmits the base point to the image capturing device either via the movable platform or directly.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
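Purely as an illustrative sketch, assuming the base points are expressed as pixel coordinates in the initial image and that the desired field of view is interpreted as the axis-aligned box they bound (neither assumption is stated by the application):

```python
from typing import Sequence, Tuple


def desired_bounds(base_points: Sequence[Tuple[int, int]]) -> Tuple[int, int, int, int]:
    """The base points characterise the boundary of the desired field of view; a
    minimal interpretation is the axis-aligned bounding box of the points,
    returned as (x_min, y_min, x_max, y_max) in pixel coordinates."""
    xs = [p[0] for p in base_points]
    ys = [p[1] for p in base_points]
    return min(xs), min(ys), max(xs), max(ys)


def crop_to_desired(initial_image, base_points: Sequence[Tuple[int, int]]):
    """Crop the initial image (any object supporting 2-D slicing, e.g. a numpy
    array) to the desired field of view described by the base points; the image
    acquisition field of view only needs to cover this box."""
    x0, y0, x1, y1 = desired_bounds(base_points)
    return initial_image[y0:y1 + 1, x0:x1 + 1]
```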
In one embodiment, the image capturing device is configured to crop the acquired image to obtain an image that matches the desired field of view, which may include the following operations: the image capturing device is used for acquiring an initial image corresponding to the image acquisition view field. The image capture device is then used to crop the initial image to obtain an image that matches the desired field of view.
In one embodiment, the image capturing means for acquiring an initial image corresponding to an image acquisition field of view may comprise the following operations. Firstly, the image capturing device is used for respectively carrying out image acquisition under a plurality of sub-acquisition view fields to obtain a plurality of sub-images, wherein the plurality of sub-acquisition view fields are obtained by decomposing the image acquisition view fields based on a preset view field by the movable platform and/or the image capturing device. The image capture device is then operable to synthesize an initial image corresponding to the image acquisition field of view based on the plurality of sub-images.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the plurality of sub-acquisition fields of view may be determined by: first, the movable platform and/or the image capturing device is used to determine a first image acquisition area corresponding to an image acquisition field of view and to determine a second image acquisition area corresponding to a preset field of view. The movable platform and/or the image capturing device is then configured to decompose the first image acquisition area based at least on the second image acquisition area to obtain a plurality of sub-image acquisition areas to determine a plurality of sub-acquisition fields of view. Accordingly, the image capturing device being configured to perform image capturing under the plurality of sub-capturing fields, respectively, to obtain the plurality of sub-images may include an operation in which the image capturing device is configured to capture a plurality of sub-images corresponding to the plurality of sub-image capturing areas, respectively.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the movable platform and/or the image capturing device is configured to decompose the first image capturing area based at least on the second image capturing area, resulting in a plurality of sub-image capturing areas comprising: the movable platform and/or the image capturing device are used for decomposing the first image acquisition area based on the second image acquisition area and the area overlapping proportion to obtain a plurality of sub-image acquisition areas.
For example, the region overlap ratio is determined based on a user operation or a preset overlap ratio. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the movable platform and/or the image capturing device being configured to decompose the first image acquisition area based on the second image acquisition area and the area overlapping proportion to obtain the plurality of sub-image acquisition areas may include the following operations: first, a first number of sub-image acquisition areas included in the length direction is determined based on the length of the second image acquisition area, the length of the first image acquisition area, and the length overlapping proportion; then, a second number of sub-image acquisition areas included in the width direction is determined based on the width of the second image acquisition area, the width of the first image acquisition area, and the width overlapping proportion; finally, the number of sub-image acquisition areas is determined based on the first number and the second number, so as to decompose the first image acquisition area.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
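One plausible reading of this count computation is sketched below, under the assumption that neighbouring sub-images overlap by a fixed proportion of the preset-field size in each direction; the function names are hypothetical.

```python
import math


def sub_region_count(total_len: float, sub_len: float, overlap: float) -> int:
    """Number of windows of size sub_len needed to cover total_len when each
    extra window advances by sub_len * (1 - overlap); 0 <= overlap < 1."""
    if total_len <= sub_len:
        return 1
    step = sub_len * (1.0 - overlap)
    return 1 + math.ceil((total_len - sub_len) / step)


def decompose_counts(first_area, second_area, overlap_len, overlap_wid):
    """first_area / second_area are (length, width) pairs for the image acquisition
    area and the preset-field area; returns the first number (length direction),
    the second number (width direction) and the total number of sub-areas."""
    n_len = sub_region_count(first_area[0], second_area[0], overlap_len)
    n_wid = sub_region_count(first_area[1], second_area[1], overlap_wid)
    return n_len, n_wid, n_len * n_wid
```

For instance, `decompose_counts((90, 60), (30, 20), 0.2, 0.2)` gives 4 sub-areas along the length, 4 along the width, and 16 in total under these assumptions.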
In one embodiment, each of the plurality of sub-image acquisition regions has corresponding shooting pose information. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
For example, shooting pose information includes: at least one of pitch angle information, yaw angle information and roll angle information.
In one embodiment, the image capturing device is disposed on a cradle head, and the cradle head is disposed on a movable platform. Correspondingly, the pitch angle information comprises at least one of pitch angle information of the movable platform and pitch angle information of the cradle head; and/or the yaw angle information includes at least one of movable platform yaw angle information and pan and tilt yaw angle information.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the adjustment priority of the pan-tilt pitch angle in the pitch angle information is greater than or equal to the adjustment priority of the movable platform pitch angle; and/or the adjustment priority of the yaw angle of the cradle head in the yaw angle information is greater than or equal to the adjustment priority of the yaw angle of the movable platform.
For example, the shooting pose information further includes position information; the position information comprises height information, the height information represents displacement of the movable platform in the vertical direction, and the image capturing device is arranged on the movable platform. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the shooting pose information is determined based on the image acquisition region corresponding to the current shooting pose and the plurality of sub-image acquisition regions, so that the image capturing device can switch in sequence from the current image acquisition region to the plurality of sub-image acquisition regions and acquire an image under each sub-image acquisition region.
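A minimal sketch of this idea, assuming each sub-image acquisition region can be summarised by the yaw/pitch of its centre (the `ShootingPose` record and `plan_poses` helper are hypothetical and not defined by the application):

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ShootingPose:
    """Hypothetical record of the shooting pose of one sub-image acquisition region."""
    pitch_deg: float        # cradle head and/or platform pitch
    yaw_deg: float          # cradle head and/or platform yaw
    height_m: float = 0.0   # vertical displacement of the movable platform


def plan_poses(current: ShootingPose,
               sub_region_centres: List[Tuple[float, float]]) -> List[ShootingPose]:
    """Derive one shooting pose per sub-image acquisition region from the current
    pose and the angular centre (yaw, pitch) of each region, so the device can
    switch to the regions one after another and shoot under each of them."""
    return [ShootingPose(pitch_deg=p, yaw_deg=y, height_m=current.height_m)
            for (y, p) in sub_region_centres]
```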
In one embodiment, the image capturing device is disposed on a cradle head, and the cradle head is disposed on a movable platform, which is in any one of a hover state, a vertical lift state, or a horizontal movement state in acquiring an image coincident with a desired field of view. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the image capturing device being configured to synthesize the initial image corresponding to the image acquisition field of view based on the plurality of sub-images may include an operation of synthesizing the plurality of sub-images in a feathering mode to obtain the initial image corresponding to the image acquisition field of view.
In one embodiment, the image capturing device consumes different computational resources when fusing different image fusion regions. For example, image regions of interest to a user (e.g., regions containing people, buildings, or landmarks) may be allocated more computing resources, and regions of lower interest may be allocated fewer.
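As a non-authoritative sketch of feathering-style synthesis for two horizontally adjacent sub-images: the application does not specify the blending weights, so a linear ramp is assumed here; salient regions could instead be given wider overlaps or a costlier blending method, which is one way the per-region computing cost can differ.

```python
import numpy as np


def feather_blend(left: np.ndarray, right: np.ndarray, overlap_px: int) -> np.ndarray:
    """Blend two horizontally adjacent H x W x C sub-images whose last/first
    `overlap_px` columns cover the same scene, using a linear feathering ramp so
    the seam fades smoothly from one sub-image into the other."""
    assert left.shape[0] == right.shape[0], "sub-images must share the same height"
    # Weight for the left image: 1.0 at the start of the overlap, 0.0 at its end.
    ramp = np.linspace(1.0, 0.0, overlap_px)[None, :, None]
    blended = left[:, -overlap_px:] * ramp + right[:, :overlap_px] * (1.0 - ramp)
    return np.concatenate(
        [left[:, :-overlap_px], blended.astype(left.dtype), right[:, overlap_px:]],
        axis=1,
    )
```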
In one embodiment, the control terminal is further configured to output, via the display module, an image that matches the desired field of view after being configured to acquire an image that matches the desired field of view.
In one embodiment, the image capturing means for cropping the initial image to obtain an image that matches the desired field of view may include operations for cropping the initial image based on the at least one base point to obtain an image that matches the desired field of view.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, a first mapping relationship exists between the first user instruction and a preset mode; correspondingly, after the first user instruction is acquired, the control terminal is configured to enter the preset mode in response to the first user instruction.
The specific content refers to the same parts of the previous embodiments, and will not be described here again.
In one embodiment, the image capturing device is disposed on a cradle head, and the cradle head is disposed on a movable platform; in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state, or a horizontal moving state, and the cradle head is in a locked state or can rotate about at least one axis.
In one embodiment, the control terminal is disposed on the movable platform; for example, the control terminal and the movable platform are integrated. The specific content refers to the same parts of the previous embodiments, and will not be described here again.
The movable platform is exemplified below.
Fig. 18 is a schematic structural diagram of a movable platform according to an embodiment of the present application.
As shown in fig. 18, the movable platform may be a drone 170, and the drone 170 may include a plurality of power systems 171 and foot rests. The cradle head may be disposed on the drone 170.
In one embodiment, the plurality of power systems 171 of the drone 170 are in one-to-one correspondence with a plurality of arms. Each power system 171 may include a motor assembly and a blade coupled to the motor assembly. Each power system 171 may be disposed on its corresponding arm, which supports the power system 171.
Furthermore, the drone 170 may also include a foot rest. The foot rest can be positioned below the cradle head and connected with the cradle head. The foot rest may be used to support the unmanned aerial vehicle 170 when it lands.
Fig. 19 schematically shows a schematic view of a movable platform according to another embodiment of the application.
As shown in fig. 19, the movable platform is a robot 180, such as a land-traveling robot, on which a cradle head may be provided. Although the movable platform is described as a land robot, such description is not limiting, and any type of movable platform described above is applicable (e.g., aerial robots, water robots). In some embodiments, the driving means may be located at the bottom of the movable platform; the driving means may be a power system 181 as described above, such as a motor. The sensing module may include one or more sensors to detect information related to the land robot, such as obstacle information, environment information, and image information of a target object; for example, the sensing module may include a radar, a laser sensor, a positioning sensor, or the like. The land robot may further comprise a communication system for information interaction with one or more terminals; the communication between the land robot and the terminal may be the same as in the prior art, and will not be described in detail here.
Fig. 20 schematically shows a schematic view of a movable platform according to another embodiment of the application.
As shown in fig. 20, the movable platform is a handheld cradle head 190, and the handheld cradle head 190 may include a cradle head structure as described above. The handheld cradle head 190 may comprise a cradle head and a handle for supporting the cradle head; the handle is the part held by the user and may include control buttons for operating the cradle head. The handheld cradle head 190 is communicatively coupled to a functional component (e.g., a camera) on the cradle head to acquire image information captured by the camera.
In addition, the handheld cradle head 190 may also be connected to a terminal device 191 (such as a mobile phone) or the like, so as to send information such as an image to the mobile phone.
The foregoing is a preferred embodiment of the present application, and it should be noted that the preferred embodiment is only for understanding the present application, and is not intended to limit the scope of the present application. Furthermore, features of the preferred embodiments, unless otherwise noted, are applicable to both method and apparatus embodiments, and features that occur in the same or different embodiments may be used in combination without conflict.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced with equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (113)

1. A method of field of view determination, the method comprising:
acquiring a first user instruction;
determining at least one base point from within the auxiliary field of view in response to the first user instruction; and
determining a desired field of view based on at least one of the base points to obtain an image that coincides with the desired field of view;
wherein the base points characterize boundary information of the desired field of view;
wherein the auxiliary field of view is different from the field of view range of the desired field of view.
2. The method of claim 1, wherein the determining at least one base point from within the auxiliary field of view in response to the first user instruction comprises:
determining the auxiliary field of view in response to the first user instruction; and
at least one of the base points in the auxiliary field of view is determined.
3. The method of claim 2, wherein the auxiliary field of view is greater than a current field of view of an image capture device used to capture the image; and/or
The auxiliary field of view is greater than or equal to the desired field of view.
4. The method of claim 3, wherein the first user instruction comprises at least one of a position adjustment instruction for the movable platform, a pose adjustment instruction for the movable platform, a focus adjustment instruction, and a lens switch instruction; and
The determining, in response to the first user instruction, an auxiliary field of view includes:
in response to the first user instruction, the current field of view is switched to an auxiliary field of view by at least one of adjusting a position and/or attitude of the movable platform, adjusting a focal length, and switching a lens.
5. The method of claim 2, wherein the auxiliary field of view comprises at least two sub-auxiliary fields of view.
6. The method of claim 5, wherein the first user instruction comprises at least one of a position adjustment instruction for a pan-tilt, a pose adjustment instruction for a pan-tilt, a position adjustment instruction for a movable platform, a pose adjustment instruction for a movable platform, a focus adjustment instruction, and a lens switch instruction; and
the determining, in response to the first user instruction, an auxiliary field of view includes:
responding to the first user instruction, and sequentially switching the current field of view to each of at least two sub auxiliary fields of view by adjusting the position and/or the attitude of the cradle head, adjusting the position and/or the attitude of the movable platform, adjusting the focal length and switching the lens, so as to determine at least two sub auxiliary fields of view;
The determining at least one base point in the auxiliary field of view comprises:
at least one base point is determined from each of the sub-auxiliary fields of view.
7. The method of claim 4 or 6, wherein the desired field of view is greater than a minimum field of view of an image capturing device for capturing images, the image capturing device comprising a lens.
8. The method of claim 6, wherein for each of the sub-auxiliary fields of view, the determining at least one base point from each of the sub-auxiliary fields of view, respectively, comprises:
acquiring a second user instruction; and
at least one of the base points is determined in each of the sub-auxiliary fields of view in response to the second user instruction.
9. The method of claim 8, wherein the determining at least one of the base points in each of the sub-auxiliary fields of view in response to the second user instruction comprises:
in response to the second user instruction, at least one base point is respectively determined in each of the sub auxiliary fields of view based on a first field of view preset position.
10. The method of claim 9, wherein the first field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
11. The method of claim 2, wherein the determining at least one base point in the auxiliary field of view comprises:
acquiring a third user instruction; and
at least one of the base points is determined in the auxiliary field of view in response to the third user instruction.
12. The method of claim 11, wherein prior to said obtaining a third user instruction, the method further comprises:
outputting an image corresponding to the auxiliary field of view; and
the obtaining a third user instruction includes:
and in the process of outputting the image corresponding to the auxiliary view field, acquiring a user operation for the image corresponding to the auxiliary view field, and determining a third user instruction based on the user operation.
13. The method of claim 2, wherein the determining at least one base point in the auxiliary field of view comprises:
at least one of the base points is determined in the auxiliary field of view based on a second field of view preset position.
14. The method of claim 13, wherein the second field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
15. The method of claim 1, wherein the first user instruction comprises base point setting information;
the determining, in response to the first user instruction, at least one base point from within the auxiliary field of view comprises:
at least one of the base points is determined from within the auxiliary field of view based on the base point setting information.
16. The method of claim 15, wherein the base point setting information includes at least one of direction information, angle value information, coordinate information, or a reference point.
17. The method of claim 1, wherein the determining a desired field of view based on at least one of the base points comprises:
an image acquisition field of view is determined based on at least one of the base points, the image acquisition region covered by the image acquisition field of view comprising the image acquisition region covered by the desired field of view so as to acquire an image coincident with the desired field of view.
18. The method of claim 17, wherein the method further comprises:
and cropping the acquired image to obtain an image that matches the desired field of view.
19. The method of claim 18, wherein cropping the acquired image to obtain an image that matches the desired field of view comprises:
acquiring an initial image corresponding to the image acquisition field of view; and
cropping the initial image to obtain an image that matches the desired field of view.
20. The method of claim 19, wherein the acquiring an initial image corresponding to the image acquisition field of view comprises:
decomposing the image acquisition view field based on a preset view field to obtain a plurality of sub-acquisition view fields;
controlling an image capturing device to respectively acquire images under a plurality of sub-acquisition views to obtain a plurality of sub-images; and
and synthesizing an initial image corresponding to the image acquisition view field based on the plurality of sub-images.
21. The method of claim 20, wherein decomposing the image acquisition field of view based on a preset field of view comprises:
determining a first image acquisition area corresponding to the image acquisition view field, and determining a second image acquisition area corresponding to the preset view field;
decomposing the first image acquisition region based on at least the second image acquisition region to obtain a plurality of sub-image acquisition regions; and
the controlling the image capturing device to respectively acquire images under a plurality of acquisition fields to obtain a plurality of sub-images comprises:
And controlling the image capturing device to respectively acquire a plurality of sub-images corresponding to the plurality of sub-image acquisition areas.
22. The method of claim 21, wherein decomposing the first image acquisition region based at least on the second image acquisition region to obtain a plurality of sub-image acquisition regions comprises:
and decomposing the first image acquisition region based on the second image acquisition region and the region overlapping proportion to obtain a plurality of sub-image acquisition regions.
23. The method of claim 22, wherein the region overlap ratio is determined based on a user operation or a preset overlap ratio.
24. The method of claim 22, wherein decomposing the first image acquisition region based on the second image acquisition region and the region overlap ratio to obtain a plurality of sub-image acquisition regions comprises:
determining a first number of sub-image acquisition areas included in the length direction based on the length of the second image acquisition area, the length of the first image acquisition area and the length overlapping proportion;
determining a second number of sub-image acquisition areas included in the width direction based on the width of the second image acquisition area, the width of the first image acquisition area and the width overlapping proportion; and
And determining the number of the sub-image acquisition areas based on the first number and the second number so as to decompose the first image acquisition area.
25. The method of claim 21, wherein each of the plurality of sub-image acquisition regions has corresponding shooting pose information.
26. The method of claim 25, wherein the capturing pose information comprises: at least one of pitch angle information, yaw angle information and roll angle information.
27. The method of claim 26, wherein when the image capture device is disposed on a cradle head and the cradle head is disposed on a movable platform, wherein:
the pitch angle information comprises at least one of pitch angle information of the movable platform and pitch angle information of the cradle head;
and/or
The yaw angle information includes at least one of movable platform yaw angle information and pan/tilt yaw angle information.
28. The method of claim 26, wherein the adjustment priority of the pan-tilt pitch angle in the pitch angle information is greater than or equal to the adjustment priority of the movable platform pitch angle;
And/or
And the adjustment priority of the yaw angle of the cradle head in the yaw angle information is greater than or equal to the adjustment priority of the yaw angle of the movable platform.
29. The method of claim 26, wherein the capturing pose information further comprises: the position information comprises height information, the height information represents displacement information of the movable platform in the vertical direction, and the image capturing device is arranged on the movable platform.
30. The method of claim 25, wherein the capturing pose information is determined based on an image capturing area corresponding to a current capturing pose and a plurality of sub-image capturing areas, for sequential switching of the image capturing device from the current image capturing area to the plurality of sub-image capturing areas for image capturing under each of the sub-image capturing areas.
31. The method of claim 17, wherein when the image capturing device is disposed on a cradle head and the cradle head is disposed on a movable platform, the movable platform is in any one of a hover state, a vertical lift state, or a horizontal movement state during acquisition of an image coincident with the desired field of view.
32. The method of claim 20, wherein the synthesizing an initial image corresponding to the image acquisition field of view based on the plurality of sub-images comprises:
and synthesizing the plurality of sub-images in a feathering mode to obtain an initial image corresponding to the image acquisition field of view.
33. The method of claim 32, wherein the computing resources consumed in fusing different image fusion regions are different.
34. The method of claim 17, wherein after acquiring an image that coincides with the desired field of view, the method further comprises:
and outputting the image matched with the expected field of view.
35. The method of claim 19, wherein cropping the initial image to obtain an image that matches the desired field of view comprises:
and clipping the initial image based on at least one base point to obtain an image matched with the expected view field.
36. The method of claim 1, wherein a first mapping relationship exists between the first user instruction and a preset mode;
after obtaining the first user instruction, the method further comprises:
And responding to the first user instruction, and entering the preset mode.
37. The method of claim 36, wherein the image capturing device for capturing images is disposed on a cradle head, and the cradle head is disposed on a movable platform; and
in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state or a horizontal moving state, and the cradle head is in a locked state or can rotate about at least one axis.
38. A field of view determination apparatus, the apparatus comprising:
one or more processors; and
a computer readable storage medium storing one or more computer programs, which when executed by the processor, implement:
acquiring a first user instruction;
determining at least one base point from within the auxiliary field of view in response to the first user instruction; and
determining a desired field of view based on at least one of the base points to obtain an image that coincides with the desired field of view;
wherein the base points characterize boundary information of the desired field of view;
wherein the auxiliary field of view is different from the field of view range of the desired field of view.
39. The apparatus of claim 38, wherein the determining at least one base point from within the auxiliary field of view in response to the first user instruction comprises:
determining an auxiliary field of view in response to the first user instruction; and
at least one of the base points in the auxiliary field of view is determined.
40. The apparatus of claim 39, wherein the auxiliary field of view is larger than a current field of view of an image capture device used to capture the image; and/or
The auxiliary field of view is greater than or equal to the desired field of view.
41. The apparatus of claim 40, wherein the first user instructions include at least one of a position adjustment instruction for the movable platform, a pose adjustment instruction for the movable platform, a focus adjustment instruction, and a lens switch instruction; and
the determining, in response to the first user instruction, an auxiliary field of view includes:
in response to the first user instruction, the current field of view is switched to the auxiliary field of view by at least one of adjusting a position and/or attitude of the movable platform, adjusting a focal length, and switching a lens.
42. The apparatus of claim 39, wherein the auxiliary field of view comprises at least two sub-auxiliary fields of view.
43. The apparatus of claim 42, wherein the first user instruction comprises at least one of a position adjustment instruction for a pan-tilt, a pose adjustment instruction for a pan-tilt, a position adjustment instruction for a movable platform, a pose adjustment instruction for a movable platform, a focus adjustment instruction, and a lens switch instruction; and
the determining, in response to the first user instruction, an auxiliary field of view includes:
responding to the first user instruction, and sequentially switching the current field of view to each of at least two sub auxiliary fields of view by adjusting the position and/or the attitude of the cradle head, adjusting the position and/or the attitude of the movable platform, adjusting the focal length and switching the lens, so as to determine at least two sub auxiliary fields of view;
the determining at least one base point in the auxiliary field of view comprises:
at least one base point is determined from each of the sub-auxiliary fields of view.
44. The apparatus of claim 41 or 43, wherein the desired field of view is greater than a minimum field of view of an image capturing apparatus for capturing images, the image capturing apparatus comprising the lens.
45. The apparatus of claim 43, wherein for each of said sub-auxiliary fields of view, said determining at least one base point from each of said sub-auxiliary fields of view, respectively, comprises:
Acquiring a second user instruction; and
at least one of the base points is determined in the sub-auxiliary field of view in response to the second user instruction.
46. The apparatus of claim 45, wherein said determining at least one of said base points in said sub-auxiliary field of view in response to said second user instruction comprises:
in response to the second user instruction, at least one base point is respectively determined in each of the sub auxiliary fields of view based on a first field of view preset position.
47. The apparatus of claim 46, wherein the first field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
48. The apparatus of claim 39, wherein said determining at least one base point in said auxiliary field of view comprises:
acquiring a third user instruction; and
at least one of the base points is determined in the auxiliary field of view in response to the third user instruction.
49. The apparatus of claim 48, wherein prior to said obtaining third user instructions, said computer program when executed by said processor is further configured to implement:
Outputting an image corresponding to the auxiliary view field through a display module; and
the obtaining a third user instruction includes:
and in the process of outputting the image corresponding to the auxiliary view field through the display module, acquiring a user operation aiming at the image corresponding to the auxiliary view field, and determining a third user instruction based on the user operation.
50. The apparatus of claim 39, wherein said determining at least one base point in said auxiliary field of view comprises:
at least one of the base points is determined in the auxiliary field of view based on a second field of view preset position.
51. The apparatus of claim 50, wherein the second field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
52. The apparatus of claim 38, wherein the first user instruction comprises base point setting information;
the determining, in response to the first user instruction, at least one base point from within the auxiliary field of view comprises:
at least one of the base points is determined from within the auxiliary field of view based on the base point setting information.
53. The apparatus of claim 52, wherein the base point setting information includes at least one of direction information, angle value information, coordinate information, or a reference point.
54. The apparatus of claim 38, wherein the determining a desired field of view based on at least one of the base points comprises:
an image acquisition field of view is determined based on at least one of the base points, the image acquisition region covered by the image acquisition field of view comprising the image acquisition region covered by the desired field of view so as to acquire an image coincident with the desired field of view.
55. The apparatus of claim 54, wherein the computer program when executed by the processor is further configured to:
and cropping the acquired image to obtain an image that matches the desired field of view.
56. The apparatus of claim 55, wherein cropping the acquired image to obtain an image that matches the desired field of view comprises:
acquiring an initial image corresponding to the image acquisition field of view; and
cropping the initial image to obtain an image that matches the desired field of view.
57. The apparatus of claim 56, wherein said acquiring an initial image corresponding to said image acquisition field of view comprises:
decomposing the image acquisition view field based on a preset view field to obtain a plurality of sub-acquisition view fields;
Controlling an image capturing device to respectively acquire images under a plurality of sub-acquisition views to obtain a plurality of sub-images; and
and synthesizing an initial image corresponding to the image acquisition view field based on the plurality of sub-images.
58. The apparatus of claim 57, wherein decomposing the image acquisition field of view based on a preset field of view comprises:
determining a first image acquisition area corresponding to the image acquisition view field, and determining a second image acquisition area corresponding to the preset view field;
decomposing the first image acquisition region based on at least the second image acquisition region to obtain a plurality of sub-image acquisition regions; and
the controlling the image capturing device to respectively acquire images under a plurality of acquisition fields to obtain a plurality of sub-images comprises:
and controlling the image capturing device to respectively acquire a plurality of sub-images corresponding to the plurality of sub-image acquisition areas.
59. The apparatus of claim 58, wherein decomposing the first image acquisition region based at least on the second image acquisition region to obtain a plurality of sub-image acquisition regions comprises:
And decomposing the first image acquisition region based on the second image acquisition region and the region overlapping proportion to obtain a plurality of sub-image acquisition regions.
60. The apparatus of claim 59, wherein the region overlap ratio is determined based on a user operation or a preset overlap ratio.
61. The apparatus of claim 59, wherein decomposing the first image acquisition region based on the second image acquisition region and the region overlap ratio comprises:
determining a first number of sub-image acquisition areas included in the length direction based on the length of the second image acquisition area, the length of the first image acquisition area and the length overlapping proportion;
determining a second number of sub-image acquisition areas included in the width direction based on the width of the second image acquisition area, the width of the first image acquisition area and the width overlapping proportion; and
and determining the number of the sub-image acquisition areas based on the first number and the second number so as to decompose the first image acquisition area.
62. The apparatus of claim 58, wherein each sub-image acquisition region of the plurality of sub-image acquisition regions has corresponding shooting pose information.
63. The apparatus of claim 62, wherein the shooting pose information comprises: at least one of pitch angle information, yaw angle information and roll angle information.
64. The device of claim 63, wherein when the image capture device is disposed on a cradle head and the cradle head is disposed on a movable platform, wherein:
the pitch angle information comprises at least one of pitch angle information of the movable platform and pitch angle information of the cradle head;
and/or
The yaw angle information includes at least one of movable platform yaw angle information and pan/tilt yaw angle information.
65. The apparatus of claim 63, wherein the adjustment priority of pan-tilt pitch angle in the pitch angle information is greater than or equal to the adjustment priority of movable platform pitch angle;
and/or
And the adjustment priority of the yaw angle of the cradle head in the yaw angle information is greater than or equal to the adjustment priority of the yaw angle of the movable platform.
66. The apparatus of claim 63, wherein the shooting pose information further comprises: the position information comprises height information, the height information represents displacement information of the movable platform in the vertical direction, and the image capturing device is arranged on the movable platform.
67. The apparatus of claim 62, wherein the capturing pose information is determined based on an image capturing area and a plurality of sub-image capturing areas corresponding to a current capturing pose for sequential switching of the image capturing apparatus from the current image capturing area to the plurality of sub-image capturing areas for image capturing under each of the sub-image capturing areas.
68. The apparatus of claim 54, wherein when the image capturing device is positioned on a cradle head and the cradle head is positioned on a movable platform, the movable platform is in any one of a hover state, a vertical lift state, or a horizontal movement state during acquisition of an image coincident with the desired field of view.
69. The apparatus of claim 57, wherein the synthesizing an initial image corresponding to the image acquisition field of view based on the plurality of sub-images comprises:
and synthesizing the plurality of sub-images in a feathering mode to obtain an initial image corresponding to the image acquisition field of view.
70. The apparatus of claim 69, wherein different image fusion areas consume different computational resources during fusion.
71. The apparatus of claim 54, wherein after acquiring an image that coincides with the desired field of view, the computer program, when executed by the processor, is further configured to:
and outputting the image matched with the expected field of view through a display module.
72. The apparatus of claim 56, wherein said cropping said initial image to obtain an image that matches said desired field of view comprises:
and clipping the initial image based on at least one base point to obtain an image matched with the expected view field.
73. The apparatus of claim 38, wherein a first mapping relationship exists between the first user instruction and a preset mode;
after the first user instruction is obtained, the computer program when executed by the processor is further configured to implement:
and responding to the first user instruction, and entering the preset mode.
74. The apparatus of claim 73, wherein the image capturing device for capturing images is disposed on a cradle head, and the cradle head is disposed on a movable platform; and
in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state or a horizontal moving state, and the cradle head is in a locked state or can rotate about at least one axis.
75. A field of view determination system, the system comprising: the system comprises a control terminal and a movable platform which are in communication connection with each other, wherein an image capturing device is arranged on the movable platform;
the control terminal is used for acquiring a first user instruction;
the control terminal is configured to determine, in response to the first user instruction, at least one base point from within an auxiliary field of view, the auxiliary field of view being determined based on a field of view of an image capture device on the movable platform;
the control terminal is used for determining a desired view field based on at least one base point so as to obtain an image which is matched with the desired view field;
wherein the base points characterize boundary information of the desired field of view;
wherein the auxiliary field of view is different from the field of view range of the desired field of view.
76. The system of claim 75, wherein the control terminal for determining at least one base point from within the auxiliary field of view in response to the first user instruction comprises:
the control terminal is used for responding to the first user instruction and determining an auxiliary view field; and
the control terminal is configured to determine at least one of the base points in the auxiliary field of view.
77. The system of claim 76 wherein the auxiliary field of view is larger than a current field of view of an image capture device used to capture the image; and/or
The auxiliary field of view is greater than or equal to the desired field of view.
78. The system of claim 77, wherein said first user instructions include at least one of position adjustment instructions for a movable platform, attitude adjustment instructions for a movable platform, focus adjustment instructions, and lens switching instructions; and
the control terminal for determining, in response to the first user instruction, an auxiliary field of view comprising:
the control terminal is used for responding to the first user instruction, and switching the current field of view to an auxiliary field of view by at least one of controlling the movable platform to adjust the position and/or the attitude, controlling the image capturing device to adjust the focal length and controlling the image capturing device to switch the lens.
79. The system of claim 76, wherein the auxiliary field of view comprises at least two sub-auxiliary fields of view.
80. The system of claim 79, wherein the first user instruction comprises at least one of a position adjustment instruction for a pan-tilt, a pose adjustment instruction for a pan-tilt, a position adjustment instruction for a movable platform, a pose adjustment instruction for a movable platform, a focus adjustment instruction, and a lens switch instruction; and
The control terminal for determining, in response to the first user instruction, an auxiliary field of view comprising:
the control terminal is used for responding to the first user instruction, and sequentially switching the current field of view to each of at least two sub auxiliary fields of view by at least one of controlling the position and/or the attitude of the cradle head, controlling the position and/or the attitude of the movable platform, controlling the focal length of the image capturing device and controlling the lens switching of the image capturing device, so as to determine at least two sub auxiliary fields of view;
the control terminal for determining at least one of the base points in the auxiliary field of view comprises:
the control terminal is used for respectively determining at least one base point from each sub auxiliary view field.
81. The system of claim 78 or 80, wherein the desired field of view is greater than a minimum field of view of an image capturing apparatus for capturing images, the image capturing apparatus comprising a lens.
82. The system of claim 80, wherein for each of the sub-auxiliary fields of view, the control terminal for determining at least one base point from each of the sub-auxiliary fields of view, respectively, comprises:
The control terminal is used for acquiring a second user instruction; and
the control terminal is used for respectively determining at least one base point in each sub auxiliary view field in response to the second user instruction.
83. The system of claim 82, wherein the control terminal for determining at least one of the base points in each of the sub-auxiliary fields of view in response to the second user instruction comprises:
the control terminal is used for responding to the second user instruction and respectively determining at least one base point in each sub-auxiliary view field based on a first view field preset position.
84. The system of claim 83, wherein the first field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
85. The system of claim 76 wherein the control terminal for determining at least one of the base points in the auxiliary field of view comprises:
the control terminal is used for acquiring a third user instruction; and
the control terminal is configured to determine at least one of the base points in the auxiliary field of view in response to the third user instruction.
86. The system of claim 85, wherein the control terminal is further configured to output an image corresponding to the auxiliary field of view via a display module prior to the control terminal being configured to obtain a third user instruction; and
the control terminal for obtaining a third user instruction includes:
the control terminal is used for acquiring user operation for the image corresponding to the auxiliary view field in the process of outputting the image corresponding to the auxiliary view field through the display module, and determining a third user instruction based on the user operation.
87. The system of claim 76 wherein the control terminal for determining at least one of the base points in the auxiliary field of view comprises:
the control terminal is used for determining at least one base point in the auxiliary view field based on a second view field preset position.
88. The system of claim 87, wherein the second field of view preset position comprises at least one of: center point, vertex, any point determined based on preset rules.
89. The system of claim 75, wherein the first user instruction includes base point setting information;
The control terminal for determining at least one base point from within the auxiliary field of view in response to the first user instruction comprises:
the control terminal is used for determining at least one base point from the auxiliary view field based on the base point setting information.
90. The system of claim 89, wherein the base point setting information comprises at least one of direction information, angle value information, coordinate information, or a reference point.
91. The system of claim 75, wherein the control terminal for determining a desired field of view based on at least one of the base points comprises:
the control terminal is used for determining an image acquisition view field based on at least one base point, and an image acquisition area covered by the image acquisition view field comprises an image acquisition area covered by the expected view field so as to acquire an image matched with the expected view field.
92. The system of claim 91, wherein said image capturing means is adapted to crop the acquired image to produce an image that matches said desired field of view.
93. The system of claim 92 wherein the image capturing means for cropping the acquired image to obtain an image that matches the desired field of view comprises:
The image capturing device is used for acquiring an initial image corresponding to the image acquisition view field; and
the image capturing device is used for clipping the initial image to obtain an image which is matched with the expected view field.
94. The system of claim 93, wherein the image capturing means for acquiring an initial image corresponding to the image acquisition field of view comprises:
the image capturing device is used for respectively carrying out image acquisition under a plurality of sub-acquisition view fields to obtain a plurality of sub-images, wherein the plurality of sub-acquisition view fields are obtained by decomposing the image acquisition view fields based on a preset view field; and
the image capturing device is used for synthesizing an initial image corresponding to the image acquisition view field based on the plurality of sub-images.
95. The system of claim 94, wherein the plurality of sub-acquisition fields of view are determined by:
the movable platform and/or the image capturing device being configured to determine a first image acquisition area corresponding to the image acquisition field of view and a second image acquisition area corresponding to the preset field of view; and
the movable platform and/or the image capturing device being configured to decompose the first image acquisition area based at least on the second image acquisition area to obtain a plurality of sub-image acquisition areas, so as to determine the plurality of sub-acquisition fields of view; and
wherein the image capturing device being configured to perform image acquisition under the plurality of sub-acquisition fields of view to obtain a plurality of sub-images comprises:
the image capturing device being configured to respectively acquire a plurality of sub-images corresponding to the plurality of sub-image acquisition areas.
96. The system of claim 95, wherein the movable platform and/or the image capturing device being configured to decompose the first image acquisition area based at least on the second image acquisition area to obtain a plurality of sub-image acquisition areas comprises:
the movable platform and/or the image capturing device being configured to decompose the first image acquisition area based on the second image acquisition area and a region overlap ratio to obtain the plurality of sub-image acquisition areas.
97. The system of claim 96, wherein the region overlap ratio is determined based on a user operation or a preset overlap ratio.
98. The system of claim 96, wherein the movable platform and/or the image capturing device being configured to decompose the first image acquisition area based on the second image acquisition area and the region overlap ratio to obtain the plurality of sub-image acquisition areas comprises:
the movable platform and/or the image capturing device being configured to determine a first number of sub-image acquisition areas along the length direction based on the length of the second image acquisition area, the length of the first image acquisition area, and a length overlap ratio;
the movable platform and/or the image capturing device being configured to determine a second number of sub-image acquisition areas along the width direction based on the width of the second image acquisition area, the width of the first image acquisition area, and a width overlap ratio; and
the movable platform and/or the image capturing device being configured to determine the number of sub-image acquisition areas based on the first number and the second number, so as to decompose the first image acquisition area.
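The counting step described in claim 98 amounts to simple arithmetic once the overlap is expressed as a fraction of the sub-region size. The Python sketch below assumes exactly that convention (an assumption, since the claim does not fix one): each additional sub-region contributes sub_len * (1 - overlap) of new coverage along an axis.

    import math

    def sub_region_count(total_len, sub_len, overlap_ratio):
        """Number of sub-regions needed along one axis.

        The first sub-region covers sub_len; every further sub-region adds
        sub_len * (1 - overlap_ratio) of new coverage because neighbouring
        sub-regions overlap by overlap_ratio.
        """
        if total_len <= sub_len:
            return 1
        step = sub_len * (1.0 - overlap_ratio)
        return 1 + math.ceil((total_len - sub_len) / step)

    def decompose(first_region, second_region, overlap=(0.2, 0.2)):
        """Count the grid of sub-image acquisition areas.

        Regions are (length, width) tuples; returns
        (count_along_length, count_along_width, total).
        """
        n_len = sub_region_count(first_region[0], second_region[0], overlap[0])
        n_wid = sub_region_count(first_region[1], second_region[1], overlap[1])
        return n_len, n_wid, n_len * n_wid

    # A coverage three sub-fields wide and two tall, with 20% overlap between neighbours.
    print(decompose(first_region=(120.0, 60.0), second_region=(40.0, 30.0)))  # (4, 3, 12)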
99. The system of claim 95, wherein each of the plurality of sub-image acquisition areas has corresponding shooting pose information.
100. The system of claim 99, wherein the shooting pose information comprises at least one of pitch angle information, yaw angle information, or roll angle information.
101. The system of claim 100, wherein the image capturing device is disposed on a gimbal, and the gimbal is disposed on the movable platform;
the pitch angle information comprises at least one of movable platform pitch angle information and gimbal pitch angle information;
and/or
the yaw angle information comprises at least one of movable platform yaw angle information and gimbal yaw angle information.
102. The system of claim 100, wherein the adjustment priority of the gimbal pitch angle in the pitch angle information is greater than or equal to the adjustment priority of the movable platform pitch angle;
and/or
the adjustment priority of the gimbal yaw angle in the yaw angle information is greater than or equal to the adjustment priority of the movable platform yaw angle.
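One plausible way to realize the adjustment priority in claim 102 is to let the gimbal absorb as much of a requested rotation as its travel limits allow and assign only the remainder to the movable platform. The sketch below assumes this interpretation; the limit values and function names are placeholders, not taken from the patent.

    def split_rotation(delta_deg, gimbal_angle, gimbal_limits=(-90.0, 30.0)):
        """Split a required rotation between the gimbal and the movable platform.

        The gimbal absorbs as much of the requested change as its mechanical
        limits allow (higher adjustment priority); whatever is left over is
        assigned to the movable platform.
        """
        lo, hi = gimbal_limits
        target = gimbal_angle + delta_deg
        clamped = min(max(target, lo), hi)
        gimbal_delta = clamped - gimbal_angle
        platform_delta = delta_deg - gimbal_delta
        return gimbal_delta, platform_delta

    # Requesting -100 degrees of pitch from a gimbal already at -20 degrees:
    # the gimbal contributes -70 degrees (down to its -90 limit), the platform -30.
    print(split_rotation(-100.0, gimbal_angle=-20.0))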
103. The system of claim 100, wherein the shooting pose information further comprises position information, the position information comprises height information, the height information represents displacement information of the movable platform in the vertical direction, and the image capturing device is disposed on the movable platform.
104. The system of claim 99, wherein the shooting pose information is determined based on the image acquisition area corresponding to the current shooting pose and the plurality of sub-image acquisition areas, so that the image capturing device is sequentially switched from the current image acquisition area to the plurality of sub-image acquisition areas to perform image acquisition under each of the sub-image acquisition areas.
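Claim 104 leaves open how the sequential switch between sub-image acquisition areas is scheduled. One common choice, assumed in the sketch below, is to derive a yaw/pitch target per grid cell from the preset field of view minus the overlap and to visit the cells in a serpentine order so that consecutive shots need only small pose changes; all names and the 20% default overlap are illustrative assumptions.

    def shooting_poses(current_yaw, current_pitch, n_cols, n_rows, fov_h, fov_v, overlap=0.2):
        """Generate the yaw/pitch target for every sub-image acquisition area.

        Each column/row step moves the optical axis by one sub-field of view
        minus the overlap, and the grid is walked in a serpentine
        (boustrophedon) order so consecutive captures need only a small
        pose change.
        """
        step_yaw = fov_h * (1.0 - overlap)
        step_pitch = fov_v * (1.0 - overlap)
        # Center the grid of sub-fields on the current optical axis.
        yaw0 = current_yaw - step_yaw * (n_cols - 1) / 2.0
        pitch0 = current_pitch - step_pitch * (n_rows - 1) / 2.0
        poses = []
        for r in range(n_rows):
            cols = range(n_cols) if r % 2 == 0 else reversed(range(n_cols))
            for c in cols:
                poses.append((yaw0 + c * step_yaw, pitch0 + r * step_pitch))
        return poses

    for pose in shooting_poses(0.0, -10.0, n_cols=3, n_rows=2, fov_h=40.0, fov_v=30.0):
        print(pose)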
105. The system of claim 91, wherein the image capturing device is disposed on a gimbal and the gimbal is disposed on the movable platform, the movable platform being in any one of a hovering state, a vertical lifting state, or a horizontal moving state during acquisition of the image that matches the desired field of view.
106. The system of claim 94, wherein the image capturing device being configured to synthesize an initial image corresponding to the image acquisition field of view based on the plurality of sub-images comprises:
the image capturing device being configured to synthesize the plurality of sub-images in a feathering mode to obtain the initial image corresponding to the image acquisition field of view.
107. The system of claim 106, wherein the image capturing device consumes different computing resources when fusing different image fusion regions.
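Feathering-mode synthesis of the kind named in claim 106 is commonly implemented by ramping the blend weights across the overlap band so the seam between neighbouring sub-images disappears gradually. The NumPy sketch below blends two horizontally adjacent sub-images in that way; it is a minimal illustration under assumed conventions (float images of equal height), not the patented pipeline.

    import numpy as np

    def feather_blend(left, right, overlap):
        """Blend two horizontally adjacent sub-images whose last/first
        `overlap` columns image the same scene area.

        Across the overlap band the weight of the left image falls linearly
        from 1 to 0 while the right image rises from 0 to 1, hiding the seam.
        Both inputs are float arrays of shape (H, W, C) with equal H.
        """
        alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # shape (1, overlap, 1)
        seam = alpha * left[:, -overlap:, :] + (1.0 - alpha) * right[:, :overlap, :]
        return np.concatenate([left[:, :-overlap, :], seam, right[:, overlap:, :]], axis=1)

    # Two flat-coloured 4x6 test tiles with a 2-column overlap.
    a = np.full((4, 6, 3), 0.2)
    b = np.full((4, 6, 3), 0.8)
    print(feather_blend(a, b, overlap=2).shape)  # (4, 10, 3)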
108. The system of claim 91, wherein the control terminal is further configured to, after acquiring the image that matches the desired field of view, output the image that matches the desired field of view through a display module.
109. The system of claim 93, wherein the image capturing device being configured to crop the initial image to obtain an image that matches the desired field of view comprises:
the image capturing device being configured to crop the initial image based on at least one base point to obtain the image that matches the desired field of view.
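Cropping the initial image based on a base point (claim 109) can be read, for example, as centering the desired field of view on that point and shifting the crop back inside the image near its borders. The sketch below computes such a crop rectangle under that assumed reading; the function and parameter names are invented for this example.

    def crop_to_desired(image_w, image_h, desired_w, desired_h, base_point):
        """Compute the crop rectangle, inside the initial image, of the desired
        field of view centered on a base point.

        Returns (x0, y0, x1, y1) in pixel coordinates.
        """
        bx, by = base_point
        # Keep the crop inside the initial image by shifting it when the base
        # point sits too close to a border.
        x0 = min(max(bx - desired_w // 2, 0), image_w - desired_w)
        y0 = min(max(by - desired_h // 2, 0), image_h - desired_h)
        return x0, y0, x0 + desired_w, y0 + desired_h

    print(crop_to_desired(4000, 3000, 1920, 1080, base_point=(100, 2900)))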
110. The system of claim 75, wherein a first mapping relationship exists between the first user instruction and a preset mode; and
the control terminal is further configured to, after acquiring the first user instruction, enter the preset mode in response to the first user instruction.
111. The system of claim 110, wherein the image capturing device is disposed on a gimbal, and the gimbal is disposed on the movable platform; and
in the preset mode, the movable platform is in any one of a hovering state, a vertical lifting state, or a horizontal moving state, and the gimbal is in a locked state or is rotatable about at least one axis.
112. The system of claim 75, wherein the control terminal is disposed on the movable platform.
113. A computer-readable storage medium storing executable instructions that, when executed by one or more processors, cause the one or more processors to perform the method of any one of claims 1 to 37.
CN202080035373.6A 2020-09-15 2020-09-15 Visual field determining method, visual field determining device, visual field determining system and medium Active CN113841381B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/115379 WO2022056683A1 (en) 2020-09-15 2020-09-15 Field of view determination method, field of view determination device, field of view determination system, and medium

Publications (2)

Publication Number Publication Date
CN113841381A CN113841381A (en) 2021-12-24
CN113841381B true CN113841381B (en) 2023-09-12

Family

ID=78963295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080035373.6A Active CN113841381B (en) 2020-09-15 2020-09-15 Visual field determining method, visual field determining device, visual field determining system and medium

Country Status (2)

Country Link
CN (1) CN113841381B (en)
WO (1) WO2022056683A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611689B (en) * 2024-01-23 2024-04-05 凯多智能科技(上海)有限公司 Calibration parameter calibration method, detection method, device, medium, equipment and system


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4352334B2 (en) * 2004-12-27 2009-10-28 ソニー株式会社 Imaging apparatus and method, and program
US9619138B2 (en) * 2012-06-19 2017-04-11 Nokia Corporation Method and apparatus for conveying location based images based on a field-of-view
CN104486543B (en) * 2014-12-09 2020-11-27 北京时代沃林科技发展有限公司 System for controlling pan-tilt camera in touch mode of intelligent terminal
JP6470796B2 (en) * 2017-06-12 2019-02-13 株式会社コロプラ Information processing method, program, and computer
CN108259921B (en) * 2018-02-08 2020-06-16 青岛一舍科技有限公司 Multi-angle live broadcast system based on scene switching and switching method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108781261A (en) * 2016-06-09 2018-11-09 谷歌有限责任公司 Photo is shot by dysopia

Also Published As

Publication number Publication date
CN113841381A (en) 2021-12-24
WO2022056683A1 (en) 2022-03-24

Similar Documents

Publication Publication Date Title
US11797009B2 (en) Unmanned aerial image capture platform
US11644832B2 (en) User interaction paradigms for a flying digital assistant
US11233943B2 (en) Multi-gimbal assembly
US11106201B2 (en) Systems and methods for target tracking
US11644839B2 (en) Systems and methods for generating a real-time map using a movable object
US20170300051A1 (en) Amphibious vertical take off and landing unmanned device with AI data processing apparatus
CN105857582A (en) Method and device for adjusting shooting angle, and unmanned air vehicle
US20210112194A1 (en) Method and device for taking group photo
CN113841381B (en) Visual field determining method, visual field determining device, visual field determining system and medium
WO2020209167A1 (en) Information processing device, information processing method, and program
WO2019061334A1 (en) Systems and methods for processing and displaying image data based on attitude information
WO2022109860A1 (en) Target object tracking method and gimbal
KR20210106422A (en) Job control system, job control method, device and instrument
WO2022188151A1 (en) Image photographing method, control apparatus, movable platform, and computer storage medium
WO2020225979A1 (en) Information processing device, information processing method, program, and information processing system
WO2021195944A1 (en) Movable platform control method and device, movable platform and storage medium
KR102204435B1 (en) Apparatus for providing an augmented reality using unmanned aerial vehicle, method thereof and computer recordable medium storing program to perform the method
CN115437390A (en) Control method and control system of unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant