CN112514366A - Image processing method, image processing apparatus, and image processing system - Google Patents


Info

Publication number
CN112514366A
Authority
CN
China
Prior art keywords
image
focal length
region
processed
area
Prior art date
Legal status
Pending
Application number
CN202080004280.7A
Other languages
Chinese (zh)
Inventor
邹文
彭亮
丁硕
夏斌强
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN112514366A

Classifications

    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof (H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television)
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/80 Camera processing pipelines; Components thereof

Abstract

An image processing method comprising: obtaining a first maximum viewing angle region corresponding to a first focal length at the current attitude angle, and obtaining a second maximum viewing angle region corresponding to a second focal length; comparing the first maximum viewing angle region and the second maximum viewing angle region; and determining, according to the comparison result, the selectable range of a region to be processed in a first image, so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length and the second focal length is greater than the first focal length. The method makes it possible to determine the shooting range of a combined lens with multiple focal lengths at each of those focal lengths, so that a user can conveniently know the shooting range at a specified focal length, improving user experience.

Description

Image processing method, image processing apparatus, and image processing system
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and an image processing system.
Background
In surveying and inspection, requirements often arise such as the need to photograph a tower 100 meters high while seeing every detail of the tower, for example whether any screws are rusted or any steel members are abnormal.
However, since a combined lens with multiple focal lengths has different shooting ranges at different focal lengths, it is difficult for a user to determine those shooting ranges through a software application, and it is therefore inconvenient for the user to control the combined lens through the software application to capture a desired image at a specified focal length, resulting in poor user experience.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
In view of this, the embodiments of the present disclosure provide an image processing method, an image processing apparatus, and an image processing system, which can determine the shooting range of a combined lens with multiple focal lengths at each of those focal lengths through specific application software, so that a user can know the shooting range at a specified focal length, improving user experience.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: obtaining a first maximum viewing angle region corresponding to a first focal length at the current attitude angle, and obtaining a second maximum viewing angle region corresponding to a second focal length; comparing the first maximum viewing angle region and the second maximum viewing angle region; and determining the selectable range of the region to be processed in the first image according to the comparison result, so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length and the second focal length is greater than the first focal length.
In application, the viewing angle region corresponding to the first focal length differs from that corresponding to the second focal length, and the maximum pose-adjustable range of the camera is constrained by factors such as the mechanical structure of the pose adjustment device when the pose of the camera is adjusted by that device. As a result, the second maximum viewing angle region corresponding to the second focal length may not completely include the first maximum viewing angle region corresponding to the first focal length at the current attitude angle. Moreover, it is unclear to the user which parts of the first maximum viewing angle region overlap with the second maximum viewing angle region, that is, which regions in the first image can be photographed at the second focal length to obtain a clearer partial image. Therefore, when the region to be processed selected by the user exceeds the overlapping area, photographing at the second focal length may not satisfy the user; in addition, the posture adjustment device is easily damaged in this case. According to the image processing method provided by the embodiment of the present disclosure, however, the first maximum viewing angle region corresponding to the first focal length at the current attitude angle and the second maximum viewing angle region corresponding to the second focal length may be compared, and the selectable range of the region to be processed in the first image corresponding to the first maximum viewing angle region may then be determined, so as to obtain at least one second image of the region to be processed at the second focal length. By determining the selectable range of the region to be processed in the first image, the user can determine which regions in the first image can be photographed at the second focal length to obtain a clearer partial image, and the possibility of damage to the posture adjustment device can be reduced through measures such as limiting the user's operations and limiting the adjustment range of the posture adjustment device according to the selectable range.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including: one or more processors; and a computer readable storage medium. The computer readable storage medium is for storing one or more computer programs which, when executed by a processor, implement: obtaining a first maximum visual angle area corresponding to the first focal length under the current attitude angle, and obtaining a second maximum visual angle area corresponding to the second focal length; comparing the first maximum viewing angle region and the second maximum viewing angle region; and determining the selectable range of the region to be processed in the first image according to the comparison result so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length; wherein the second focal length is larger than the first focal length.
The image processing apparatus provided by the embodiment of the present disclosure can automatically determine the selectable range of the region to be processed in the first image, so that the user can conveniently determine which regions in the first image can be photographed at the second focal length to obtain a clearer partial image, which helps improve user experience and reduces the possibility of damage to the posture adjustment device.
In a third aspect, an embodiment of the present disclosure provides an image processing system, including a control device and a photographing device. The control device is used for acquiring a region to be processed. The photographing device is used for obtaining a first maximum viewing angle region corresponding to a first focal length of the photographing device at the current attitude angle and a second maximum viewing angle region corresponding to a second focal length; comparing the first maximum viewing angle region and the second maximum viewing angle region; and determining the selectable range of the region to be processed in the first image according to the comparison result, so as to control the photographing device to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length and the second focal length is greater than the first focal length.
The image processing system provided by the embodiment of the disclosure can automatically determine the selectable range of the to-be-processed region in the first image by the photographing device, so that a user can conveniently determine the photographing region which can be photographed by using the second focal length in the first image, and the user can conveniently control the photographing device to obtain a clearer local image of the photographing region under the second focal length, thereby being beneficial to improving user experience and reducing the possibility of damage of the posture adjusting device.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing executable instructions that, when executed by one or more processors, may cause the one or more processors to perform the method as above.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The above and other objects, features and advantages of the embodiments of the present disclosure will become more readily understood through the following detailed description with reference to the accompanying drawings. Various embodiments of the present disclosure will be described by way of example and not limitation in the accompanying drawings, in which:
fig. 1 is an application scenario of an image processing method, an image processing apparatus, and an image processing system provided in an embodiment of the present disclosure;
fig. 2 is an application scenario of an image processing method, an image processing apparatus, and an image processing system according to another embodiment of the disclosure;
fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the disclosure;
fig. 4 is a schematic view of a range of a second maximum viewing angle region corresponding to a second focal length in a vertical direction according to an embodiment of the disclosure;
fig. 5-10 are schematic diagrams of a display interface of a control terminal provided by an embodiment of the present disclosure;
fig. 11 is a schematic diagram illustrating division of a region to be processed according to an embodiment of the present disclosure;
fig. 12 is a schematic view of a region to be photographed according to an embodiment of the present disclosure;
fig. 13 is a schematic diagram illustrating an odd number of shots according to an embodiment of the disclosure;
fig. 14 is a schematic diagram illustrating a case where the number of shots provided by the embodiment of the disclosure is an even number;
figs. 15-17 are schematic diagrams of display images provided by embodiments of the present disclosure;
fig. 18 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 19 is a schematic structural diagram of an image processing system provided in an embodiment of the present disclosure;
fig. 20 is a schematic structural diagram of a control terminal according to an embodiment of the present disclosure; and
fig. 21 is a schematic structural diagram of a photographing device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present disclosure, and should not be construed as limiting the present disclosure.
Fig. 1 is an application scenario of an image processing method, an image processing apparatus, and an image processing system according to an embodiment of the present disclosure. As shown in fig. 1, the unmanned aerial vehicle 10 is capable of rotating about up to three orthogonal axes, such as a first pitch axis (X1), a first heading axis (Y1), and a first roll axis (Z1). Rotation about the three axes may refer to pitch rotation, yaw rotation, and roll rotation, respectively. The angular velocities of the UAV 10 about the first pitch axis, the first heading axis, and the first roll axis may be represented as ωX1, ωY1, and ωZ1, respectively. The UAV 10 is also capable of translational motion along the first pitch axis, the first heading axis, and the first roll axis. The linear velocities of the UAV 10 along these axes may be denoted as VX1, VY1, and VZ1, respectively. It should be noted that the unmanned aerial vehicle 10 is only an exemplary illustration of an application scenario and is not to be understood as a limitation on the application scenarios of the present disclosure, which may also involve a variety of movable platforms. For example, the movable platform may also be a pan-tilt vehicle, a handheld gimbal, a robot, etc., and the movable platform may or may not rotate about one or two of the axes.
As shown in fig. 1, the imaging device 20 is mounted on the unmanned aerial vehicle 10 via a carrier. The carrier enables the camera 20 to move relative to the UAV 10 about and/or along up to three orthogonal axes (in other embodiments, non-orthogonal axes are also possible), such as a second pitch axis (X2), a second heading axis (Y2), and a second roll axis (Z2). The second pitch axis, the second heading axis, and the second roll axis may be parallel to the first pitch axis, the first heading axis, and the first roll axis, respectively. In some possible embodiments, the load is an imaging device including an optical module, and the second roll axis may be substantially parallel to an optical path and an optical axis of the optical module. The optical module may be coupled to an image sensor to capture an image. The carrier body can be rotated about up to three orthogonal axes, such as a second pitch axis, a second yaw axis, and a second roll axis, in response to a control signal from an actuator, such as a motor, coupled to the carrier body. Rotation about three axes may refer to pitch rotation, yaw rotation, and roll rotation, respectively. The angular velocities of the camera 20 about the second pitch axis, the second heading axis, and the second roll axis may be denoted as ω X2, ω Y2, ω Z2, respectively. The carrier may translate the camera 20 relative to the UAV 10 along a second pitch axis, a second heading axis, and a second roll axis. The linear velocities of the camera 20 along the second pitch axis, the second heading axis, and the second roll axis may be denoted as VX2, VY2, VZ2, respectively.
In some possible embodiments, the carrier may only allow rotational motion of the camera 20 relative to the UAV 10 about a subset of the three axes (second pitch axis, second heading axis, and second roll axis). For example, the carrier may only allow rotational movement of the camera 20 about the second pitch axis, the second yaw axis, and the second roll axis, or a combination of any of these axes, and not allow translational movement of the camera 20 along any of the axes. For example, the carrier may allow the camera 20 to rotate about only one of the second pitch axis, the second heading axis, and the second roll axis. The carrier may allow the camera 20 to rotate about only two of the second pitch axis, the second heading axis, and the second roll axis. The carrier may allow the camera 20 to rotate about three axes, a second pitch axis, a second yaw axis, and a second roll axis.
In some possible embodiments, the carrier may only allow translational movement of the camera 20 along the second pitch axis, the second heading axis, and the second roll axis, or a combination of any of these axes, and not allow rotational movement of the camera 20 about any of the axes. For example, the carrier may allow the camera 20 to move along only one of the second pitch axis, the second heading axis, and the second roll axis. The carrier may allow the camera 20 to move only along two of the second pitch axis, the second heading axis, and the second roll axis. The carrier may allow the camera 20 to move along three axes, a second pitch axis, a second yaw axis, and a second roll axis.
In some possible embodiments, the carrier may allow the camera 20 to perform rotational and translational movements with respect to the UAV 10. For example, the carrier may allow the camera 20 to move along and/or about one, two, or three of the second pitch axis, the second heading axis, and the second roll axis.
In some possible embodiments, the camera 20 may be mounted directly on the UAV 10 without a carrier, or the carrier does not allow movement of the camera 20 relative to the UAV 10. In this case, the attitude, position, and/or orientation of the camera 20 is fixed with respect to the unmanned aerial vehicle 10.
In some possible embodiments, the adjustment of the pose, orientation, and/or position of the camera 20 may be achieved individually or collectively by appropriate adjustment of the UAV 10, the carrier, and/or the camera 20. For example, the camera 20 can be rotated 80 degrees about a specified axis (e.g., the heading axis) in any of the following ways: the UAV 10 rotates 80 degrees alone; the carrier actuates the load to rotate 80 degrees relative to the UAV 10; or the UAV 10 rotates 50 degrees while the load rotates 30 degrees relative to the UAV 10.
Similarly, translational movement of the load may be achieved through appropriate adjustments of the UAV 10 and the carrier. Further, the adjustment of the operating parameters of the load may also be accomplished by one or more of the UAV 10, the carrier, or a control terminal of the UAV 10. The operating parameters of the load may include, for example, a zoom level or a focal length of the imaging device.
For example, the movable platform may include the unmanned aerial vehicle 10, and the photographing device 20 may include an imaging device.
In one embodiment, a control terminal 30 may also be included in the application scenario. The movable platform may be an aircraft (such as an unmanned aerial vehicle), a pan-tilt vehicle, a handheld gimbal, a robot, and the like. The control terminal 30 may be a mobile phone, a tablet computer, or the like, and may further have a remote control function to realize remote control of a movable platform such as the unmanned aerial vehicle 10. The unmanned aerial vehicle 10 may include an attitude adjustment device (such as the above-described carrier) such as a pan-tilt, and the imaging device 20 may be mounted on the pan-tilt. The photographing device 20 may be a single photographing device including a first lens and a second lens that are respectively used for performing different photographing tasks; of course, there may also be two or more photographing devices, with the first lens and the second lens located in different photographing devices. The first lens and the second lens may be lenses corresponding to conventional photographing functions of a camera. For example, the first lens and the second lens may be a wide-angle lens and a zoom lens, respectively. The wide-angle lens can be used to obtain a wide and complete picture, the zoom lens can be used to obtain high-definition details, the pan-tilt can be used to adjust the shooting angle of the photographing device 20, and the unmanned aerial vehicle 10 can ensure stable movement without drift. The control terminal 30 may be used to control the movement of the unmanned aerial vehicle 10. The control terminal 30 can also obtain the shooting picture returned by the photographing device 20 for the user to view. Further, the control terminal 30 may also acquire a user's control instruction for the photographing device 20 and transmit the control instruction to the photographing device. The control instruction may be, for example, an instruction to control the zoom magnification of the zoom lens of the photographing device 20, or an instruction to control the shooting range of the zoom lens of the photographing device 20, or the like.
The unmanned aerial vehicle 10 may be any of various types of UAVs (Unmanned Aerial Vehicles), such as a quad-rotor UAV, a hexa-rotor UAV, and the like. The pan-tilt it carries may be a three-axis pan-tilt, i.e., the attitude of the pan-tilt can be controlled about the pitch axis, the roll axis, and the heading axis, so as to determine the corresponding attitude of the pan-tilt and enable the photographing device 20 mounted on it to accomplish the corresponding photographing task.
In the embodiment of the present disclosure, the unmanned aerial vehicle 10 may establish a communication connection with the control terminal 30 through a wireless connection manner (for example, based on WiFi or radio frequency communication). The control terminal 30 may be a controller with a joystick and a display screen that controls the unmanned aerial vehicle 10 through joystick input. The control terminal 30 may also be an intelligent device such as a smartphone or a tablet computer, which may control the unmanned aerial vehicle 10 to fly automatically by configuring a flight trajectory on the user interface (UI), control the unmanned aerial vehicle 10 by a motion-sensing method, or record the flight trajectory of the unmanned aerial vehicle 10 in advance during flight and then control the unmanned aerial vehicle 10 to fly automatically along the recorded trajectory.
In the embodiment of the present disclosure, the photographing device 20 may also establish a communication connection with the control terminal 30 through a wireless connection manner (e.g., a wireless connection manner based on WiFi or radio frequency communication, etc.). For example, the photographing device 20 establishes a wireless communication connection with the control terminal 30 through a wireless communication module of the movable platform. For another example, the camera 20 may establish a communication connection with the control terminal 30 by itself, wherein in some embodiments, the camera 20 is a separate camera device, such as a motion camera. The control terminal 30 may install application software for controlling the photographing apparatus 20, through which a user can view a photographing screen returned by the photographing apparatus in a display interface of the control terminal 30, and provide an interactive interface of the user with the control terminal 30 and the photographing apparatus 20.
The camera 20 may include a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, an N-type metal-oxide-semiconductor (NMOS) sensor, a Live MOS sensor, or another type of image sensor. Specifically, the sensor may be part of an imaging device (e.g., a camera) mounted on the unmanned aerial vehicle, and the imaging device may be mounted on the unmanned aerial vehicle through a pan-tilt, which can adjust the attitude angle of the photographing device 20.
The image data captured by the camera 20 may be stored in a data storage device. The data storage device may be based on semiconductor, magnetic, optical, or any other suitable technology, and may include flash memory, USB drives, memory cards, solid state drives, hard disk drives, floppy disks, optical disks, magnetic disks, etc. For example, the data storage device may include a removable storage device that is removably attachable to the imaging device, such as a memory card of any suitable format, e.g., a PC Card, CompactFlash card, SmartMedia (SM) card, Memory Stick, Memory Stick Duo, Memory Stick PRO Duo, mini memory card, MultiMediaCard (MMC), micro MultiMediaCard, MMCmicro card, PS2 card, SD card, SxS memory card, UFS card, miniSD card, microSD card, xD card, iStick card, serial flash module, NT card, XQD card, and the like. The data storage device may also include an external hard disk drive, optical disk drive, magnetic disk drive, floppy disk drive, or any other suitable storage device connectable to the imaging device.
The image captured by the photographing device 20 may be transmitted to the control terminal 30 through the wireless communication module. In some possible embodiments, the image data may be compressed or otherwise processed before being transmitted by the wireless communication module. In other embodiments, the image data may not be compressed or otherwise processed prior to transmission. The transmitted image data may be displayed on the control terminal 30 so that a user operating the user terminal may view the image data and/or interact with the control terminal 30 based on the image data.
Camera 20 may pre-process the captured image data. The pre-processing of the image data may be performed by hardware, software, or a combination thereof. The hardware may include a Field-Programmable Gate Array (FPGA), or the like. In particular, preprocessing may be performed on unprocessed image data after image data is captured for outlier removal, contrast enhancement, scale-space representation, and the like.
Fig. 2 is an application scenario of an image processing method, an image processing apparatus, and an image processing system according to another embodiment of the present disclosure.
As shown in fig. 2, the photographing device 20, the control terminal 30, and the posture adjustment device 40 may be included in the scene. The posture adjustment device 40 may be a device, such as a pan-tilt, for adjusting the posture of the photographing device 20. The control terminal 30 may be an electronic device such as a mobile phone, a tablet computer, a notebook, a desktop, or a remote controller, so as to realize remote control of the posture adjustment device 40 and the photographing device 20. For example, the photographing device 20 may be mounted on the posture adjustment device 40, which is a pan-tilt. The photographing device 20 may include a zoom lens to perform photographing tasks. The zoom lens may be a lens corresponding to a conventional photographing function of a camera. For example, the zoom lens may be used to obtain a complete picture of an environment at a low focal length and high-definition details at a high focal length, and the posture adjustment device 40 may be used to adjust the shooting angle of the photographing device 20 and to make its movement smooth and drift-free. The control terminal 30 may be used to control the movement of the posture adjustment device 40. The control terminal 30 can also obtain the shooting picture returned by the photographing device 20 for the user to view. Further, the control terminal 30 may also acquire a user's control instruction for the photographing device 20 and transmit the control instruction to the photographing device. The control instruction may be, for example, an instruction to control the zoom magnification of the zoom lens of the photographing device 20, an instruction to control the shooting range of the zoom lens of the photographing device 20, or an instruction to view a clear image of a local area, or the like.
A security scene is taken as an example. The user can set a monitoring task at the control terminal 30, which may include a task of acquiring images of a designated area over a designated time period using multiple focal lengths. For example, an image of the scene shown in fig. 2 is acquired at 9 a.m. (e.g., an image acquired at a low focal length), together with a closer partial image of the region in the image where the monitored object is located (e.g., an image acquired at a high focal length). This facilitates status monitoring and the like by the user based on the images acquired by the photographing device 20. It is to be understood that the application is not limited to the security scene, in which shooting covers a fixed range; shooting over a moving range, for example shooting different scenes from a movable platform, is also possible.
In addition, the user can acquire the image of the current time point and the specific area at any time by using various focal lengths. For example, when the user finds that there is a possibility of an abnormality in the scene shown in fig. 2, the user may control the imaging device 20 to capture an area where there is a possibility of an abnormality in time to perform state tracking or the like.
In the embodiment of the present disclosure, the photographing device 20 may also establish a communication connection with the control terminal 30 through a wired connection manner or a wireless connection manner (e.g., a wireless connection manner based on WiFi or radio frequency communication). The control terminal 30 may install application software for controlling the photographing apparatus 20, through which a user can view a photographing screen returned by the photographing apparatus in a display interface of the control terminal 30, and provide an interactive interface of the user with the control terminal 30 and the photographing apparatus 20. The camera 20 includes an image sensor to capture image information.
Next, an image processing method will be exemplarily described with reference to fig. 3 to 17.
Fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 3, the image processing method may include operations S301 to S305. It should be noted that these operations may be performed on a variety of electronic devices with computing capabilities, such as at least one of an unmanned aerial vehicle, a camera, a pan-tilt camera, a control terminal, and the like. A pan-tilt camera is a device comprising a pan-tilt and a camera, where the pan-tilt and the camera are connected in a non-quick-release manner.
A system having a pan-tilt camera (shooting end) and a control terminal (control end) is described below as an example. In one embodiment, operations S301 to S305 are performed on the pan-tilt camera. In another embodiment, operations S301 to S305 are performed on the control terminal. In yet another embodiment, at least a portion of operations S301 to S305 is performed on the pan-tilt camera and the rest on the control terminal. In a scenario where some of operations S301 to S305 are performed on different electronic devices, these devices may exchange, by communication, the data required for the operations, user operations, intermediate operation results, and final operation results.
In the process of performing the operations S301 to S305, data, user operations, and the like required to be used may be collected by a specific sensor, for example, image data is collected by an image sensor, user operations are collected by a touch screen, and the like.
In operation S301, a first maximum viewing angle region corresponding to a first focal length at a current attitude angle is obtained, and a second maximum viewing angle region corresponding to a second focal length is obtained.
In the present embodiment, the second maximum viewing angle region corresponding to the second focal length is determined based on the maximum rotatable angle of the attitude adjustment device (e.g., a pan-tilt). The first maximum viewing angle region corresponding to the first focal length at the current attitude angle is associated with the pose of the camera. For example, a camera carried by an unmanned aerial vehicle is mounted on it through a pan-tilt, and the pose of the camera is adjusted by the pan-tilt.
In one embodiment, when the pan-tilt is normally mounted in a downward-hanging position, the deflection range of its pitch angle includes a first range, such as (-90, 30), and the deflection range of its yaw angle includes a second range, such as (-300, 300). In order to avoid the 'mi' (米)-shaped arrangement of pictures that results when the pan-tilt points the camera straight down, the adjustable ranges of the yaw angle and the pitch angle of the pan-tilt when shooting at the second focal length need to be limited. For example, the maximum adjustable range may be determined by the maximum rotation angle of the pan-tilt.
Fig. 4 is a schematic view of the range of the second maximum viewing angle region corresponding to the second focal length in the vertical direction according to the embodiment of the disclosure. As shown in fig. 4, if the deflection range set for the pitch angle of the pan-tilt at the second focal length includes the first range (MIN, MAX), it may be determined that the second maximum viewing angle region corresponding to the second focal length includes:

(MIN - FOV_v2 / 2, MAX + FOV_v2 / 2)

where FOV_v2 is the vertical size of the viewing angle region corresponding to the second focal length. Accordingly, if the deflection range set for the pitch angle of the pan-tilt at the first focal length includes the second range (MIN, MAX) (since photographing at the first focal length and at the second focal length can be controlled by the same pan-tilt, the first range and the second range may be the same), it may be determined that the first maximum viewing angle region corresponding to the first focal length includes:

(MIN - FOV_v1 / 2, MAX + FOV_v1 / 2)

where FOV_v1 is the vertical size of the viewing angle region corresponding to the first focal length.

At the current attitude angle (cur_pitch), the first maximum viewing angle region corresponding to the first focal length can be represented as:

(cur_pitch - FOV_v1 / 2, cur_pitch + FOV_v1 / 2)

That is, the first maximum viewing angle region corresponding to the first focal length at the current attitude angle may be the first viewing angle region corresponding to the first focal length at the current attitude angle, and the camera may capture an image of this first viewing angle region at the first focal length.
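Purely for illustration, this computation can be sketched as follows; this is a minimal sketch assuming angles in degrees, and all function and variable names as well as the example values are illustrative rather than part of the disclosure:

```python
# Illustrative sketch of the vertical viewing angle regions described above.
# Assumptions: angles in degrees; (pitch_min, pitch_max) is the pitch
# deflection range of the pan-tilt at the given focal length; fov_v is the
# vertical size of the viewing angle region at that focal length.

def max_viewing_angle_region(pitch_min, pitch_max, fov_v):
    # A camera pitched at angle p sees (p - fov_v/2, p + fov_v/2), so the
    # maximum region over the whole pitch range extends half a viewing
    # angle past each pitch limit.
    return (pitch_min - fov_v / 2.0, pitch_max + fov_v / 2.0)

def viewing_angle_region_at(cur_pitch, fov_v):
    # First viewing angle region at the current attitude angle.
    return (cur_pitch - fov_v / 2.0, cur_pitch + fov_v / 2.0)

# Example with assumed values: pitch range (-90, 30), FOV_v1 = 60, FOV_v2 = 10.
second_region = max_viewing_angle_region(-90.0, 30.0, 10.0)  # (-95.0, 35.0)
first_region = viewing_angle_region_at(-30.0, 60.0)          # (-60.0, 0.0)
```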
It should be noted that the second focal length may be any one of the following: a fixed focal length (e.g., the focal length of a camera having a fixed-focus lens), a preset focal length (e.g., the focal length specified in a preset shooting task), a focal length determined by a focal length algorithm (e.g., determined based on the user's historical focal length data), a focal length determined based on a sensor, and a user-selected focal length (e.g., a focal length selection component is displayed on the display interface, and the user selects a focal length by clicking a button, sliding a focal length selection bar, entering text, etc.). The focal length determined based on a sensor includes a focal length determined based on distance information, the distance information being determined by a laser rangefinder. For example, a laser rangefinder may be provided on the camera, the pan-tilt, or the unmanned aerial vehicle to determine the applicable focal length.
In operation S303, the first maximum viewing angle region and the second maximum viewing angle region are compared. For example, whether the first maximum viewing angle region includes the second maximum viewing angle region or whether the second maximum viewing angle region includes the first maximum viewing angle region is compared. For another example, an upper limit of the first maximum viewing angle region and an upper limit of the second maximum viewing angle region are compared, a lower limit of the first maximum viewing angle region and a lower limit of the second maximum viewing angle region are compared, a left limit of the first maximum viewing angle region and a left limit of the second maximum viewing angle region are compared, and a right limit of the first maximum viewing angle region and a right limit of the second maximum viewing angle region are compared.
In operation S305, the selectable range of the region to be processed in the first image is determined according to the comparison result, so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length and the second focal length is greater than the first focal length.
In this embodiment, a selectable range of the region to be processed in the first image corresponding to the first maximum angle-of-view region may be determined based on the comparison result, and a region in the first image corresponding to the selectable range may be photographed based on the second focal length.
In one embodiment, if the first maximum viewing angle region corresponding to the first focal length is included within the second maximum viewing angle region corresponding to the second focal length, it is determined that the entire first maximum viewing angle region can be photographed at the second focal length. That is, the upper and lower boundaries of the selectable range are the upper and lower boundaries of the first maximum viewing angle region, and the selectable range covers the normalized interval (0.0, 1.0) (i.e., from 0% to 100% of the first image between its upper and lower boundaries).
In another embodiment, if the upper boundary of the first maximum viewing angle area exceeds the upper boundary of the second maximum viewing angle area, the upper limit of the selectable range needs to be calculated.
In another embodiment, if the lower boundary of the first maximum viewing angle region exceeds the lower boundary of the second maximum viewing angle region, the lower limit of the selectable range needs to be calculated.
In another embodiment, if the left boundary of the first maximum viewing angle area exceeds the left boundary of the second maximum viewing angle area, the left limit of the selectable range needs to be calculated.
In another embodiment, if the right boundary of the first maximum viewing angle area exceeds the right boundary of the second maximum viewing angle area, the right limit of the selectable range needs to be calculated.
In another embodiment, in order to enable the user to visually see the relationship between the first maximum viewing angle region and the second maximum viewing angle region and thereby select a suitable region to be processed, when the method is applied at the control end of the shooting end, the method may further include displaying at least one of the boundaries of the selectable range. For example, one or more boundaries of the selectable range may be displayed in the display interface. For another example, when a boundary of the region to be processed exceeds the selectable range, the exceeded boundary of the selectable range may be displayed in the display interface. For another example, the boundary of the region to be processed that exceeds the selectable range may be displayed in the display interface. For another example, the non-selectable range and the selectable range may be displayed differently, for example with different transparency or color.
Fig. 5-10 are schematic diagrams of display interfaces of a control terminal provided by an embodiment of the present disclosure.
As shown in fig. 5, determining the selectable range of the region to be processed in the first image according to the comparison result may include determining that the selectable range of the region to be processed in the first image is an arbitrary region in the first image if it is determined that the second maximum viewing angle region S2 includes the first maximum viewing angle region S1 according to the comparison result.
For example, if cur_pitch + FOV_v1 / 2 <= MAX + FOV_v2 / 2, the upper boundary lt_point_y is 0.0. If cur_pitch - FOV_v1 / 2 >= MIN - FOV_v2 / 2, the lower boundary rb_point_y is 1.0.
As shown in fig. 6, determining the selectable range of the region to be processed in the first image according to the comparison result may include determining that the selectable range of the region to be processed in the first image is the overlapping area of the first maximum viewing angle region and the second maximum viewing angle region if it is determined according to the comparison result that the second maximum viewing angle region S2 includes only part of the first maximum viewing angle region S1. In fig. 6, the shaded portion is this overlapping area.
As shown in fig. 7, the current attitude angle is the current pitch angle, and determining the selectable range of the region to be processed in the first image according to the comparison result may include: if it is determined according to the comparison result that the first upper limit of the first maximum viewing angle region S1 exceeds the second upper limit of the second maximum viewing angle region S2, determining the selectable upper limit (upper boundary) of the selectable range in the first maximum viewing angle region based on the first upper limit, the second upper limit, and the vertical size of the viewing angle region corresponding to the first focal length.

For example, if cur_pitch + FOV_v1 / 2 > MAX + FOV_v2 / 2, then the upper boundary is:

lt_point_y = ((cur_pitch + FOV_v1 / 2) - (MAX + FOV_v2 / 2)) / FOV_v1
As shown in fig. 8, the current attitude angle is the current pitch angle, and determining the selectable range of the region to be processed in the first image according to the comparison result may include: if it is determined according to the comparison result that the second lower limit of the second maximum viewing angle region exceeds the first lower limit of the first maximum viewing angle region, determining the selectable lower limit (lower boundary) of the selectable range in the first maximum viewing angle region based on the second lower limit, the first lower limit, and the vertical size of the viewing angle region corresponding to the first focal length.

For example, if MIN - FOV_v2 / 2 > cur_pitch - FOV_v1 / 2, then the lower boundary is:

rb_point_y = 1 - ((MIN - FOV_v2 / 2) - (cur_pitch - FOV_v1 / 2)) / FOV_v1
As shown in fig. 9, the current attitude angle is the current yaw angle, and determining the selectable range of the region to be processed in the first image according to the comparison result may include: if it is determined according to the comparison result that the first left limit of the first maximum viewing angle region exceeds the second left limit of the second maximum viewing angle region, determining the selectable left limit of the selectable range in the first maximum viewing angle region based on the first left limit, the second left limit, and the horizontal size of the viewing angle region corresponding to the first focal length.

For example, the algorithm for the left limit of the selectable range may follow the algorithm described above for the lower limit of the selectable range, replacing the vertical size FOV_v1 of the viewing angle region corresponding to the first focal length with its horizontal size FOV_h1, and replacing FOV_v2 with the horizontal size FOV_h2 of the viewing angle region corresponding to the second focal length.
As shown in fig. 10, the current attitude angle is the current yaw angle, and determining the selectable range of the region to be processed in the first image according to the comparison result may include: if it is determined according to the comparison result that the second right limit of the second maximum viewing angle region exceeds the first right limit of the first maximum viewing angle region, determining the selectable right limit of the selectable range in the first maximum viewing angle region based on the second right limit, the first right limit, and the horizontal size of the viewing angle region corresponding to the first focal length.

For example, the algorithm for the right limit of the selectable range may follow the algorithm described above for the upper limit of the selectable range, replacing the vertical size FOV_v1 of the viewing angle region corresponding to the first focal length with its horizontal size FOV_h1, and replacing FOV_v2 with the horizontal size FOV_h2 of the viewing angle region corresponding to the second focal length.
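Combining the cases of figs. 5 to 10, a minimal sketch of the normalized selectable-range computation in the vertical direction might look as follows, assuming the coordinate convention above (y = 0.0 at the top of the first image, y = 1.0 at the bottom; names and the clamping behavior are illustrative assumptions):

```python
# Illustrative sketch of the selectable-range boundaries derived above.
# Assumptions: angles in degrees; y runs from 0.0 (top) to 1.0 (bottom).

def selectable_vertical_range(cur_pitch, fov_v1, pitch_min, pitch_max, fov_v2):
    top1 = cur_pitch + fov_v1 / 2.0       # upper limit of first max region
    bottom1 = cur_pitch - fov_v1 / 2.0    # lower limit of first max region
    top2 = pitch_max + fov_v2 / 2.0       # upper limit of second max region
    bottom2 = pitch_min - fov_v2 / 2.0    # lower limit of second max region
    lt_point_y = max(0.0, (top1 - top2) / fov_v1)              # figs. 5 and 7
    rb_point_y = min(1.0, 1.0 - (bottom2 - bottom1) / fov_v1)  # figs. 5 and 8
    return lt_point_y, rb_point_y

# The horizontal limits (figs. 9 and 10) follow the same pattern with the
# current yaw angle, the yaw deflection range, and FOV_h1 / FOV_h2.
```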
In another embodiment, after determining the selectable range of the region to be processed in the first image according to the comparison result, the method may further include the following operations.
First, the region to be processed is determined, wherein the second maximum viewing angle region includes the region to be processed. For example, the region to be processed input by the user may be received at the control terminal, or the region to be processed may be determined by an image recognition algorithm at the photographing device or the control terminal. The region to be processed should be included in the second maximum viewing angle region, so as to avoid the situation in which the plurality of second images obtained by photographing the region to be processed at the second focal length cannot cover the complete image of the region to be processed (for example, forming a 'mi' (米)-shaped arrangement after stitching).
In one embodiment, determination of the region to be processed by user input is taken as an example. When the user inputs the region to be processed at the control terminal, the control terminal, or the photographing device (to which the control terminal sends the user-input region), needs to determine whether the candidate region to be processed input by the user exceeds the selectable range. The region to be processed may be a box-selected area input by the user on the display interface of the control terminal, an area defined by a plurality of coordinates input by the user, or an area defined by a plurality of points input by the user.
For example, before determining the region to be processed, the method may further include the following operations. The control terminal or the photographing device may acquire a candidate region to be processed (e.g., a person image region or a scenic spot image region obtained through an algorithm, or a key monitoring region determined according to a preset task, etc.). Accordingly, determining the region to be processed includes: if it is determined that the candidate region to be processed does not exceed the selectable range, determining the candidate region to be processed as the region to be processed. In addition, in order to help the user input a suitable candidate region to be processed, for example to avoid the candidate region exceeding the selectable range, prompt information may be output at the control terminal; for example, the selectable range is displayed on the display interface of the control terminal as the prompt information. After acquiring the candidate region to be processed, the method may further include: if it is determined that the candidate region to be processed exceeds the selectable range, outputting prompt information at the control terminal. For example, when the candidate region to be processed input by the user exceeds the selectable range, prompt information is output on the control terminal, including but not limited to image information, text information, sound information, somatosensory information, and the like; for instance, when the lower boundary of the candidate region exceeds the lower limit of the selectable range, the lower limit of the selectable range is displayed in the display interface of the control terminal. It can be understood that, when the selectable range is displayed on the display interface of the control terminal, areas of the first image outside the selectable range cannot be selected, that is, the user is limited to selecting within the selectable range; under this condition, the acquired candidate region to be processed is the region to be processed, and no further judgment of whether the candidate exceeds the selectable range is necessary.
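The check described above might be sketched as follows (the representation of regions as normalized boxes, and all names, are assumptions for illustration):

```python
# Illustrative check of a candidate region to be processed against the
# selectable range. Regions are assumed to be normalized (x0, y0, x1, y1)
# boxes in first-image coordinates, origin at the top-left corner.

def accept_candidate(candidate, selectable):
    cx0, cy0, cx1, cy1 = candidate
    sx0, sy0, sx1, sy1 = selectable
    if cx0 < sx0 or cy0 < sy0 or cx1 > sx1 or cy1 > sy1:
        return None  # exceeded: the control terminal outputs prompt information
    return candidate  # accepted: the candidate becomes the region to be processed
```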
In one specific implementation, taking determination of the region to be processed based on an image recognition algorithm as an example, this may include: performing image recognition on the first image, determining an object to be processed, and taking the area where the object to be processed is located as the region to be processed. In the image recognition process, any of a number of suitable image recognition techniques may be adopted, such as recognition methods based on CAD-like object models (e.g., edge detection, primal sketch), appearance-based recognition methods (e.g., edge matching, divide-and-conquer search, greyscale matching, gradient matching, histograms of receptive field responses, large modelbases), feature-based recognition methods (e.g., interpretation trees, hypothesize-and-test, pose consistency, pose clustering, invariance, geometric hashing, scale-invariant feature transform (SIFT), speeded up robust features (SURF)), genetic algorithms, and the like.
In another embodiment, determination of the region to be processed based on a preset task and an image recognition algorithm is taken as an example. Referring to the security scene shown in fig. 2, the user may set a preset task in advance through the control terminal, such as an image capture task with a preset period. When the preset task is triggered, the photographing device is controlled to photograph based on the preset task. The preset task may include various task parameters, such as a task period, shooting pose sequence information (which may be input by the user, or automatically generated by the control terminal according to a key monitoring region and focal length information input by the user), alarm-related information, and the like. The shooting pose sequence information may include the number of shots, and the pose information, focal length information, sensitivity information, shutter information, and so on for each shot. Alarm-related information includes, but is not limited to, alarm triggering conditions, alarm modes, etc. The recognition result obtained by the image recognition algorithm is compared with the alarm triggering condition to determine whether to raise an alarm. For example, an alarm may be triggered when image recognition identifies the presence of flammable or explosive materials at a monitored location. As another example, in the field of equipment state monitoring, an object such as a meter or a fastener is usually fixed at a predetermined position; the state information of the object is determined based on the preset task and the image recognition algorithm, and whether to raise an alarm is then determined based on that state information.
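Purely as an illustration of the task parameters listed above, a preset task might be represented by a structure such as the following (every field name is an assumption, not an actual interface):

```python
from dataclasses import dataclass

# Illustrative shape of a preset monitoring task; every name here is an
# assumption used to picture the task parameters above.
@dataclass
class PresetTask:
    period_s: int         # task period, e.g. capture once per hour
    pose_sequence: list   # per-shot (pitch, yaw, focal_length, iso, shutter_s)
    alarm_condition: str  # triggering condition matched against recognition results
    alarm_mode: str = "notify_control_terminal"  # alarm mode

task = PresetTask(period_s=3600,
                  pose_sequence=[(-30.0, 10.0, 100.0, 100, 0.002)],
                  alarm_condition="flammable material detected")
```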
Then, the photographing-related information is determined based on the second focal length and the region to be processed, so that at least one second image of the region to be processed at the second focal length is acquired based on the photographing-related information.
In this embodiment, the photographing-related information may include shooting pose sequence information, camera shooting-related information, camera type information, and the like. The shooting pose sequence information may include information associated with the pose adjustment device, including but not limited to at least one of attitude angle sequence information, time sequence information, and the like. The camera shooting-related information includes, but is not limited to, at least one of exposure duration sequence information, sensitivity sequence information, denoising mode sequence information, output format sequence information, compression mode sequence information, and the like. (It should be noted that each sequence in the camera shooting-related information may also be a single value; for example, the shooting parameters may be the same for every shooting pose in the shooting pose sequence.) For example, the shooting pose sequence information is determined based on photographing combination information, which in turn is determined based on the second focal length and the region to be processed.
Specifically, the photographing combination information may include the number of times of photographing and the region to be photographed for each photographing. Accordingly, determining the photographing-related information based on the second focal length and the region to be processed includes: determining the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed, and determining the shooting pose sequence information based on the regions to be photographed. Each shooting pose in the shooting pose sequence corresponds to one region to be photographed, so that a second image corresponding to that region is captured at the second focal length at each shooting pose.
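For illustration, each region to be photographed may be mapped to one shooting pose roughly as follows; this uses a linear small-angle approximation with illustrative names, and an actual implementation would account for the lens projection:

```python
# Illustrative mapping from the normalized center (cx, cy) of a region to be
# photographed in the first image to a gimbal pose for the second image.
# Assumes a linear (small-angle) projection; a real lens model would differ.

def region_center_to_pose(cx, cy, cur_pitch, cur_yaw, fov_h1, fov_v1):
    yaw = cur_yaw + (cx - 0.5) * fov_h1      # right of center -> larger yaw
    pitch = cur_pitch + (0.5 - cy) * fov_v1  # above center -> larger pitch
    return pitch, yaw

# Assumed example: three regions to be photographed become three poses.
pose_sequence = [region_center_to_pose(cx, cy, -30.0, 0.0, 80.0, 60.0)
                 for cx, cy in [(0.25, 0.25), (0.75, 0.25), (0.5, 0.75)]]
```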
In a specific embodiment, determining the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed may include the following operations. First, a viewing angle region corresponding to the second focal length is determined based on the image sensor size and the second focal length. Then, the number of times of photographing and the region to be photographed for each photographing are determined based on the region to be processed and the viewing angle region corresponding to the second focal length. Wherein the region formed by the plurality of regions to be photographed includes a region to be processed.
Fig. 11 is a schematic diagram illustrating division of a to-be-processed area according to an embodiment of the disclosure.
As shown in fig. 11, the size of the image sensor of the camera is fixed, and at the second focal length in a specific posture (the scene area covered by an image obtained at the long focal length is smaller than that covered at the short focal length), the size of the image that the image sensor can capture on the display interface is: width W1, length L1. The size of the area to be processed on the display interface is: width W2, length L2. The area to be processed may thus be divided into

⌈W2 / W1⌉ × ⌈L2 / L1⌉

regions, where ⌈·⌉ denotes rounding up. Thus, the number of times of photographing is determined so that the plurality of second images photographed at the second focal length can completely cover the image of the region to be processed in the first image photographed at the first focal length. Further, each of the divided regions may be regarded as one region to be photographed.
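A minimal sketch of this division, assuming W1, L1, W2 and L2 are measured in the same display-interface units (the function name is illustrative):

```python
import math

def tile_counts(w1: float, l1: float, w2: float, l2: float) -> tuple:
    """Number of regions: ceil(W2/W1) across the width by ceil(L2/L1) along the length."""
    return math.ceil(w2 / w1), math.ceil(l2 / l1)

# e.g. a 900 x 500 region covered by 400 x 300 telephoto frames -> (3, 2)
print(tile_counts(400.0, 300.0, 900.0, 500.0))
```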
To ensure that the plurality of second images taken at the second focal length completely cover the image of the region to be processed in the first image taken at the first focal length, a certain redundant area may be provided for each region to be photographed. For example, determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length may include: determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed, the viewing angle region corresponding to the second focal length, and the image overlap ratio between regions to be photographed. The image overlap ratio may be a preset value, and may be defined with respect to one or more side lengths (for example, at least one of a width overlap ratio and a length overlap ratio) or with respect to an area (for example, an area overlap ratio).
Fig. 12 is a schematic view of a region to be photographed according to an embodiment of the present disclosure.
As shown in fig. 12, the illustrated region to be processed is divided into 9 regions to be photographed, with a certain image overlap ratio between adjacent regions to be photographed. Since the length of a region to be photographed in the horizontal direction may differ from its length in the vertical direction, a horizontal image overlap ratio (overlap_x) and a vertical image overlap ratio (overlap_y) may be set separately. Of course, the same image overlap ratio may also be set for the horizontal and vertical directions of the region to be photographed. In other embodiments, if the region to be photographed is square, the horizontal and vertical overlap ratios may be set equal.
In another embodiment, taking a camera having two lens groups as an example, if the first focal length is provided by the first lens group and the second focal length is provided by the second lens group, a baseline deviation is always present after assembly and needs to be corrected: the attitude adjustment device is adjusted so that the optical axis of the second lens group is aligned with the original optical axis of the first lens group before capturing an image of a designated area. Specifically, determining the shooting pose sequence information based on the region to be photographed may include: determining, for each shot, a shooting pose corresponding to the region to be photographed based on the current pose of the camera at the first focal length and the second-focal-length angular position deviation, and taking the set of all determined shooting poses as the shooting pose sequence information.
In another embodiment, the baseline deviation and the image overlap ratio can be considered at the same time to improve the accuracy of the shooting posture of the second image. For example, determining the shooting pose sequence information based on the region to be photographed may include: and determining a shooting pose corresponding to the region to be photographed which is photographed each time based on the current pose of the camera under the first focal length, the angular position deviation of the second focal length and the image overlapping proportion between the visual angle region corresponding to the second focal length and the region to be photographed, and taking the set of all the determined shooting poses as shooting pose sequence information.
The second-focal-length angular position deviation may be determined based on a first deviation, a second deviation, and a third deviation. Specifically, a first deviation between a specified position of the viewing angle region corresponding to the second focal length and the specified position of the viewing angle region corresponding to the first focal length is determined; the specified position may be, for example, the center point. A second deviation between the specified position of the region to be processed and the specified position of the viewing angle region corresponding to the first focal length is determined. A third deviation between the specified position of the region to be photographed and the specified position of the region to be processed is determined. The second-focal-length angular position deviation between the region to be photographed and the specified position of the viewing angle region corresponding to the first focal length can then be determined based on the first, second, and third deviations. By controlling the pan/tilt head, the camera can thus be offset from the current attitude angle by this angular position deviation, so that the region to be photographed is captured.
The following describes how to adjust the posture of the pan/tilt head to take a picture of the region to be processed.
According to the focal length of the current telephoto camera and the size information of the sensor, the horizontal viewing angle (fov_zx) and the vertical viewing angle (fov_zy) corresponding to the second focal length are calculated using Formulas (1) and (2), respectively:

fov_zx = 2 * arctan(W / (2 * focal_length))    Formula (1)

fov_zy = 2 * arctan(H / (2 * focal_length))    Formula (2)

where W and H are the width and height of the sensor, and focal_length is the second focal length.
Assume that a picture is to be taken of the region to be processed at the second focal length, and that the viewing angle region at the first focal length corresponding to the region to be processed has a horizontal viewing angle fov_wx and a vertical viewing angle fov_wy. To ensure that the images taken at the second focal length (e.g., by the telephoto lens) can completely cover the region to be processed, the conditions shown in Formulas (3) and (4) may be satisfied:

fov_zx + (1 - overlap_x) * fov_zx * (m - 1) ≥ fov_wx    Formula (3)

fov_zy + (1 - overlap_y) * fov_zy * (n - 1) ≥ fov_wy    Formula (4)

where overlap_x is the image overlap ratio in the horizontal direction and overlap_y is the image overlap ratio in the vertical direction (shown with reference to fig. 12, i.e., the length overlap ratio and the width overlap ratio), and m and n are the numbers of pictures to be taken in the horizontal and vertical directions, respectively. In practical use, overlap_x and overlap_y may be user-specified or calculated by an algorithm according to a specific rule.
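The following sketch evaluates Formulas (1) to (4): it derives the telephoto viewing angles from the sensor size and the second focal length, then finds the smallest m and n that satisfy the coverage conditions. The sensor dimensions, focal length, and wide-angle viewing angles used are assumptions for illustration:

```python
import math

def view_angle(sensor_dim_mm: float, focal_length_mm: float) -> float:
    # Formula (1)/(2): fov = 2 * arctan(sensor_dim / (2 * focal_length)), in radians
    return 2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm))

def min_shots(fov_z: float, fov_w: float, overlap: float) -> int:
    # Smallest count satisfying Formula (3)/(4):
    # fov_z + (1 - overlap) * fov_z * (count - 1) >= fov_w
    if fov_z >= fov_w:
        return 1
    return 1 + math.ceil((fov_w - fov_z) / ((1.0 - overlap) * fov_z))

W_mm, H_mm, focal_mm = 36.0, 24.0, 200.0     # assumed sensor size and second focal length
fov_zx = view_angle(W_mm, focal_mm)
fov_zy = view_angle(H_mm, focal_mm)
m = min_shots(fov_zx, math.radians(30.0), overlap=0.1)   # fov_wx assumed 30 degrees
n = min_shots(fov_zy, math.radians(20.0), overlap=0.1)   # fov_wy assumed 20 degrees
```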
To eliminate the baseline deviation described above, the homography matrix between the first lens group and the second lens group at a specific target distance (generally assumed to be infinity) can be calculated by calibrating the intrinsic parameters and the rotation and translation matrices between the two lens groups; from it, the first deviation from the center of the viewing angle region corresponding to the second focal length to the center of the viewing angle region corresponding to the first focal length can be calculated, as expressed by Formulas (5) to (7).
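Formulas (5) to (7) are published only as formula images, so the sketch below is a plausible reconstruction of the step they describe, under the stated assumption of an infinite target distance (the translation term of the homography then drops out). The variable names, the direction of the deviation, and the small-angle pixel-to-angle conversion are all assumptions:

```python
import numpy as np

def first_deviation(K_wide, K_tele, R, img_w, img_h):
    """Map the wide-angle frame centre through the infinite-distance
    homography H = K_tele @ R @ inv(K_wide), then convert the pixel
    displacement of the mapped centre into an angular offset (radians)."""
    H = K_tele @ R @ np.linalg.inv(K_wide)
    centre = np.array([img_w / 2.0, img_h / 2.0, 1.0])
    p = H @ centre
    p = p / p[2]                      # back to inhomogeneous pixel coordinates
    fx, fy = K_tele[0, 0], K_tele[1, 1]
    offset_1x = np.arctan((p[0] - img_w / 2.0) / fx)
    offset_1y = np.arctan((p[1] - img_h / 2.0) / fy)
    return offset_1x, offset_1y
```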
Assuming that the position of the center of the area to be processed is (a, b) and the center of the viewing angle region corresponding to the first focal length is (wide_center_x, wide_center_y), the second deviation (offset_2_x, offset_2_y) of the center of the area to be processed from the center of the viewing angle region corresponding to the first focal length can be calculated using Formulas (8) and (9):

offset_2_x = wide_center_x - a    Formula (8)

offset_2_y = wide_center_y - b    Formula (9)
After m and n are determined, the third deviation (offset_3_x, offset_3_y) of the first region to be photographed, at the upper left corner, with respect to the center of the region to be processed can be calculated.
Fig. 13 is a schematic diagram illustrating an odd number of shots according to an embodiment of the disclosure, and fig. 14 is a schematic diagram illustrating an even number of shots according to an embodiment of the disclosure.
The third deviation (offset_3_x, offset_3_y) can be calculated using Formulas (10) and (11):

offset_3_x = ((m - 1) / 2) * (1 - overlap_x) * fov_zx    Formula (10)

offset_3_y = ((n - 1) / 2) * (1 - overlap_y) * fov_zy    Formula (11)

where (1 - overlap_x) * fov_zx is the horizontal viewing angle with the overlap removed, and (1 - overlap_y) * fov_zy is the vertical viewing angle with the overlap removed.
The deviations are summed to obtain the angular position deviation of the upper-left region to be photographed, among the m × n regions to be photographed, relative to the first lens group in the current pose; the current pose may refer to the camera's current shooting attitude angle. The camera's posture can then be adjusted based on this angular position deviation, so that the camera shoots the upper-left region to be photographed at the second focal length.
Specifically, the angular position deviation of the upper-left region to be photographed relative to the first lens group in the current posture can be determined by Formulas (12) and (13):

offset_x = offset_1_x + offset_2_x + offset_3_x    Formula (12)

offset_y = offset_1_y + offset_2_y + offset_3_y    Formula (13)
For example, the first focal length is provided by the wide-angle camera (including the first lens group), and the second focal length is provided by the telephoto camera (including the second lens group). Assume that the current wide-angle camera position is (yaw_current, pitch_current); then the position (yaw_target(1,1), pitch_target(1,1)) corresponding to the upper-left region to be photographed can be calculated by Formulas (14) and (15):

yaw_target(1,1) = yaw_current + offset_x    Formula (14)

pitch_target(1,1) = pitch_current + offset_y    Formula (15)
After the position (yaw_target(1,1), pitch_target(1,1)) corresponding to the upper-left picture is obtained, the positions of the other regions to be photographed among the m × n regions can be obtained by combining the viewing angles (fov_zx, fov_zy) corresponding to the second focal length with the overlap ratios (overlap_x, overlap_y). For example, they can be calculated by Formulas (16) and (17):

yaw_target(m,n) = yaw_target(1,1) + (1 - overlap_x) * fov_zx * (m - 1)    Formula (16)

pitch_target(m,n) = pitch_target(1,1) + (1 - overlap_y) * fov_zy * (n - 1)    Formula (17)
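Putting Formulas (8) to (17) together, a sketch of the pose-sequence computation might look as follows. Here every deviation, including the second deviation, is assumed to already be expressed as an angle in a common convention, and the signs follow the reconstructed formulas above, so they depend on the pan/tilt axis directions; the function and variable names are illustrative:

```python
def shooting_pose_sequence(yaw_cur, pitch_cur, offset_1,
                           region_center, wide_center,
                           fov_zx, fov_zy, overlap_x, overlap_y, m, n):
    """Sketch of Formulas (8)-(17); all quantities are angles in one unit."""
    # Formulas (8)/(9): second deviation, region centre vs wide-angle centre.
    offset_2x = wide_center[0] - region_center[0]
    offset_2y = wide_center[1] - region_center[1]
    # Formulas (10)/(11): third deviation, upper-left tile vs region centre.
    offset_3x = (m - 1) / 2.0 * (1.0 - overlap_x) * fov_zx
    offset_3y = (n - 1) / 2.0 * (1.0 - overlap_y) * fov_zy
    # Formulas (12)/(13): total angular position deviation.
    offset_x = offset_1[0] + offset_2x + offset_3x
    offset_y = offset_1[1] + offset_2y + offset_3y
    # Formulas (14)/(15): pose for the upper-left region to be photographed.
    yaw_11 = yaw_cur + offset_x
    pitch_11 = pitch_cur + offset_y
    # Formulas (16)/(17): remaining poses, stepped by the overlap-removed FOV.
    return [(yaw_11 + (1.0 - overlap_x) * fov_zx * i,
             pitch_11 + (1.0 - overlap_y) * fov_zy * j)
            for j in range(n) for i in range(m)]
```

A caller would pass offset_1 from the baseline-calibration step above and the m and n obtained from the coverage conditions of Formulas (3) and (4).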
Through the operation, the angular position deviation of each to-be-photographed area relative to the current posture of the camera can be determined, and the shooting pose sequence information is obtained.
In another embodiment, after the shooting pose sequence is determined based on the region to be photographed, the method may further include: controlling the shooting end to shoot, at each shooting pose of the shooting pose sequence information, a second image corresponding to the region to be photographed at the second focal length. It should be noted that this operation may be performed by the control terminal or by the shooting end. For example, a shooting instruction may be sent to the shooting end by the control terminal. As another example, the shooting end may automatically shoot once the shooting pose sequence information is obtained; this is not limited herein.
For example, the shooting end includes a shooting device, and controlling the shooting end to shoot the second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information may include the following operations: controlling, through the pose adjustment device, the shooting device to move in turn to each shooting pose in the shooting pose sequence information, and controlling the shooting device to shoot, at each shooting pose, the second image corresponding to the respective region to be photographed at the second focal length. For example, the pose adjustment device is arranged on a movable platform; the pose adjustment device may be a multi-axis pan/tilt head, and the movable platform may be an unmanned aerial vehicle.
In a particular embodiment, the first and second images are taken by a zoom camera at a first focal length and a second focal length, respectively. Referring to the security scene shown in fig. 2, the camera has a variable focal length lens, and the first image and the second image are obtained by adjusting the focal length of the variable focal length lens and adjusting the posture of the camera.
In another specific embodiment, the first image is captured by a first camera and the second image is captured by a second camera. The focal length of the first shooting device is at least partially different from that of the second shooting device. For example, the first photographing device and the second photographing device are two independent photographing devices and may be disposed on the same pan/tilt head so that the first photographing device and the second photographing device may move synchronously. For another example, the distance between the first camera and the second camera is fixed, and the first camera is disposed on the first pan/tilt and the second camera is disposed on the second pan/tilt, and the first pan/tilt and the second pan/tilt can move synchronously. For another example, the first camera and the second camera are packaged in a single housing and share a single image sensor, and the optical axis of the lens group of the first camera and the optical axis of the lens group of the second camera are parallel.
In another embodiment, at least one of the first camera and the second camera is adjustable in focal length. For example, the first camera is a fixed-focus camera and the second camera is a focus-adjustable camera; or the first camera is a focus-adjustable camera and the second camera is a fixed-focus camera; or both are focus-adjustable cameras. Compared with shooting with a single focus-adjustable device (such as a high-power zoom lens with a large focal length adjustment range), shooting with two devices on a movable platform (such as an unmanned aerial vehicle) helps reduce the shift of the lens group's center of gravity caused by focal length adjustment, and hence the resulting change in, for example, the unmanned aerial vehicle's center of gravity. In addition, the high cost of a high-power zoom lens can be effectively avoided.
In another particular embodiment, the first camera is a wide angle camera and the second camera is a tele camera. The telephoto camera may be a camera with an adjustable focal length. The adjustable focal length range of the tele camera may depend on camera cost, etc. In addition, the adjustable focal length range of the telephoto camera may be smaller than that of the high power zoom lens. For example, the variable focal length range of the telephoto camera is located on the large focal length side of the variable focal length range of the high-power zoom lens.
The first camera described above may include a first lens group, and the second camera may include a second lens group.
In addition, to make it convenient for the user to consult the first image and the second image, after the shooting end is controlled to shoot the second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information, the method may further include: storing the first image and the second image in association. For example, an SD card stores an HTML file together with the first and second images, so that after the SD card is coupled to a personal computer (PC), the first and second images can be displayed on the PC based on the HTML file. The HTML file may include a mapping relationship between the first image and the second image.
Specifically, storing the first image and the second image in association may include the following operations. First, a first mapping relation between the second image and the region to be processed is determined, and a second mapping relation between the region to be processed and the first image is determined. Then, the first image, the area to be processed, the second image, the first mapping relation and the second mapping relation are stored. Here, the region to be processed may be represented as range information or the like, such as a range determined based on a plurality of coordinates.
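As one illustrative realization of this associated storage (the publication mentions an HTML file; a JSON record is used here purely as a sketch, and the field layout is an assumption):

```python
import json

def store_associated(first_image_path, region, second_image_paths, out_path):
    """Persist the first image, the region to be processed (as range
    information, e.g. corner coordinates), the second images, and the
    two mapping relations."""
    record = {
        "first_image": first_image_path,
        "region_to_process": region,  # e.g. [[x0, y0], [x1, y1]]
        # First mapping: each second image -> region to be processed.
        "second_to_region": {p: region for p in second_image_paths},
        # Second mapping: region to be processed -> first image.
        "region_to_first": {"region": region, "first_image": first_image_path},
    }
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)
```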
In one embodiment, the second image is an unprocessed captured image when applied to the capture side. For example, the second image is an original captured image. This facilitates the user to refer to the original photographed image on a terminal such as a computer. It should be noted that, in order to improve the display effect, the original captured image may be subjected to preprocessing such as noise reduction, object recognition, and object region identification.
In a specific embodiment, when applied to a control terminal of the shooting end, the second image is a processed captured image, and its resolution is smaller than that of the unprocessed captured image. For example, the second image may be image-compressed to reduce the storage it occupies, or to reduce the network resources consumed by its transmission. For example, during communication between the unmanned aerial vehicle and the control terminal, the unmanned aerial vehicle transmits a thumbnail of the second image; when the control terminal needs to consult a specific second image, that image is downloaded from the unmanned aerial vehicle. Transmitting the compressed second image reduces the consumption of network resources and improves the transmission rate and the response speed to user instructions. The image compression of the second image may occur after the second image is captured, after a user instruction for viewing the second image is received, or after a user instruction for compressing the second image is received, which is not limited here.
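A minimal sketch of producing such a reduced-resolution second image, assuming Pillow is available (the publication does not name a compression method or parameters):

```python
from PIL import Image

def make_preview(src_path: str, dst_path: str, max_side: int = 640, quality: int = 70):
    """Produce a reduced-resolution, JPEG-compressed preview of a second image."""
    with Image.open(src_path) as img:
        img = img.convert("RGB")              # JPEG has no alpha channel
        img.thumbnail((max_side, max_side))   # in-place downscale, keeps aspect ratio
        img.save(dst_path, format="JPEG", quality=quality)
```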
In addition, after the zoom factor of the zoom lens is adjusted (for example, when the user changes the second focal length via a magnification adjustment component in the display interface of the control terminal), the zoom lens obtains a captured picture at the new zoom factor, and the photographing device can send this picture to the control terminal in real time. The control terminal can then display the picture obtained by the zoom lens, so that the user can visually check whether its definition meets the user's requirements.
In another embodiment, the method may further include the operations of: and if an operation instruction for viewing the second image is received, acquiring an unprocessed shot image corresponding to the second image from the shooting end. Then, an unprocessed captured image corresponding to the second image is displayed. For example, after the photographing device photographs the second image at the second focal length, in order to facilitate the user to confirm the photographing effect, the compressed image of the second image may be sent to the control terminal side by the unmanned aerial vehicle side for presentation. If the user wishes to view the original captured image of the second image, an operation instruction for viewing the second image can be transmitted to the unmanned aerial vehicle side (or a capturing device thereon) through the control terminal side. The unmanned aerial vehicle side transmits an original captured image of the second image to the control terminal side in response to an operation instruction for viewing the second image. Therefore, the information interaction speed between the control terminal side and the unmanned aerial vehicle side can be simultaneously met, and various requirements of the user for the second image can also be met.
In addition, to further improve the convenience of viewing images of the region to be processed, after the shooting end is controlled to shoot, at each shooting pose of the shooting pose sequence, the second images corresponding to the regions to be photographed (there may be a plurality of regions to be photographed) at the second focal length, the method may further include the following operations. First, a third image is acquired, the third image being synthesized based on the second images, for example by an image stitching technique applied to the plurality of second images. Then, the first image and the third image are stored in association. For the overlapping image parts between the stitched second images, interpolation can be used to make the stitched seams more natural.
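A sketch of synthesizing the third image from several second images with OpenCV's high-level stitcher; OpenCV and the scan mode are assumptions, since the publication only says an image stitching technique with interpolation at the overlaps may be used:

```python
import cv2

def synthesize_third_image(second_image_paths, out_path):
    """Stitch the second images into a single third image."""
    images = [cv2.imread(p) for p in second_image_paths]
    stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)  # planar scan mode
    status, third = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    cv2.imwrite(out_path, third)
```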
Next, capturing the first image through the wide-angle lens, capturing the second image through the telephoto lens, and viewing the photographing result are described in conjunction with the display interface of the control terminal.
Fig. 15-17 are schematic diagrams of display images provided by embodiments of the present disclosure.
As shown in fig. 15, the content displayed in the display interface 1501 of the control terminal includes a first image shot at the first focal length in the current posture of the first shooting device. The user selects a region to be processed 1502 (contained within the second maximum viewing angle region corresponding to the second focal length) from the first image and sets the second focal length. The shooting end or the control terminal divides the region to be processed 1502 based on the second focal length, determines four regions to be photographed, and determines the photographing-related information for them. The shooting end can then photograph the four regions to be photographed at the second focal length based on the photographing-related information, to obtain four super-resolution second images. The first image and the second images may then be stored in association, for example on an SD card.
In some possible embodiments, the display interface 1501 also includes a focal length adjustment component 1503, focal length information, and the like, to facilitate user adjustment of the second focal length. The current second focal length (ZOOM) corresponds to 10× zoom. In addition, the display interface 1501 may further display the number of shots and/or the shooting duration of the zoom lens; this shooting information changes as the shooting parameters of the zoom lens change. The picture parameters of the wide-angle lens can be changed correspondingly, so that the lens parameters change in linkage.
As shown in fig. 16, a display interface 1601 of a display of a terminal is provided after a readable storage medium, such as an SD card, is coupled to a terminal device, such as a computer. In displaying the first image, annotation information 1602, such as a range mark corresponding to the region to be processed in fig. 15, text annotation information of the second image stored in association, and the like, may be displayed. Of course, the image identification and format information (dji00001.jpeg) of the first image may also be displayed. The user can view the associated stored second image by clicking on the annotation information 1602 or a specific functional component, e.g., the terminal can determine the second image the user wishes to view based on the HTML file and the annotation information 1602, and display it. In addition, the user can hide the annotation information 1602 by a specific operation to ensure the viewing effect. The display interface 1601 of the display of the terminal may further include an operation button 1603 so that the user may further perform operations such as editing, deleting, and sharing of the first image.
Note that when the user needs to print the first image or the second image, the annotation information may be omitted from printing. For example, when the terminal or the unmanned aerial vehicle side transmits the first image or the second image to the printing apparatus, the HTML file may not be transmitted.
Fig. 17 shows the display interface 1601 of the terminal's display, in which the relationship between the currently displayed second image 1701 and the first image may be displayed. The second image 1701 shown in fig. 17 is the first (1 of 4) of the four second images corresponding to the region to be processed of the first image. The user may likewise edit, delete, share, or otherwise operate on the second image 1701.
The method of the embodiments of the present application having been described in detail, related apparatus for implementing the above aspects is provided below.
Fig. 18 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 18, the image processing apparatus 1800 includes:
one or more processors 1810; and
a computer-readable storage medium 1820 for storing one or more computer programs 1821, the computer programs 1821, when executed by the processor 1810, implement:
obtaining a first maximum visual angle area corresponding to the first focal length under the current attitude angle, and obtaining a second maximum visual angle area corresponding to the second focal length;
comparing the first maximum viewing angle region and the second maximum viewing angle region; and
determining the selectable range of the region to be processed in the first image according to the comparison result so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length;
wherein the second focal length is larger than the first focal length.
In some possible embodiments, the second maximum viewing angle area corresponding to the second focal length is determined based on a maximum rotatable angle of the posture adjustment apparatus.
In some possible embodiments, determining the selectable range of the region to be processed in the first image according to the comparison result includes:
and if the second maximum visual angle area comprises the first maximum visual angle area according to the comparison result, determining that the selectable range of the area to be processed in the first image is any area in the first image.
In some possible embodiments, determining the selectable range of the region to be processed in the first image according to the comparison result includes:
and if the second maximum viewing angle area is determined to comprise part of the first maximum viewing angle area according to the comparison result, determining that the selectable range of the area to be processed in the first image is the overlapping area of the first maximum viewing angle area and the second maximum viewing angle area.
In some possible embodiments, the current attitude angle is a current pitch angle;
determining the selectable range of the region to be processed in the first image according to the comparison result, wherein the step comprises the following steps:
and if the first upper limit of the first maximum visual angle area exceeds the second upper limit of the second maximum visual angle area according to the comparison result, determining that the selectable range is at the selectable upper limit of the first maximum visual angle area based on the first upper limit, the second upper limit and the vertical direction visual angle area corresponding to the first focal length.
In some possible embodiments, the current attitude angle is a current pitch angle;
determining the selectable range of the region to be processed in the first image according to the comparison result, wherein the step comprises the following steps:
and if the second lower limit of the second maximum viewing angle area is determined to exceed the first lower limit of the first maximum viewing angle area according to the comparison result, determining the selectable lower limit of the selectable range in the first maximum viewing angle area based on the second lower limit, the first lower limit and the vertical direction viewing angle area corresponding to the first focal length.
In some possible embodiments, the current attitude angle is a current yaw angle;
determining the selectable range of the region to be processed in the first image according to the comparison result, wherein the step comprises the following steps:
and if the first left limit of the first maximum visual angle area exceeds the second left limit of the second maximum visual angle area according to the comparison result, determining the selectable left limit of the selectable range in the first maximum visual angle area based on the first left limit, the second left limit and the horizontal visual angle area corresponding to the first focal length.
In some possible embodiments, the current attitude angle is a current yaw angle;
determining the selectable range of the region to be processed in the first image according to the comparison result, wherein the step comprises the following steps:
and if the second right limit of the second maximum visual angle area is determined to exceed the first right limit of the first maximum visual angle area according to the comparison result, determining the selectable right limit of the selectable range in the first maximum visual angle area based on the second right limit, the first right limit and the horizontal visual angle area corresponding to the first focal length.
In some possible embodiments, when applied to a control terminal of the shooting end, the computer program 1821, when executed by the processor 1810, is further configured to:
at least one of the plurality of boundaries of the selectable range is displayed.
In some possible embodiments, after determining the selectable range of the region to be processed in the first image according to the comparison result, the computer program 1821, when executed by the processor 1810, is further configured to implement:
determining a region to be processed, wherein the second maximum viewing angle region comprises the region to be processed; and
and determining photographing associated information based on the second focal length and the to-be-processed area so as to acquire at least one second image of the to-be-processed area under the second focal length based on the photographing associated information.
In some possible embodiments, the computer program 1821, when executed by the processor 1810, is further for, prior to determining the area to be processed, implementing:
acquiring a candidate region to be processed;
determining a region to be processed, comprising:
and if the candidate to-be-processed area is determined not to exceed the selectable range, determining the candidate to-be-processed area as the to-be-processed area.
In some possible embodiments, after acquiring the candidate regions to be processed, the computer program 1821, when executed by the processor 1810, is further configured to implement:
and if the candidate to-be-processed area is determined to be beyond the selectable range, outputting prompt information.
In some possible embodiments, determining the area to be processed includes:
and determining a region to be processed based on an image recognition algorithm.
In some possible embodiments, determining the region to be processed based on the image recognition algorithm comprises:
carrying out image recognition on the first image, and determining an object to be processed; and
and taking the area where the object to be processed is located as the area to be processed.
In some possible embodiments, determining the region to be processed based on the image recognition algorithm comprises:
and determining a region to be processed from the first image based on a preset task and an image recognition algorithm.
In some possible embodiments, the photographing related information includes: and shooting pose sequence information.
In some possible embodiments, the shooting pose sequence information is determined based on shooting combination information, which is determined based on the second focal length and the region to be processed.
In some possible embodiments, the shooting combination information includes: the number of times of photographing and the region to be photographed for each photographing;
determining the photographing related information based on the second focal length and the region to be processed includes:
determining the photographing times and the region to be photographed for each photographing based on the second focal length and the region to be processed; and
determining shooting pose sequence information based on a region to be shot;
and each shooting pose in the shooting pose sequence information corresponds to one region to be shot, so that a second image corresponding to the region to be shot is shot at a second focal length under each shooting pose of the shooting pose sequence information.
In some possible embodiments, determining the number of times of taking pictures and the area to be taken for each taking picture based on the second focal length and the area to be processed includes:
determining a viewing angle area corresponding to the second focal length based on the image sensor size and the second focal length;
determining the photographing times and the region to be photographed each time based on the region to be processed and the visual angle region corresponding to the second focal length;
wherein the region formed by the plurality of regions to be photographed includes a region to be processed.
In some possible embodiments, determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length includes:
and determining the photographing times and the region to be photographed each time based on the region to be processed, the view angle region corresponding to the second focal length and the image overlapping proportion between the regions to be photographed.
In some possible embodiments, determining the shooting pose sequence information based on the region to be photographed includes:
and determining a shooting pose corresponding to the area to be shot for each shooting based on the current pose of the camera under the first focal length and the angular position deviation of the second focal length, and taking the set of all the determined shooting poses as shooting pose sequence information.
In some possible embodiments, the second focal length angular position deviation is determined by:
determining a first deviation between the specified position of the view angle region corresponding to the second focal length and the specified position of the view angle region corresponding to the first focal length;
determining a second deviation between the designated position of the area to be processed and the designated position of the view angle area corresponding to the first focal length;
determining a third deviation between the specified position of the region to be photographed and the specified position of the region to be processed; and
and determining a second focal length angle position deviation between the area to be photographed and the designated position of the visual angle area corresponding to the first focal length based on the first deviation, the second deviation and the third deviation.
In some possible embodiments, after determining the sequence of shooting poses based on the region to be photographed, the computer program 1821, when executed by the processor 1810, is further operable to implement:
and controlling the shooting end to shoot a second image corresponding to the area to be shot at a second focal length under each shooting pose of the shooting pose sequence information.
In some possible embodiments, the shooting end includes a shooting device, and the shooting end is controlled to shoot the second image corresponding to the region to be shot at the second focal length in each shooting pose of the shooting pose sequence information, including:
and controlling the shooting device to be sequentially positioned at each shooting pose in the shooting pose sequence information through the pose adjusting device, and controlling the shooting device to shoot second images respectively corresponding to the areas to be shot at a second focal length under each shooting pose.
In some possible embodiments, the attitude adjustment means is provided on the movable platform.
In some possible embodiments, after controlling the shooting end to shoot the second image corresponding to the region to be shot at the second focal length in each shooting pose of the shooting pose sequence information, the computer program 1821, when executed by the processor 1810, is further configured to implement:
the first image and the second image are stored in association.
In some possible embodiments, storing the first image and the second image in association comprises:
determining a first mapping relation between the second image and the area to be processed, and determining a second mapping relation between the area to be processed and the first image; and
and storing the first image, the area to be processed, the second image, the first mapping relation and the second mapping relation.
In some possible embodiments, applied to the camera side, the second image is an unprocessed camera image.
In some possible embodiments, the second image is a processed captured image, and the resolution of the second image is smaller than that of an unprocessed captured image.
In some possible embodiments, the computer program 1821, when executed by the processor 1810, is further for implementing:
if an operation instruction for viewing the second image is received, acquiring an unprocessed shot image corresponding to the second image from the shooting end;
and displaying the unprocessed shot image corresponding to the second image.
In some possible embodiments, after controlling the capturing end to capture a second image corresponding to the area to be photographed at the second focal length in each of the sequence of capturing poses, the computer program 1821, when executed by the processor 1810, is further configured to implement:
acquiring a third image, wherein the third image is synthesized based on the second image; and
the first image and the third image are stored in association.
In some possible embodiments, the first image and the second image are taken by a zoom camera at a first focal length and a second focal length, respectively.
In some possible embodiments, the first image is taken by a first camera and the second image is taken by a second camera; the focal length of the first shooting device is at least partially different from that of the second shooting device.
In some possible embodiments, a focal length of at least one of the first camera and the second camera is adjustable.
In some possible embodiments, the first camera is a wide-angle camera and the second camera is a tele camera.
In some possible embodiments, the second focal length comprises any one of: a fixed focal length, a preset focal length, a focal length determined based on a focal length algorithm, a focal length determined based on a sensor, and a user selected focal length.
In some possible embodiments, the focal distance determined based on the sensor comprises: a focal length determined based on the distance information, the distance information being determined by a laser rangefinder.
Fig. 19 is a schematic structural diagram of an image processing system according to an embodiment of the present disclosure.
As shown in fig. 19, the image processing system 1900 may include:
a photographing apparatus 1910 and a control terminal 1920 communicatively connected to the photographing apparatus 1910;
photographing means 1910 is configured to obtain a first maximum viewing angle area corresponding to the first focal length at the current attitude angle by photographing means 1910, and obtain a second maximum viewing angle area corresponding to the second focal length; comparing the first maximum viewing angle region and the second maximum viewing angle region; and determining the selectable range of the to-be-processed area in the first image according to the comparison result so as to control the photographing device 1910 to acquire at least one second image of the to-be-processed area at the second focal length, wherein the first image is acquired at the first focal length; wherein the second focal length is larger than the first focal length.
In some possible embodiments, the second maximum viewing angle area corresponding to the second focal length is determined based on a maximum rotatable angle of the posture adjustment apparatus.
In some possible embodiments, the photographing device 1910 is specifically configured to determine that the selectable range of the to-be-processed region in the first image is any region in the first image if it is determined that the second maximum viewing angle region includes the first maximum viewing angle region according to the comparison result.
In some possible embodiments, the photographing device 1910 is specifically configured to determine that the selectable range of the to-be-processed region in the first image is an overlapping region of the first maximum viewing angle region and the second maximum viewing angle region if it is determined that the second maximum viewing angle region includes a part of the first maximum viewing angle region according to the comparison result.
In some possible embodiments, when the current attitude angle is the current pitch angle, if it is determined from the comparison result that the first upper limit of the first maximum viewing angle region exceeds the second upper limit of the second maximum viewing angle region, the photographing apparatus 1910 determines the selectable upper limit of the selectable range in the first maximum viewing angle region based on the first upper limit, the second upper limit, and the vertical direction viewing angle region corresponding to the first focal length.
In some possible embodiments, when the current posture angle is the current pitch angle, if it is determined from the comparison result that the second lower limit of the second maximum viewing angle region exceeds the first lower limit of the first maximum viewing angle region, the photographing device 1910 determines the selectable lower limit of the selectable range in the first maximum viewing angle region based on the second lower limit, the first lower limit, and the vertical direction viewing angle region corresponding to the first focal length.
In some possible embodiments, when the current attitude angle is the current yaw angle, if it is determined according to the comparison result that the first left limit of the first maximum viewing angle area exceeds the second left limit of the second maximum viewing angle area, the photographing device 1910 determines the selectable left limit of the selectable range in the first maximum viewing angle area based on the first left limit, the second left limit and the horizontal direction viewing angle area corresponding to the first focal length.
In some possible embodiments, when the current attitude angle is the current yaw angle, if it is determined according to the comparison result that the second right limit of the second maximum viewing angle area exceeds the first right limit of the first maximum viewing angle area, the photographing device 1910 determines the selectable right limit of the selectable range in the first maximum viewing angle area based on the second right limit, the first right limit, and the horizontal direction viewing angle area corresponding to the first focal length.
In some possible embodiments, the control terminal 1920 includes a display interface, and the control terminal 1920 is further configured to display at least one of the plurality of boundaries of the selectable range through the display interface.
In some possible embodiments, the photographing apparatus 1910 is further configured to determine the region to be processed after determining the selectable range of the region to be processed in the first image according to the comparison result, where the second maximum viewing angle region includes the region to be processed; and determining photographing related information based on the second focal length and the to-be-processed region, so as to control the photographing apparatus 1910 to acquire at least one second image of the to-be-processed region at the second focal length based on the photographing related information.
In some possible embodiments, the photographing apparatus 1910 is further configured to, before determining the to-be-processed area, acquire a candidate to-be-processed area;
and is specifically configured to determine the candidate to-be-processed area as the to-be-processed area if it is determined that the candidate to-be-processed area does not exceed the selectable range.
In some possible embodiments, the photographing device 1910 is further configured to, after acquiring the candidate pending area, output a prompt message if it is determined that the candidate pending area is beyond the selectable range.
In some possible embodiments, the photographing means 1910 are specifically configured to determine the region to be processed based on an image recognition algorithm.
In some possible embodiments, the photographing apparatus 1910 is specifically configured to perform image recognition on the first image, and determine an object to be processed; and taking the area where the object to be processed is located as the area to be processed.
In some possible embodiments, the photographing device 1910 is specifically configured to determine the region to be processed from the first image based on a preset task and an image recognition algorithm.
In some possible embodiments, the photographing related information includes: and shooting pose sequence information.
In some possible embodiments, the shooting pose sequence information is determined based on shooting combination information, which is determined based on the second focal length and the region to be processed.
In some possible embodiments, the shooting combination information includes the number of times of shooting, and the area to be shot for each shooting;
the photographing device 1910 is specifically configured to determine, based on the second focal length and the to-be-processed region, the number of times of photographing and the to-be-photographed region for each photographing; determining shooting pose sequence information based on the region to be shot; each shooting pose in the shooting pose sequence information corresponds to one region to be shot, so that the shooting device 1910 shoots a second image corresponding to the region to be shot at a second focal length at each shooting pose of the shooting pose sequence information.
In some possible embodiments, the photographing means 1910 is specifically configured to determine, based on the image sensor size and the second focal distance, a viewing angle area corresponding to the second focal distance; determining the photographing times and the region to be photographed each time based on the region to be processed and the visual angle region corresponding to the second focal length; wherein the region formed by the plurality of regions to be photographed includes a region to be processed.
In some possible embodiments, the photographing device 1910 is specifically configured to determine the number of times of photographing and the region to be photographed for each photographing based on the image overlapping ratio between the region to be processed, the viewing angle region corresponding to the second focal length, and the region to be photographed.
In some possible embodiments, the photographing apparatus 1910 is specifically configured to determine, based on the current pose of the camera at the first focal length and the angular position deviation of the second focal length, a photographing pose corresponding to the region to be photographed for each photographing, and use a set of all the determined photographing poses as the photographing pose sequence information.
In some possible embodiments, photographing apparatus 1910 is specifically configured to determine a first deviation between the specified position of the view angle region corresponding to the second focal length and the specified position of the view angle region corresponding to the first focal length; determining a second deviation between the designated position of the to-be-processed area and the designated position of the view angle area corresponding to the first focal length; determining a third deviation between the specified position of the region to be photographed and the specified position of the region to be processed; and determining a second focal length angle position deviation between the area to be photographed and the designated position of the viewing angle area corresponding to the first focal length based on the first deviation, the second deviation and the third deviation.
In some possible embodiments, the photographing apparatus 1910 is specifically configured to photograph the second image corresponding to the region to be photographed at the second focal length in each photographing pose of the photographing pose sequence information.
In some possible embodiments, the photographing apparatus 1910 includes a pose adjusting apparatus and a photographing apparatus, and the photographing apparatus 1910 is specifically configured to control, through the pose adjusting apparatus, the photographing apparatus to be in each photographing pose in the photographing pose sequence information in sequence, and control the photographing apparatus to photograph, at each photographing pose, a second image corresponding to the area to be photographed with the second focal length.
In some possible embodiments, the attitude adjustment means is provided on the movable platform.
In some possible embodiments, the photographing apparatus 1910 is further configured to store the first image and the second image in association after photographing the second image corresponding to the region to be photographed at the second focal length in each photographing pose of the photographing pose sequence information.
In some possible embodiments, the photographing apparatus 1910 is specifically configured to determine a first mapping relationship between the second image and the region to be processed, and determine a second mapping relationship between the region to be processed and the first image; and storing the first image, the area to be processed, the second image, the first mapping relation and the second mapping relation.
In some possible embodiments, the second image is an unprocessed captured image and is stored in the photographing device.
In some possible embodiments, the second image is a processed captured image and is stored in the control terminal, and the resolution of the second image is smaller than that of an unprocessed captured image.
In some possible embodiments, the control terminal further includes a display interface, and the control terminal 1920 is further configured to, if an operation instruction for viewing the second image is received, acquire an unprocessed captured image corresponding to the second image from the photographing apparatus 1910; and displaying the unprocessed shot image corresponding to the second image through the display interface.
In some possible embodiments, the photographing apparatus 1910 is further configured to, after controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each photographing pose of the sequence of photographing poses, acquire a third image, where the third image is synthesized based on the second image; and storing the first image and the third image in association.
In some possible embodiments, the photographing means 1910 includes zoom photographing means for photographing the first image and the second image at the first focal length and the second focal length, respectively.
In some possible embodiments, the photographing apparatus 1910 includes a first photographing apparatus and a second photographing apparatus;
the first shooting device is used for shooting a first image;
the second shooting device is used for shooting a second image;
the focal length of the first shooting device is at least partially different from that of the second shooting device.
In some possible embodiments, a focal length of at least one of the first camera and the second camera is adjustable.
In some possible embodiments, the first camera is a wide-angle camera and the second camera is a tele camera.
In some possible embodiments, the second focal length comprises any one of: a fixed focal length, a preset focal length, a focal length determined based on a focal length algorithm, a focal length determined based on a sensor, and a user selected focal length.
In some possible embodiments, the photographing device 1910 includes a laser range finder for measuring distance information to determine the second focal distance based on the distance information.
It should be noted that the image processing system shown above is only exemplary and should not be construed as limiting the present disclosure. Among other things, the operations implemented by the photographing apparatus 1910 in the image processing system described above can be at least partially executed by the control terminal 1920. Further, the operations implemented by the photographing apparatus 1910 in the image processing system described above may be performed, at least in part, by an unmanned aerial vehicle.
For example, a first maximum viewing angle area corresponding to the first focal length at the current attitude angle of the photographing apparatus 1910 may be obtained by the control terminal 1920, and a second maximum viewing angle area corresponding to the second focal length may be obtained; comparing the first maximum viewing angle region and the second maximum viewing angle region; and determining the selectable range of the to-be-processed area in the first image according to the comparison result so as to control the photographing device 1910 to acquire at least one second image of the to-be-processed area at the second focal length, wherein the first image is acquired at the first focal length; wherein the second focal length is greater than the first focal length.
For another example, the control terminal 1920 may be configured to determine, if it is determined from the comparison result that the second maximum viewing angle region includes the first maximum viewing angle region, that the selectable range of the region to be processed in the first image is any region in the first image.
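To make the comparison step concrete, the sketch below models each maximum viewing angle region as an angular rectangle (yaw and pitch bounds in degrees) and derives the selectable range exactly as described above: the whole first image when the second region contains the first, and otherwise the overlap of the two regions. The rectangle representation and all names are assumptions made for illustration:

    from dataclasses import dataclass

    @dataclass
    class AngleRegion:
        # A maximum viewing angle region as an angular rectangle, in degrees.
        left: float
        right: float
        bottom: float
        top: float

        def contains(self, other):
            return (self.left <= other.left and self.right >= other.right and
                    self.bottom <= other.bottom and self.top >= other.top)

        def overlap(self, other):
            return AngleRegion(max(self.left, other.left),
                               min(self.right, other.right),
                               max(self.bottom, other.bottom),
                               min(self.top, other.top))

    def selectable_range(first_region, second_region):
        # If the second (telephoto) maximum viewing angle region contains the
        # first, any region of the first image may be selected; otherwise the
        # selectable range is the overlapping region of the two.
        if second_region.contains(first_region):
            return first_region
        return second_region.overlap(first_region)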
It should be noted that the photographing apparatus 1910 described above may be any device having a photographing function, and may refer to a camera, a pan-tilt camera, a movable platform including a camera, and the like.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Fig. 20 is a schematic structural diagram of a control terminal according to an embodiment of the present disclosure. As shown in fig. 20, the control terminal 70 may include: at least one processor 701 (e.g., a CPU), at least one network interface 704, a user interface 703, a memory 705, at least one communication bus 702, and a display 706. The communication bus 702 is used to enable connection and communication between these components. The user interface 703 may include a touch screen, a keyboard, a mouse, a joystick, and the like. The network interface 704 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface), and a communication connection may be established with the server and the photographing device via the network interface 704. The memory 705 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; in this embodiment, the memory 705 includes a flash memory. The memory 705 may optionally also be at least one storage device located remotely from the aforementioned processor 701. As shown in fig. 20, the memory 705, which is a computer storage medium, may store an operating system, a network communication module, a user interface module, and program instructions.
It should be noted that the network interface 704 may be connected to a receiver, a transmitter, or other communication modules, which may include, but are not limited to, a WiFi module, a Bluetooth module, and the like. It is understood that, in the embodiments of the present application, the control terminal 70 may also include such a receiver, transmitter, and other communication modules.
The processor 701 may be configured to call program instructions stored in the memory 705 and perform the following operations:
obtaining a first maximum viewing angle region corresponding to the first focal length at the current attitude angle, and obtaining a second maximum viewing angle region corresponding to the second focal length;
comparing the first maximum viewing angle region and the second maximum viewing angle region; and
determining the selectable range of the region to be processed in the first image according to the comparison result so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length;
wherein the second focal length is greater than the first focal length.
It is understood that the functions of the control terminal 70 of the present embodiment can be implemented according to the method of the above method embodiment, and are not described herein again.
Fig. 21 is a schematic structural diagram of a photographing device according to an embodiment of the present disclosure. As shown in fig. 21, the photographing device 80 may include: at least one processor 801 (e.g., a CPU), at least one network interface 804, a zoom lens 803, a memory 805, a wide-angle lens 806, and at least one communication bus 802. The communication bus 802 is used to enable connection and communication between these components. The network interface 804 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface), and a communication connection may be established with the control terminal 70 via the network interface 804. The memory 805 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; in this embodiment, the memory 805 includes a flash memory. The memory 805 may optionally also be at least one storage device located remotely from the aforementioned processor 801. As shown in fig. 21, the memory 805, which is a computer storage medium, may store an operating system, a network communication module, and program instructions.
It should be noted that the network interface 804 may be connected to a receiver, a transmitter, or other communication modules, which may include, but are not limited to, a WiFi module, a Bluetooth module, and the like.
The processor 801 may be configured to call program instructions stored in the memory 805 and perform the following operations:
obtaining a first maximum viewing angle region corresponding to the first focal length at the current attitude angle, and obtaining a second maximum viewing angle region corresponding to the second focal length;
comparing the first maximum viewing angle region and the second maximum viewing angle region; and
determining the selectable range of the region to be processed in the first image according to the comparison result so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length;
wherein the second focal length is greater than the first focal length.
It is understood that the functions of the photographing device 80 of the present embodiment can be specifically implemented according to the method in the above method embodiment, and are not described herein again.
Embodiments of the present application also provide a computer-readable storage medium having instructions stored therein which, when executed on a computer or processor, cause the computer or processor to perform one or more steps of any one of the methods described above. If the constituent modules of the above image processing apparatus are implemented in the form of software functional units and sold or used as independent products, they may likewise be stored in a computer-readable storage medium.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in, or transmitted via, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above.
The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present disclosure, and not for limiting the same; although the present disclosure has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the present disclosure.

Claims (112)

1. An image processing method, characterized in that the method comprises:
obtaining a first maximum viewing angle region corresponding to a first focal length at a current attitude angle, and obtaining a second maximum viewing angle region corresponding to a second focal length;
comparing the first maximum viewing angle region and the second maximum viewing angle region; and
determining a selectable range of a region to be processed in a first image according to a comparison result, so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length;
wherein the second focal length is greater than the first focal length.
2. The method of claim 1, wherein the second maximum viewing angle region corresponding to the second focal length is determined based on a maximum rotatable angle of a posture adjustment device.
3. The method according to claim 1, wherein the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that the second maximum viewing angle region includes the first maximum viewing angle region, determining that the selectable range of the region to be processed in the first image is any region in the first image.
4. The method according to claim 1, wherein the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that the second maximum viewing angle region includes part of the first maximum viewing angle region, determining that the selectable range of the region to be processed in the first image is the overlapping region of the first maximum viewing angle region and the second maximum viewing angle region.
5. The method of claim 1, wherein the current attitude angle is a current pitch angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a first upper limit of the first maximum viewing angle region exceeds a second upper limit of the second maximum viewing angle region, determining a selectable upper limit of the selectable range in the first maximum viewing angle region based on the first upper limit, the second upper limit, and a vertical viewing angle region corresponding to the first focal length.
6. The method of claim 1, wherein the current attitude angle is a current pitch angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a second lower limit of the second maximum viewing angle region exceeds a first lower limit of the first maximum viewing angle region, determining a selectable lower limit of the selectable range in the first maximum viewing angle region based on the second lower limit, the first lower limit, and the vertical viewing angle region corresponding to the first focal length.
7. The method of claim 1, wherein the current attitude angle is a current yaw angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a first left limit of the first maximum viewing angle region exceeds a second left limit of the second maximum viewing angle region, determining a selectable left limit of the selectable range in the first maximum viewing angle region based on the first left limit, the second left limit, and a horizontal viewing angle region corresponding to the first focal length.
8. The method of claim 1, wherein the current attitude angle is a current yaw angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a second right limit of the second maximum viewing angle region exceeds a first right limit of the first maximum viewing angle region, determining a selectable right limit of the selectable range in the first maximum viewing angle region based on the second right limit, the first right limit, and the horizontal viewing angle region corresponding to the first focal length.
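One plausible reading of claims 5 to 8 (an assumption of this example, not a construction of the claims) is that the angular band the telephoto lens cannot reach is converted into an image-space margin in proportion to the field of view at the first focal length. The sketch below shows the upper limit of claim 5; the lower, left, and right limits of claims 6 to 8 follow symmetrically:

    def selectable_upper_limit_px(first_upper_deg, second_upper_deg,
                                  vertical_fov_deg, image_height_px):
        # If the wide (first) region's upper limit exceeds what the tele
        # (second) region can reach, exclude the top band of the first image
        # that the tele lens cannot cover. The excluded band is assumed to be
        # proportional to the angular excess relative to the vertical field
        # of view at the first focal length.
        excess_deg = max(0.0, first_upper_deg - second_upper_deg)
        # The selectable range starts this many pixels below the top edge.
        return image_height_px * excess_deg / vertical_fov_deg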
9. The method of claim 1, wherein the method is applied to a control terminal of a photographing end, and further comprises:
displaying at least one of the plurality of boundaries of the selectable range.
10. The method according to claim 1, further comprising, after the determining the selectable range of the region to be processed in the first image according to the comparison result:
determining the region to be processed, wherein the second maximum viewing angle region includes the region to be processed; and
determining photographing-related information based on the second focal length and the region to be processed, so as to acquire at least one second image of the region to be processed at the second focal length based on the photographing-related information.
11. The method of claim 10, further comprising, before the determining the region to be processed:
acquiring a candidate region to be processed;
wherein the determining the region to be processed comprises:
if it is determined that the candidate region to be processed does not exceed the selectable range, determining the candidate region to be processed as the region to be processed.
12. The method according to claim 11, further comprising, after the acquiring the candidate region to be processed:
if it is determined that the candidate region to be processed exceeds the selectable range, outputting prompt information.
13. The method of claim 10, wherein the determining the region to be processed comprises:
determining the region to be processed based on an image recognition algorithm.
14. The method of claim 13, wherein the determining the region to be processed based on an image recognition algorithm comprises:
performing image recognition on the first image to determine an object to be processed; and
taking the region where the object to be processed is located as the region to be processed.
15. The method of claim 13, wherein the determining the region to be processed based on an image recognition algorithm comprises:
determining the region to be processed from the first image based on a preset task and an image recognition algorithm.
16. The method of claim 10, wherein the photographing-related information comprises: shooting pose sequence information.
17. The method according to claim 16, wherein the shooting pose sequence information is determined based on shooting combination information, and the shooting combination information is determined based on the second focal length and the region to be processed.
18. The method of claim 17, wherein the shooting combination information comprises: the number of times of photographing and the region to be photographed for each photographing;
the determining the photographing-related information based on the second focal length and the region to be processed comprises:
determining the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed; and
determining the shooting pose sequence information based on the regions to be photographed;
wherein each shooting pose in the shooting pose sequence information corresponds to one region to be photographed, so that a second image corresponding to the region to be photographed is photographed at the second focal length in each shooting pose of the shooting pose sequence information.
19. The method of claim 18, wherein the determining the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed comprises:
determining a viewing angle region corresponding to the second focal length based on an image sensor size and the second focal length; and
determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length;
wherein the region formed by the plurality of regions to be photographed includes the region to be processed.
20. The method of claim 19, wherein the determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length comprises:
determining the number of times of photographing and the region to be photographed for each photographing based on an image overlap ratio among the region to be processed, the viewing angle region corresponding to the second focal length, and the regions to be photographed.
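To illustrate claims 19 and 20, the sketch below derives the telephoto viewing angle from the image sensor size and the second focal length, then tiles the region to be processed with overlapping shots. The rectangular tiling scheme and the default overlap ratio are assumptions of this example:

    import math

    def fov_deg(sensor_size_mm, focal_length_mm):
        # Pinhole-model angle of view for one sensor dimension.
        return 2.0 * math.degrees(math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

    def plan_shots(region_w_deg, region_h_deg, sensor_w_mm, sensor_h_mm,
                   second_focal_mm, overlap=0.2):
        # Number of shots and per-shot angular size needed to cover the region
        # to be processed, assuming adjacent shots overlap by `overlap` of the
        # telephoto field of view; the union of the tiles then contains the
        # region to be processed, as claim 19 requires.
        tile_w = fov_deg(sensor_w_mm, second_focal_mm)
        tile_h = fov_deg(sensor_h_mm, second_focal_mm)
        cols = max(1, math.ceil((region_w_deg - tile_w) / (tile_w * (1 - overlap))) + 1)
        rows = max(1, math.ceil((region_h_deg - tile_h) / (tile_h * (1 - overlap))) + 1)
        return rows * cols, (tile_w, tile_h)

For instance, a full-frame sensor (36 mm wide) at a 200 mm second focal length yields a tile about 10.3 degrees wide, so covering a 30-degree-wide region at 20% overlap takes four columns of shots.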
21. The method according to claim 18, wherein the determining the shooting pose sequence information based on the regions to be photographed comprises:
determining, for each photographing, a shooting pose corresponding to the region to be photographed based on the current pose of the camera at the first focal length and a second focal length angular position deviation, and taking the set of all the determined shooting poses as the shooting pose sequence information.
22. The method of claim 21, wherein the second focal length angular position deviation is determined by:
determining a first deviation between a designated position of the viewing angle region corresponding to the second focal length and a designated position of the viewing angle region corresponding to the first focal length;
determining a second deviation between a designated position of the region to be processed and the designated position of the viewing angle region corresponding to the first focal length;
determining a third deviation between a designated position of the region to be photographed and the designated position of the region to be processed; and
determining the second focal length angular position deviation between the region to be photographed and the designated position of the viewing angle region corresponding to the first focal length based on the first deviation, the second deviation, and the third deviation.
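Purely as an illustration of how the three deviations of claim 22 might compose (taking a region's "designated position" to be its angular center, which is an assumption of this sketch), the second focal length angular position deviation can be pictured as vector arithmetic on (yaw, pitch) offsets:

    def second_focal_length_angle_deviation(tele_fov_center, wide_fov_center,
                                            region_center, shot_center):
        # Each argument is an angular "designated position" (yaw_deg, pitch_deg);
        # the center of a region is used as its designated position here.
        d1 = (tele_fov_center[0] - wide_fov_center[0],    # tele FOV vs. wide FOV
              tele_fov_center[1] - wide_fov_center[1])
        d2 = (region_center[0] - wide_fov_center[0],      # region vs. wide FOV
              region_center[1] - wide_fov_center[1])
        d3 = (shot_center[0] - region_center[0],          # shot vs. region
              shot_center[1] - region_center[1])
        # One plausible composition: aim the tele view at the shot center by
        # combining the three deviations and compensating the tele/wide offset.
        return (d2[0] + d3[0] - d1[0], d2[1] + d3[1] - d1[1])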
23. The method according to claim 18, further comprising, after the determining the shooting pose sequence information based on the regions to be photographed:
controlling a photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information.
24. The method according to claim 23, wherein the photographing end includes a photographing device, and the controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information comprises:
controlling, through a pose adjustment device, the photographing device to be sequentially positioned at each shooting pose in the shooting pose sequence information, and controlling the photographing device to photograph, in each shooting pose, a second image corresponding to the respective region to be photographed at the second focal length.
25. The method of claim 24, wherein the pose adjustment device is disposed on a movable platform.
26. The method according to claim 23, further comprising, after the controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information:
storing the first image and the second image in association.
27. The method of claim 26, wherein the associatively storing the first image and the second image comprises:
determining a first mapping relation between the second image and the region to be processed, and determining a second mapping relation between the region to be processed and the first image; and
storing the first image, the region to be processed, the second image, the first mapping relation, and the second mapping relation.
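A minimal sketch of the associative storage of claim 27, assuming a simple dictionary-like record and using identifiers in place of actual image data (the disclosure does not prescribe a storage format):

    def store_associated(store, first_image_id, region, second_image_ids):
        # `region` is the region to be processed, e.g. a hashable tuple of bounds.
        # First mapping: each second image -> the region to be processed.
        # Second mapping: the region to be processed -> the first image.
        store["first_image"] = first_image_id
        store["region_to_be_processed"] = region
        store["second_images"] = list(second_image_ids)
        store["first_mapping"] = {sid: region for sid in second_image_ids}
        store["second_mapping"] = {region: first_image_id}
        return store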
28. The method of claim 26, wherein the method is applied to a photographing end, and the second image is an unprocessed captured image.
29. The method according to claim 26, wherein the method is applied to a control terminal of the photographing end, the second image is a processed captured image, and the resolution of the second image is lower than that of an unprocessed captured image.
30. The method of claim 29, further comprising:
if an operation instruction for viewing the second image is received, acquiring an unprocessed captured image corresponding to the second image from the photographing end; and
displaying the unprocessed captured image corresponding to the second image.
31. The method according to claim 23, further comprising, after the controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information:
acquiring a third image, wherein the third image is synthesized from the second images; and
storing the first image and the third image in association.
32. The method of claim 1, wherein the first image and the second image are taken by a zoom camera at the first focal length and the second focal length, respectively.
33. The method of claim 1, wherein the first image is captured by a first camera and the second image is captured by a second camera; wherein the focal length of the first camera is at least partially different from the focal length of the second camera.
34. The method of claim 33, wherein at least one of the first camera and the second camera is adjustable in focal length.
35. The method of claim 33, wherein the first camera is a wide angle camera and the second camera is a tele camera.
36. The method of claim 1, wherein the second focal length comprises any one of: a fixed focal length, a preset focal length, a focal length determined based on a focal length algorithm, a focal length determined based on a sensor, and a user selected focal length.
37. The method of claim 36, wherein the focal length determined based on a sensor comprises: a focal length determined based on distance information measured by a laser range finder.
38. An image processing apparatus, characterized in that the apparatus comprises:
one or more processors; and
a computer readable storage medium storing one or more computer programs which, when executed by the processor, implement:
obtaining a first maximum viewing angle region corresponding to a first focal length at a current attitude angle, and obtaining a second maximum viewing angle region corresponding to a second focal length;
comparing the first maximum viewing angle region and the second maximum viewing angle region; and
determining a selectable range of a region to be processed in a first image according to a comparison result, so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length;
wherein the second focal length is greater than the first focal length.
39. The apparatus of claim 38, wherein the second maximum viewing angle region corresponding to the second focal length is determined based on a maximum rotatable angle of a posture adjustment device.
40. The apparatus of claim 38, wherein the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that the second maximum viewing angle region includes the first maximum viewing angle region, determining that the selectable range of the region to be processed in the first image is any region in the first image.
41. The apparatus of claim 38, wherein the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that the second maximum viewing angle region includes part of the first maximum viewing angle region, determining that the selectable range of the region to be processed in the first image is the overlapping region of the first maximum viewing angle region and the second maximum viewing angle region.
42. The apparatus of claim 38, wherein the current attitude angle is a current pitch angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a first upper limit of the first maximum viewing angle region exceeds a second upper limit of the second maximum viewing angle region, determining a selectable upper limit of the selectable range in the first maximum viewing angle region based on the first upper limit, the second upper limit, and a vertical viewing angle region corresponding to the first focal length.
43. The apparatus of claim 38, wherein the current attitude angle is a current pitch angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a second lower limit of the second maximum viewing angle region exceeds a first lower limit of the first maximum viewing angle region, determining a selectable lower limit of the selectable range in the first maximum viewing angle region based on the second lower limit, the first lower limit, and the vertical viewing angle region corresponding to the first focal length.
44. The apparatus of claim 38, wherein the current attitude angle is a current yaw angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a first left limit of the first maximum viewing angle region exceeds a second left limit of the second maximum viewing angle region, determining a selectable left limit of the selectable range in the first maximum viewing angle region based on the first left limit, the second left limit, and a horizontal viewing angle region corresponding to the first focal length.
45. The apparatus of claim 38, wherein the current attitude angle is a current yaw angle;
the determining the selectable range of the region to be processed in the first image according to the comparison result comprises:
if it is determined according to the comparison result that a second right limit of the second maximum viewing angle region exceeds a first right limit of the first maximum viewing angle region, determining a selectable right limit of the selectable range in the first maximum viewing angle region based on the second right limit, the first right limit, and the horizontal viewing angle region corresponding to the first focal length.
46. The apparatus of claim 38, wherein the apparatus is applied to a control terminal of a photographing end, and the computer program, when executed by the processor, is further configured to implement:
displaying at least one of the plurality of boundaries of the selectable range.
47. The apparatus according to claim 38, wherein, after the determining the selectable range of the region to be processed in the first image according to the comparison result, the computer program, when executed by the processor, is further configured to implement:
determining the region to be processed, wherein the second maximum viewing angle region includes the region to be processed; and
determining photographing-related information based on the second focal length and the region to be processed, so as to acquire at least one second image of the region to be processed at the second focal length based on the photographing-related information.
48. The apparatus as claimed in claim 47, wherein, before the determining the region to be processed, the computer program, when executed by the processor, is further configured to implement:
acquiring a candidate region to be processed;
wherein the determining the region to be processed comprises:
if it is determined that the candidate region to be processed does not exceed the selectable range, determining the candidate region to be processed as the region to be processed.
49. The apparatus as claimed in claim 48, wherein, after the acquiring the candidate region to be processed, the computer program, when executed by the processor, is further configured to implement:
if it is determined that the candidate region to be processed exceeds the selectable range, outputting prompt information.
50. The apparatus of claim 47, wherein the determining the region to be processed comprises:
determining the region to be processed based on an image recognition algorithm.
51. The apparatus of claim 50, wherein the determining the region to be processed based on an image recognition algorithm comprises:
performing image recognition on the first image to determine an object to be processed; and
taking the region where the object to be processed is located as the region to be processed.
52. The apparatus of claim 50, wherein the determining the region to be processed based on an image recognition algorithm comprises:
determining the region to be processed from the first image based on a preset task and an image recognition algorithm.
53. The apparatus of claim 47, wherein the photographing-related information comprises: shooting pose sequence information.
54. The apparatus according to claim 53, wherein the shooting pose sequence information is determined based on shooting combination information, and the shooting combination information is determined based on the second focal length and the region to be processed.
55. The apparatus of claim 54, wherein the shooting combination information comprises: the number of times of photographing and the region to be photographed for each photographing;
the determining the photographing-related information based on the second focal length and the region to be processed comprises:
determining the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed; and
determining the shooting pose sequence information based on the regions to be photographed;
wherein each shooting pose in the shooting pose sequence information corresponds to one region to be photographed, so that a second image corresponding to the region to be photographed is photographed at the second focal length in each shooting pose of the shooting pose sequence information.
56. The apparatus of claim 55, wherein the determining the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed comprises:
determining a viewing angle region corresponding to the second focal length based on an image sensor size and the second focal length; and
determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length;
wherein the region formed by the plurality of regions to be photographed includes the region to be processed.
57. The apparatus of claim 56, wherein the determining the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length comprises:
determining the number of times of photographing and the region to be photographed for each photographing based on an image overlap ratio among the region to be processed, the viewing angle region corresponding to the second focal length, and the regions to be photographed.
58. The apparatus according to claim 55, wherein the determining the shooting pose sequence information based on the regions to be photographed comprises:
determining, for each photographing, a shooting pose corresponding to the region to be photographed based on the current pose of the camera at the first focal length and a second focal length angular position deviation, and taking the set of all the determined shooting poses as the shooting pose sequence information.
59. The apparatus of claim 58, wherein the second focal length angular position deviation is determined by:
determining a first deviation between a designated position of the viewing angle region corresponding to the second focal length and a designated position of the viewing angle region corresponding to the first focal length;
determining a second deviation between a designated position of the region to be processed and the designated position of the viewing angle region corresponding to the first focal length;
determining a third deviation between a designated position of the region to be photographed and the designated position of the region to be processed; and
determining the second focal length angular position deviation between the region to be photographed and the designated position of the viewing angle region corresponding to the first focal length based on the first deviation, the second deviation, and the third deviation.
60. The apparatus as claimed in claim 55, wherein, after the determining the shooting pose sequence information based on the regions to be photographed, the computer program, when executed by the processor, is further configured to implement:
controlling a photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information.
61. The apparatus according to claim 60, wherein the photographing end includes a photographing device, and the controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information comprises:
controlling, through a pose adjustment device, the photographing device to be sequentially positioned at each shooting pose in the shooting pose sequence information, and controlling the photographing device to photograph, in each shooting pose, a second image corresponding to the respective region to be photographed at the second focal length.
62. The apparatus of claim 61, wherein the pose adjustment device is disposed on a movable platform.
63. The apparatus according to claim 60, wherein, after the controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information, the computer program, when executed by the processor, is further configured to implement:
storing the first image and the second image in association.
64. The apparatus of claim 63, wherein the associatively storing the first image and the second image comprises:
determining a first mapping relation between the second image and the region to be processed, and determining a second mapping relation between the region to be processed and the first image; and
storing the first image, the region to be processed, the second image, the first mapping relation, and the second mapping relation.
65. The apparatus of claim 63, wherein, when the apparatus is applied to a photographing end, the second image is an unprocessed captured image.
66. The apparatus according to claim 63, wherein, when the apparatus is applied to a control terminal of the photographing end, the second image is a processed captured image, and the resolution of the second image is lower than that of an unprocessed captured image.
67. The apparatus as claimed in claim 66 wherein the computer program, when executed by the processor, is further configured to implement:
if an operation instruction for viewing the second image is received, acquiring an unprocessed captured image corresponding to the second image from the photographing end; and
displaying the unprocessed captured image corresponding to the second image.
68. The apparatus of claim 60, wherein, after the controlling the photographing end to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information, the computer program, when executed by the processor, is further configured to implement:
acquiring a third image, wherein the third image is synthesized from the second images; and
storing the first image and the third image in association.
69. The apparatus of claim 38, wherein the first and second images are captured by a zoom camera at the first and second focal lengths, respectively.
70. The apparatus of claim 38, wherein the first image is captured by a first camera and the second image is captured by a second camera; wherein the focal length of the first camera is at least partially different from the focal length of the second camera.
71. The device of claim 70, wherein at least one of the first camera and the second camera is adjustable in focal length.
72. The device of claim 70, wherein the first camera is a wide angle camera and the second camera is a tele camera.
73. The apparatus of claim 38, wherein the second focal length comprises any one of: a fixed focal length, a preset focal length, a focal length determined based on a focal length algorithm, a focal length determined based on a sensor, and a user selected focal length.
74. The apparatus of claim 73, wherein the focal length determined based on a sensor comprises: a focal length determined based on distance information measured by a laser range finder.
75. An image processing system, characterized in that the image processing system comprises:
the device comprises a photographing device and a control terminal in communication connection with the photographing device;
the control terminal is configured to acquire a region to be processed;
the photographing device is configured to obtain a first maximum viewing angle region corresponding to a first focal length of the photographing device at a current attitude angle, and obtain a second maximum viewing angle region corresponding to a second focal length; compare the first maximum viewing angle region and the second maximum viewing angle region; and determine a selectable range of the region to be processed in a first image according to a comparison result, so as to acquire at least one second image of the region to be processed at the second focal length, wherein the first image is acquired at the first focal length, and the second focal length is greater than the first focal length.
76. The system of claim 75, wherein the second maximum viewing angle region corresponding to the second focal length is determined based on a maximum rotatable angle of a pose adjustment device.
77. The system according to claim 75, wherein the photographing device is specifically configured to determine that the selectable range of the region to be processed in the first image is any region in the first image if it is determined from the comparison result that the second maximum viewing angle region includes the first maximum viewing angle region.
78. The system according to claim 75, wherein the photographing device is specifically configured to determine that the selectable range of the region to be processed in the first image is the overlapping region of the first maximum viewing angle region and the second maximum viewing angle region if it is determined from the comparison result that the second maximum viewing angle region includes part of the first maximum viewing angle region.
79. The system of claim 75, wherein the photographing device is specifically configured to, when the current attitude angle is a current pitch angle, determine a selectable upper limit of the selectable range in the first maximum viewing angle region based on a first upper limit of the first maximum viewing angle region, a second upper limit of the second maximum viewing angle region, and a vertical viewing angle region corresponding to the first focal length, if it is determined from the comparison result that the first upper limit exceeds the second upper limit.
80. The system of claim 75, wherein the photographing device is specifically configured to, when the current attitude angle is a current pitch angle, determine a selectable lower limit of the selectable range in the first maximum viewing angle region based on a second lower limit of the second maximum viewing angle region, a first lower limit of the first maximum viewing angle region, and the vertical viewing angle region corresponding to the first focal length, if it is determined from the comparison result that the second lower limit exceeds the first lower limit.
81. The system of claim 75, wherein the photographing device is specifically configured to, when the current attitude angle is a current yaw angle, determine a selectable left limit of the selectable range in the first maximum viewing angle region based on a first left limit of the first maximum viewing angle region, a second left limit of the second maximum viewing angle region, and a horizontal viewing angle region corresponding to the first focal length, if it is determined from the comparison result that the first left limit exceeds the second left limit.
82. The system of claim 75, wherein the photographing device is specifically configured to, when the current attitude angle is a current yaw angle, determine a selectable right limit of the selectable range in the first maximum viewing angle region based on a second right limit of the second maximum viewing angle region, a first right limit of the first maximum viewing angle region, and the horizontal viewing angle region corresponding to the first focal length, if it is determined from the comparison result that the second right limit exceeds the first right limit.
83. The system of claim 75, wherein the control terminal comprises a display interface, and the control terminal is further configured to display at least one of the plurality of boundaries of the selectable range via the display interface.
84. The system of claim 75, wherein the photographing device is further configured to determine the region to be processed after the selectable range of the region to be processed in the first image is determined according to the comparison result, wherein the second maximum viewing angle region includes the region to be processed; and determine photographing-related information based on the second focal length and the region to be processed, so as to acquire at least one second image of the region to be processed at the second focal length based on the photographing-related information.
85. The system according to claim 84, wherein the photographing device is further configured to acquire a candidate region to be processed before the determining the region to be processed; and is specifically configured to determine the candidate region to be processed as the region to be processed if it is determined that the candidate region to be processed does not exceed the selectable range.
86. The system according to claim 85, wherein the photographing device is further configured to output prompt information if, after the candidate region to be processed is acquired, it is determined that the candidate region to be processed exceeds the selectable range.
87. The system according to claim 84, wherein the photographing device is specifically configured to determine the region to be processed based on an image recognition algorithm.
88. The system according to claim 87, wherein the photographing device is specifically configured to perform image recognition on the first image to determine an object to be processed, and take the region where the object to be processed is located as the region to be processed.
89. The system of claim 87, wherein the photographing device is specifically configured to determine the region to be processed from the first image based on a preset task and an image recognition algorithm.
90. The system of claim 84, wherein the photographing-related information comprises: shooting pose sequence information.
91. The system according to claim 90, wherein the shooting pose sequence information is determined based on shooting combination information, and the shooting combination information is determined based on the second focal length and the region to be processed.
92. The system according to claim 91, wherein the shooting combination information includes the number of times of photographing and the region to be photographed for each photographing;
the photographing device is specifically configured to determine the number of times of photographing and the region to be photographed for each photographing based on the second focal length and the region to be processed; and determine the shooting pose sequence information based on the regions to be photographed; wherein each shooting pose in the shooting pose sequence information corresponds to one region to be photographed, so that the photographing device photographs a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information.
93. The system of claim 92, wherein the photographing device is specifically configured to determine a viewing angle region corresponding to the second focal length based on an image sensor size and the second focal length; and determine the number of times of photographing and the region to be photographed for each photographing based on the region to be processed and the viewing angle region corresponding to the second focal length; wherein the region formed by the plurality of regions to be photographed includes the region to be processed.
94. The system according to claim 93, wherein the photographing device is specifically configured to determine the number of times of photographing and the region to be photographed for each photographing based on an image overlap ratio among the region to be processed, the viewing angle region corresponding to the second focal length, and the regions to be photographed.
95. The system according to claim 92, wherein the photographing device is specifically configured to determine, for each photographing, a shooting pose corresponding to the region to be photographed based on the current pose of the camera at the first focal length and a second focal length angular position deviation, and to use the set of all the determined shooting poses as the shooting pose sequence information.
96. The system of claim 95, wherein the photographing device is specifically configured to determine a first deviation between a designated position of the viewing angle region corresponding to the second focal length and a designated position of the viewing angle region corresponding to the first focal length; determine a second deviation between a designated position of the region to be processed and the designated position of the viewing angle region corresponding to the first focal length; determine a third deviation between a designated position of the region to be photographed and the designated position of the region to be processed; and determine a second focal length angular position deviation between the region to be photographed and the designated position of the viewing angle region corresponding to the first focal length based on the first deviation, the second deviation, and the third deviation.
97. The system according to claim 92, wherein the photographing device is specifically configured to photograph a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information.
98. The system according to claim 97, wherein the photographing device comprises a pose adjusting device and a photographing device, and the photographing device is specifically configured to control the photographing device to sequentially stay at each photographing pose in the photographing pose sequence information through the pose adjusting device, and control the photographing device to photograph second images respectively corresponding to the regions to be photographed at the second focal length at each photographing pose.
99. The system of claim 98, wherein the attitude adjustment device is disposed on a movable platform.
100. The system according to claim 97, wherein the photographing device is further configured to store the first image and the second image in association after photographing a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information.
101. The system of claim 100, wherein the photographing device is specifically configured to determine a first mapping relationship between the second image and the region to be processed, and determine a second mapping relationship between the region to be processed and the first image; and storing the first image, the area to be processed, the second image, the first mapping relation and the second mapping relation.
102. The system of claim 100, wherein the second image is an unprocessed captured image and is stored in the photographing device.
103. The system of claim 100, wherein the second image is a processed captured image and stored at the control terminal, and wherein the resolution of the second image is less than the resolution of an unprocessed captured image.
104. The system of claim 103, wherein the control terminal further comprises a display interface, and the control terminal is further configured to, if an operation instruction for viewing the second image is received, obtain an unprocessed captured image corresponding to the second image from the photographing device; and displaying the unprocessed shot image corresponding to the second image through the display interface.
105. The system of claim 97, wherein the photographing device is further configured to, after photographing a second image corresponding to the region to be photographed at the second focal length in each shooting pose of the shooting pose sequence information, acquire a third image, the third image being synthesized from the second images; and store the first image and the third image in association.
106. The system of claim 75, wherein the photographing device comprises a zoom photographing device configured to photograph the first image and the second image at the first focal length and the second focal length, respectively.
107. The system of claim 75, wherein the photographing device comprises a first photographing device and a second photographing device;
the first photographing device is configured to photograph the first image;
the second photographing device is configured to photograph the second image;
wherein the focal length of the first photographing device is at least partially different from the focal length of the second photographing device.
108. The system of claim 107, wherein a focal length of at least one of the first photographing device and the second photographing device is adjustable.
109. The system of claim 107, wherein the first photographing device is a wide-angle photographing device and the second photographing device is a telephoto photographing device.
110. The system according to claim 75, wherein the second focal length comprises any one of: a fixed focal length, a preset focal length, a focal length determined based on a focal length algorithm, a focal length determined based on a sensor, and a user selected focal length.
111. The system of claim 110, wherein the photographing device comprises a laser range finder configured to measure distance information, so that the second focal length is determined based on the distance information.
112. A computer-readable storage medium having stored thereon executable instructions that, when executed by one or more processors, may cause the one or more processors to perform the method of any one of claims 1 to 37.
CN202080004280.7A 2020-02-28 2020-02-28 Image processing method, image processing apparatus, and image processing system Pending CN112514366A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/077217 WO2021168804A1 (en) 2020-02-28 2020-02-28 Image processing method, image processing apparatus and image processing system

Publications (1)

Publication Number Publication Date
CN112514366A true CN112514366A (en) 2021-03-16

Family

ID=74953160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080004280.7A Pending CN112514366A (en) 2020-02-28 2020-02-28 Image processing method, image processing apparatus, and image processing system

Country Status (2)

Country Link
CN (1) CN112514366A (en)
WO (1) WO2021168804A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688824A (en) * 2021-09-10 2021-11-23 福建汇川物联网技术科技股份有限公司 Information acquisition method and device for construction node and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115499565B (en) * 2022-08-23 2024-02-20 盯盯拍(深圳)技术股份有限公司 Image acquisition method and device based on double lenses, medium and automobile data recorder

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006074259A (en) * 2004-08-31 2006-03-16 Pentax Corp Trimming imaging apparatus
CN105245771A (en) * 2014-07-01 2016-01-13 苹果公司 Mobile camera system
CN106254780A (en) * 2016-08-31 2016-12-21 宇龙计算机通信科技(深圳)有限公司 A kind of dual camera camera control method, photographing control device and terminal
CN107277371A (en) * 2017-07-27 2017-10-20 青岛海信移动通信技术股份有限公司 A kind of method and device in mobile terminal amplification picture region
US20170352175A1 (en) * 2014-12-31 2017-12-07 Huawei Technologies Co., Ltd. Picture processing method and apparatus
CN110099213A (en) * 2019-04-26 2019-08-06 维沃移动通信(杭州)有限公司 A kind of image display control method and terminal
CN110602401A (en) * 2019-09-17 2019-12-20 维沃移动通信有限公司 Photographing method and terminal
CN110781879A (en) * 2019-10-31 2020-02-11 广东小天才科技有限公司 Point reading target identification method and system, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791376B (en) * 2016-11-29 2019-09-13 Oppo广东移动通信有限公司 Imaging device, control method, control device and electronic device
KR102013781B1 (en) * 2018-01-23 2019-08-23 광주과학기술원 a Method for object detection using two cameras with different focal lengths and apparatus thereof
CN112333380B (en) * 2019-06-24 2021-10-15 华为技术有限公司 Shooting method and equipment
CN110493526B (en) * 2019-09-10 2020-11-20 北京小米移动软件有限公司 Image processing method, device, equipment and medium based on multiple camera modules

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006074259A (en) * 2004-08-31 2006-03-16 Pentax Corp Trimming imaging apparatus
CN105245771A * 2014-07-01 2016-01-13 Apple Inc. Mobile camera system
US20170352175A1 * 2014-12-31 2017-12-07 Huawei Technologies Co., Ltd. Picture processing method and apparatus
CN106254780A * 2016-08-31 2016-12-21 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Dual-camera shooting control method, shooting control device and terminal
CN107277371A * 2017-07-27 2017-10-20 Hisense Mobile Communications Technology Co., Ltd. Method and device for enlarging a picture region on a mobile terminal
CN110099213A * 2019-04-26 2019-08-06 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image display control method and terminal
CN110602401A * 2019-09-17 2019-12-20 Vivo Mobile Communication Co., Ltd. Photographing method and terminal
CN110781879A * 2019-10-31 2020-02-11 Guangdong Genius Technology Co., Ltd. Point-reading target recognition method and system, storage medium and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688824A * 2021-09-10 2021-11-23 Fujian Huichuan Internet of Things Technology Co., Ltd. Information acquisition method and device for construction node and storage medium
CN113688824B * 2021-09-10 2024-02-27 Fujian Huichuan Internet of Things Technology Co., Ltd. Information acquisition method, device and storage medium for construction node

Also Published As

Publication number Publication date
WO2021168804A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
US10970915B2 (en) Virtual viewpoint setting apparatus that sets a virtual viewpoint according to a determined common image capturing area of a plurality of image capturing apparatuses, and related setting method and storage medium
US9313400B2 (en) Linking-up photographing system and control method for linked-up cameras thereof
US9055216B1 (en) Using sensor data to enhance image data
CN108574825B (en) Method and device for adjusting pan-tilt camera
WO2017020150A1 (en) Image processing method, device and camera
US20200084424A1 (en) Unmanned aerial vehicle imaging control method, unmanned aerial vehicle imaging method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle
KR102155895B1 (en) Device and method to receive image by tracking object
US20180041733A1 (en) Video surveillance system with aerial camera device
JP6716015B2 (en) Imaging control device, imaging system, and imaging control method
CN112514366A (en) Image processing method, image processing apparatus, and image processing system
CN111194433A (en) Method and system for composition and image capture
JP6240328B2 (en) How to build an optical flow field
US10089726B2 (en) Image processing apparatus, image processing method, and storage medium, relating to generating an image corresponding to a predetermined three-dimensional shape by transforming a captured image
CN110036411B (en) Apparatus and method for generating electronic three-dimensional roaming environment
CN110930303A (en) Panorama forming method and system
KR100736565B1 (en) Method of taking a panorama image and mobile communication terminal thereof
WO2021134715A1 (en) Control method and device, unmanned aerial vehicle and storage medium
JP2018092507A (en) Image processing apparatus, image processing method, and program
KR20210112390A (en) Filming method, apparatus, electronic device and storage medium
JP2021197572A (en) Camera control apparatus and program
KR102637344B1 (en) Device and method to perform object recognition
JP2020005111A (en) Information processing apparatus, control method, and program
JP7407963B2 (en) Distance information generation device and distance information generation method
WO2024018973A1 (en) Information processing method, information processing device, and information processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned Effective date of abandoning: 20230825