CN210007799U - Image processing system and unmanned aerial vehicle - Google Patents


Info

Publication number: CN210007799U
Application number: CN201821757917.6U
Authority: CN (China)
Prior art keywords: picture, image data, image, module, processing system
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status, assignees, or dates listed)
Other languages: Chinese (zh)
Inventor: 李昭早
Current assignee: Shenzhen Autel Intelligent Aviation Technology Co Ltd
Original assignee: Shenzhen Autel Intelligent Aviation Technology Co Ltd
Events: application filed by Shenzhen Autel Intelligent Aviation Technology Co Ltd; priority to CN201821757917.6U; application granted; publication of CN210007799U; currently active


Abstract

An embodiment of the utility model relates to an image processing system and an unmanned aerial vehicle. The image processing system includes a cropping module and a picture-in-picture synthesis module. The cropping module is used to crop first image data to obtain a target area, the first image data being image data from a high-resolution visible light lens. The picture-in-picture synthesis module is used to superimpose a first picture and a second picture to generate a picture-in-picture image; the target area is shown in the first picture, second image data is shown in the second picture, and the second image data is infrared data from an infrared thermal imaging lens.

Description

Image processing system and unmanned aerial vehicle
[ technical field ]
The utility model relates to the technical field of aerial photography, and in particular to an image processing system and an unmanned aerial vehicle.
[ background of the invention ]
As a type of hovering aerial vehicle with strong adaptability, low cost of use, and quick, convenient deployment, the unmanned aerial vehicle is applied in many different occasions and can play an important role by carrying different types of functional components.
The conventional unmanned aerial vehicle is generally provided with an aerial camera formed by combining a plurality of cameras, which is used for shooting visible light image data and corresponding infrared thermal imaging image data.
With such an aerial camera, a user on the ground can obtain an infrared imaging image and a visible light image at the same time, conveniently complete aerial reconnaissance tasks, and obtain more detailed and accurate information (such as terrain information, or building and traffic road information, during shooting).
However, the resolution of the lenses generally adopted by existing aerial cameras is limited, so accurate image details are difficult to acquire, and the user's need for detailed capture of local areas during aerial reconnaissance cannot be supported. How to adjust the image acquisition and processing of an aerial camera to meet users' needs during aerial reconnaissance is a problem that urgently needs to be solved.
[ summary of the invention ]
In order to solve the above technical problem, embodiments of the utility model provide an image processing system and an unmanned aerial vehicle that take into account both detail capture of a local area and observation of the global field of view.
In order to solve the above technical problem, an embodiment of the present invention provides an image processing system, including:
a cropping module for cropping first image data to obtain a target area, the first image data being image data from a high-resolution visible light lens;
a picture-in-picture synthesis module for superimposing a first picture and a second picture to generate a picture-in-picture image, where the first picture shows the target area, the second picture shows second image data, and the second image data is infrared data from an infrared thermal imaging lens.
Optionally, the image processing system further comprises an encoding module for encoding and outputting the picture-in-picture image in a preset encoding format.
Optionally, the picture-in-picture synthesis module is further configured to superimpose a third picture on the first picture to generate a picture-in-picture image including the first picture, the second picture, and the third picture, where the third picture shows third image data, and the third image data is image data from a large-field-of-view lens.
Optionally, the cropping module is further configured to select a center coordinate and a magnification for the target area, enlarge the first image data according to the magnification, and determine the position of the target area in the enlarged first image data according to the center coordinate, the size of the target area being equal to the size of the first picture.
Optionally, when determining the position of the target area in the enlarged first image data according to the center coordinate, the cropping module is specifically configured to:
judge whether the distance between the center coordinate and each edge of the enlarged first image data is larger than an allowable size;
if so, crop the target area from the enlarged first image data with the center coordinate as the center of the target area; if not, crop the target area against the corresponding edge of the enlarged first image data.
Optionally, when cropping the target area against an edge of the enlarged first image data, the cropping module is specifically configured to: crop the target area against the left edge of the enlarged first image data when the first condition is not satisfied; crop the target area against the right edge of the enlarged first image data when the second condition is not satisfied; crop the target area against the top edge of the enlarged first image data when the third condition is not satisfied; and crop the target area against the bottom edge of the enlarged first image data when the fourth condition is not satisfied.
In order to solve the above technical problem, an embodiment of the utility model also provides the following technical solution: an unmanned aerial vehicle comprising a body, an aerial camera mounted on the body, and the above image processing system;
the aerial camera comprises a high-resolution visible light lens, a large-view lens and an infrared thermal imaging lens; the image processing system is installed in the machine body main body, is connected with the aerial camera and is used for outputting corresponding picture-in-picture images according to user instructions.
Optionally, the drone further comprises a mass storage device; the mass storage device is used for independently storing image data obtained by shooting through different lenses of the aerial camera.
Optionally, the unmanned aerial vehicle further comprises an image coding chip; the image coding chip is used for coding and outputting the picture-in-picture image output by the image processing system according to a preset coding format.
Compared with the prior art, the embodiments of the utility model provide finer and richer image data through a high-resolution zoomable lens, realizing the function of enlarging a local target area, effectively improving the user experience, and satisfying the user's need to view a specific target in detail.
Furthermore, the global image is integrated into the picture-in-picture image, so that the user can monitor and understand the global picture while paying attention to the details of the local target area, thereby better completing the aerial photography task.
[ description of the drawings ]
The various embodiments are illustrated by way of example in the accompanying drawings and not by way of limitation; elements having the same reference numeral designation denote similar elements, and, unless otherwise indicated, the drawings are not to scale.
Fig. 1 is a schematic diagram of an application environment of an embodiment of the present invention;
fig. 2 is a functional block diagram of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 3 is a block diagram of an image processing system according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a picture-in-picture image according to an embodiment of the present invention;
fig. 5 is a block diagram of an electronic computing device according to an embodiment of the present invention;
fig. 6 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an image processing method according to another embodiment of the present invention;
fig. 8 is a schematic diagram of first image data being cropped according to an embodiment of the present invention;
FIG. 9 is a flowchart of a method for cropping a target area from image data according to an embodiment of the present invention;
fig. 10 is a schematic diagram of cropped first image data according to another embodiment of the present invention.
[ detailed description ]
For purposes of explanation and not limitation, the terms "first," "second," "third," etc. are used merely for description and are not intended to indicate or imply relative importance. When an element is said to be "fixed to" another element, it may be directly on the other element, or there may be one or more intervening elements between them. When an element is said to be "connected to" another element, it may be directly connected to the other element, or there may be one or more intervening elements between them.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Furthermore, the technical features mentioned in the different embodiments of the invention described below can be combined with each other as long as they do not conflict with each other.
Aerial reconnaissance is a mode in which a flying carrier deployed in the air moves along a specific route and at a specific height to shoot images of an area, realizing functions such as monitoring and reconnaissance.
Fig. 1 is an application environment provided by an embodiment of the present invention. As shown in fig. 1, the application environment includes a drone 10, a smart terminal 20, and a wireless network 30.
The drone 10 may be any type of powered (e.g., electric) unmanned aerial vehicle, including, but not limited to, quadcopter drones, fixed-wing aircraft, helicopter models, and the like. In this embodiment, a quadcopter drone is taken as an example.
The unmanned aerial vehicle 10 has a volume and power corresponding to actual needs, providing load capacity, flight speed, flight endurance, and the like that meet use requirements. One or more functional modules can be mounted on the unmanned aerial vehicle 10 to execute corresponding tasks (such as aerial reconnaissance).
Fig. 2 is a functional block diagram of the unmanned aerial vehicle 10 provided in the embodiment of the present invention. As shown in fig. 2, in the present embodiment, the unmanned aerial vehicle 10 may be mounted with an aerial camera 11, an image processing system 12, an image transmission device 13, and a mass storage device 14.
The aerial camera 11 can be mounted on the unmanned aerial vehicle 10 through mounting and fixing components such as a gimbal, and is used for collecting image information of a target area during flight.
The aerial camera may have two or more lenses for respectively acquiring different types of image data. For example, the aerial camera may include a high resolution visible light lens and an infrared thermal imaging lens that provide high resolution image data and infrared image data, respectively, of the target area.
Specifically, high resolution refers to image data conforming to a high-definition or ultra-high-definition image standard (for example, 4K resolution). The high-resolution visible light lens can provide fine information of the target area, supporting the local magnification function so that the user can enlarge a target or local area of interest during aerial reconnaissance.
Of course, the high-resolution visible light lens may also be a lens with optical zoom capability (e.g., a 4K-resolution high-definition camera supporting 30x optical zoom), so as to enlarge the local area of interest by changing the optical focal length.
In some embodiments, the aerial camera may further include a large-field-of-view lens having a larger field of view angle (e.g., 140° or greater). Relative to the high-resolution visible light lens, a lens with lower resolution (e.g., VGA) may be selected for it.
The image transmission device 13 is a data transmission device arranged at the unmanned aerial vehicle end and used for transmitting the processed video data to an intelligent terminal or other ground end equipment.
Specifically, as shown in fig. 2, an independent image encoding chip 15 may be further mounted on the unmanned aerial vehicle. The image encoding chip 15 is disposed between the image transmission device 13 and the image processing system 12, and is configured to encode the picture-in-picture image output by the image processing system 12 in a preset encoding format (e.g., H265) and output it to the image transmission device 13.
Of course, the image encoding chip 15 can also be integrated as a functional module into another functional system of the unmanned aerial vehicle.
The mass storage device 14 is a device module connected to the aerial camera, and may be any type of non-volatile storage device, such as an SD card, a flash memory, or an SSD hard disk.
Image data (such as high-resolution images or infrared thermal imaging images) obtained by shooting through different lenses of the aerial camera are respectively and independently stored in different storage areas, and can be exported or provided for subsequent functions as backup and record of original data.
The data collected by the aerial camera may be stored in the mass storage device 14 in one or more forms; for example, it may be stored as JPEG-encoded pictures, or as H265-encoded video files.
The image processing system 12 is a hardware processing system for receiving image data captured by the aerial camera and performing one or more image processing operations such as compressing, overlaying, and cropping. For example, the output of the image processing system may be a picture-in-picture image formed by superimposing multiple pictures based on multiple channels of data captured by different lenses of the aerial camera.
Fig. 3 is a functional block diagram of an image processing system according to an embodiment of the present invention. As shown in fig. 3, the image processing system may include: a cropping module 121 and a picture-in-picture synthesizing module 122.
The image processing system can be provided with corresponding data inputs, connected with the aerial camera in a wired or wireless manner, for receiving the first image data, second image data, third image data, and so on acquired by the aerial camera and providing them to subsequent functional modules for processing.
In the present embodiment, the first image data represents image data from the high-resolution visible light lens, the second image data represents infrared data from the infrared thermal imaging lens, and the third image data represents global image data from the large-field-of-view lens.
The cropping module 121 is a pre-processing unit for cropping image data to obtain a target area of interest. The target area may be determined by user commands during aerial photography; for example, the user may select the target area in one or more ways, such as with a joystick. The first image data is image data captured by the high-resolution lens; it has more pixels and still maintains good resolution after cropping and magnification.
The picture-in-picture synthesis module 122 is a hardware module for overlaying multiple channels of video data to generate a picture-in-picture image. Picture-in-picture is an existing video content presentation mode: one channel of image data is played on a partial area of the full-screen playing picture (as shown in fig. 4) while another channel of image data is played full-screen.
In another embodiment, when the picture-in-picture synthesis module 122 receives the third image data (e.g., from the large-field-of-view lens), it may further superimpose a third picture on the picture-in-picture image to show the third image data (as shown in fig. 4).
Fig. 4 is a specific example of a picture-in-picture image according to an embodiment of the present invention. As shown in fig. 4, in this embodiment, the picture-in-picture image includes a main display picture 41 and two sub display pictures 42 and 43 superimposed on the main display picture. Any of the first picture, the second picture, and the third picture can be selected as the main display picture; after the main display picture is selected, the other pictures serve as the sub display pictures. In fig. 4, the first picture is taken as the main display picture.
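As an illustration only — the pixel values, window sizes, and positions below are assumptions for demonstration, not taken from the patent — the superposition of sub display pictures onto a main display picture can be sketched in Python, with images represented as 2-D lists of pixel values:

```python
def compose_pip(main, subs):
    """Overlay each (picture, top, left) in `subs` onto a copy of `main`.

    Each sub-picture simply replaces the corresponding rectangular
    region of the main picture, as in a picture-in-picture layout.
    """
    out = [row[:] for row in main]  # copy so the main picture is untouched
    for pic, top, left in subs:
        for r, row in enumerate(pic):
            for c, px in enumerate(row):
                out[top + r][left + c] = px
    return out

# 8x8 main picture (value 0) with two 2x2 sub-pictures (values 1 and 2)
# pasted into the top-left and top-right corners
main = [[0] * 8 for _ in range(8)]
sub1 = [[1, 1], [1, 1]]
sub2 = [[2, 2], [2, 2]]
pip = compose_pip(main, [(sub1, 0, 0), (sub2, 0, 6)])
```

In a real system the main picture would be the full-screen first picture and the subs the second and third pictures, scaled down before pasting.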
In this way, the user can observe the target area of interest to be enlarged in the main display picture while, through the two sub display pictures, keeping attention on the global field of view and the infrared imaging information, obtaining a better and more comprehensive aerial reconnaissance result.
It should be noted that fig. 3 is an example functional block diagram describing the structure of the image processing system provided by the embodiment of the present invention. Based on the inventive concepts disclosed in this specification, those skilled in the art can choose software, hardware, or a combination of the two to implement the functions of one or more functional modules according to practical requirements (e.g., chip power consumption, heat generation limits, silicon cost, or chip area).
Fig. 5 is a block diagram of an electronic computing device according to an embodiment of the present invention. The electronic computing device can be used to implement the image processing system provided by the embodiment of the present invention, execute the functions of the functional modules shown in fig. 3, and output the corresponding picture-in-picture image.
As shown in fig. 5, the electronic computing platform 50 may include: a processor 51, a memory 52 and a communication module 53. The processor 51, the memory 52 and the communication module 53 establish a communication connection therebetween in a bus manner.
The processor 51 may be any type of integrated circuit having one or more processing cores. It may perform single-threaded or multi-threaded operations, resolving instructions to fetch data, perform logical operations, and issue the processing results. The processor 51 has a number of data interfaces that may be configured as data inputs or outputs.
The memory 52 serves as a non-volatile computer-readable storage medium, such as at least one disk storage device, flash memory device, distributed storage device remotely located from the processor 51, or other non-volatile solid-state storage device.
The memory 52 may have a program storage area for storing non-volatile software programs, non-volatile computer-executable programs, and modules for the processor 51 to call, so as to cause the processor 51 to perform one or more method steps. The memory 52 may also have a data storage area for storing the processing results issued by the processor 51.
The communication module 53 is a hardware unit for establishing a communication connection and providing a physical channel for data transmission. The communication module 53 may be any type of wireless or wired communication module, including but not limited to a WiFi module or a bluetooth module, etc., that interfaces with other systems or functional units in the drone 10 (e.g., a picture transfer system), provides processed picture-in-picture image data or raw data collected by an aerial camera.
With continued reference to fig. 1, the smart terminal 20 may be any type of smart device for establishing a communication connection with the drone, such as a mobile phone, a tablet computer, or a smart remote controller. The smart terminal 20 may be equipped with one or more different user interaction devices, through which user instructions are collected and information is presented and fed back to the user.
For example, the smart terminal 20 may be equipped with a touch display screen through which a user's remote control command for the drone is received and a picture-in-picture image is presented to the user through the touch display screen.
Through the intelligent terminal 20, the user on the ground can observe three channels of image information (namely the high-resolution visible light image, the infrared thermal imaging image, and the VGA large-field-of-view image), keeping attention on the global view and the infrared imaging while observing the enlarged region of interest.
The wireless network 30 may be a wireless communication network based on any type of data transmission principle for establishing a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network, or a combination thereof in different signal frequency bands. The frequency band or network format specifically used by the wireless network 30 depends on the communication devices used by the drone 10 and the smart terminal 20 (e.g., the specific image transmission devices used).
The application environment shown in fig. 1 only shows the application of the image processing system to the drone for aerial reconnaissance. It will be understood by those skilled in the art that the image processing system can also be mounted on other types of mobile vehicles (such as a remote-control car) to receive multiple channels of image data collected by multiple cameras and perform the same function. The inventive concept of the image processing system provided by the embodiments of the utility model is not limited to use on the unmanned aerial vehicle shown in fig. 1.
Fig. 6 is a flowchart of an image processing process according to an embodiment of the present invention. The image processing process can be performed by the image processing system disclosed in the above embodiments and/or a functional module connected thereto. As shown in fig. 6, the method may include the following steps:
601. First image data and second image data are received, the first image data being image data from the high-resolution visible light lens, the second image data being infrared data from the infrared thermal imaging lens.
In other embodiments, further steps may include more channels of input image data to provide more comprehensive image information (e.g., third image data provided by the large-field-of-view lens).
The terms "first image data," "second image data," and "third image data" are only used to distinguish image data acquired from different lenses or image acquisition devices, and do not limit the data itself.
602. In the first image data, a target area is cropped.
"cropping" is a commonly used processing means in image processing, and as shown in fig. 8, means to cut out a part of image data from image frames with a larger size.
The target area refers to the area or position of an object of interest (such as a building or mountain land) in the process of aerial photography investigation by a user. The size of the target area may be generally limited by the size of the display screen of the smart terminal 20.
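As an illustration (not part of the patent), cropping a rectangular window out of a larger frame, with a 2-D list standing in for an image, can be sketched in Python as:

```python
def crop(image, top, left, height, width):
    """Cut a height x width window out of a 2-D list of pixels."""
    return [row[left:left + width] for row in image[top:top + height]]

# 4x6 test frame whose pixel at (r, c) has value 10*r + c
frame = [[10 * r + c for c in range(6)] for r in range(4)]
window = crop(frame, 1, 2, 2, 3)  # 2x3 region starting at row 1, col 2
```

A real implementation would operate on decoded frame buffers, but the indexing is the same.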
603. The target area is presented in a first picture and the second image data is presented in a second picture, respectively.
The first picture and the second picture are both independent display windows, each displaying one channel of image data. Each window plays independently and can be resized (for example, enlarged or reduced according to a control instruction). In some embodiments, when there are three or more channels of image data, more pictures (a third picture, and so on) can be correspondingly provided, so that multiple channels of different image data can be independently displayed and played in different display windows.
604. The first picture and the second picture are superimposed to generate a picture-in-picture image.
For example, as shown in fig. 4, the first picture can have the largest area (occupying the entire display screen of the intelligent terminal), and the second picture and the third picture are respectively superimposed on the first picture to form a picture-in-picture image including the first picture, the second picture, and the third picture.
Fig. 7 is a flowchart of a method of processing an image according to a preferred embodiment of the present invention. As shown in fig. 7, after step 604, the method may further include the steps of:
605. The picture-in-picture image is encoded, and the encoded picture-in-picture image data is transmitted.
The encoding process may be performed by an encoding chip or an encoding module, and the encoded picture-in-picture image data is transmitted as complete data through the image transmission device to the intelligent terminal 20 for playing.
Therefore, there is no data synchronization problem among the first picture, the second picture, and the third picture, and a better user experience can be provided.
With continued reference to fig. 7, the picture-in-picture image is transmitted to the intelligent terminal 20 after being synthesized, so the original image data collected by the aerial camera would not otherwise be retained. In some embodiments, therefore, in addition to synthesizing the picture-in-picture image, the method includes the following step:
606. The first image data, the second image data, and the third image data are independently stored in corresponding storage areas.
The storage area may be provided by the mass storage device of the aerial camera or the drone. In this way, the aerial camera's original image data is also retained for the user to export and use later, reducing repeated aerial reconnaissance and improving reconnaissance results.
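A toy Python model of step 606 — stream names and frame labels are illustrative assumptions, not from the patent — showing each lens's data going into its own independent storage area:

```python
class MassStorage:
    """Minimal model of per-lens storage: one area per stream name."""

    def __init__(self):
        self.areas = {}  # stream name -> list of stored frames

    def store(self, stream, frame):
        """Append a frame to the storage area for `stream`."""
        self.areas.setdefault(stream, []).append(frame)

store = MassStorage()
store.store("visible", "frame-v1")    # high-resolution visible light stream
store.store("infrared", "frame-i1")   # infrared thermal imaging stream
store.store("visible", "frame-v2")
```

Keeping the streams separate preserves the raw data for later export, independent of the composited picture-in-picture output.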
Fig. 9 is a flowchart of a specific example of cropping the target area according to an embodiment of the present invention. In actual use, the user is allowed to send control commands from the intelligent terminal 20 or another type of remote controller to adjust the target area to be enlarged and viewed. As shown in fig. 9, the image processing system may perform the cropping of the target area as follows:
901. The center coordinates and magnification of the target area are selected.
The center coordinates refer to the location of the center of the target area in a coordinate system, typically two-dimensional, by which the position of the target area can be located, as shown in fig. 8. For the convenience of the user in controlling and adjusting the target area, the center coordinates may appear as a circle center or other graphical element on the display.
The magnification is a parameter representing the degree of magnification of the local image. The first image data is high-pixel image data; therefore, when viewing the aerial image on the intelligent terminal 20, the user can enlarge a target area of interest as needed to obtain more details.
902. The first image data is enlarged according to the magnification.
The user may issue a control command on the smart terminal 20 or another remote controller to set the magnification (e.g., 2x, 3x, or more). After the magnification is determined, the first image data may be enlarged to the corresponding size, as shown in fig. 8.
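For an integer magnification, the enlargement of step 902 can be sketched as nearest-neighbor upscaling. The patent does not specify an interpolation method, so this is only an illustrative assumption:

```python
def enlarge(image, n):
    """Nearest-neighbor upscale of a 2-D pixel list by integer factor n."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(n)]  # repeat each pixel n times
        out.extend([wide[:] for _ in range(n)])      # repeat each row n times
    return out

img = [[1, 2], [3, 4]]
big = enlarge(img, 2)  # 2x2 input becomes a 4x4 result
```

Production systems would typically use bilinear or bicubic interpolation for better visual quality; the geometry (a w x h image becoming n*w x n*h) is the same.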
903. The position of the target area is determined in the enlarged first image data based on the center coordinates.
For example, when the first picture is used as the main display picture, the first picture can be made equal to the size of the display screen of the intelligent terminal, so that a region of corresponding size cut from the enlarged first image data is played in the first picture.
As shown in fig. 8, after the center coordinates and the size of the target area are determined, the portion of the enlarged first image data to be cropped and displayed can be determined.
Fig. 10 shows a cropping process of first image data according to an embodiment of the present invention. As shown in fig. 10, if the position of the center coordinate is too close to an edge of the enlarged first image data, part of the target area would be left empty with no image. Therefore, in some embodiments, the method further includes the following steps:
First, it is determined whether the distance between the center coordinates and each edge of the enlarged first image data is larger than the allowable size.
If not, it is determined that the center is too close to that edge, and the target area needs to be cropped flush against the corresponding edge of the enlarged first image data to keep the target area from being left partially empty.
The above-described determination and cutting process is described in detail below, taking the embodiment shown in fig. 10 as an example:
First, when the user observes an area or target of interest during aerial reconnaissance, the user can move the cursor to the corresponding position and send a zoom-in instruction through an interactive device (such as a joystick, a mouse wheel, or a touch button) of the smart terminal or remote controller on the ground side.
Then, after the image processing system enlarges the first image data by the corresponding multiple according to the zoom-in instruction, it determines, according to the size of the first picture and the position of cursor A (i.e., the center coordinates), whether the allowable-size requirement can be satisfied so as to keep the cursor at the center of the target area.
Wherein the criterion of the allowable size is composed of the following four conditions:
The first condition:

x ≥ w/(2n)

The second condition:

w − x ≥ w/(2n)

The third condition:

y ≥ h/(2n)

The fourth condition:

h − y ≥ h/(2n)
where w and h are the image width and the image height of the first image data, respectively; (x, y) are the center coordinates; and n is the magnification.
As shown in fig. 10, the first to fourth conditions respectively indicate the degree of closeness between the target area and the four edges of the enlarged first image data.
Thus, when one or more conditions are not satisfied, the crop border is shifted to fit against the corresponding edge (i.e., the edge with the smallest margin is selected).
In other words, there are four cases in which the allowable size is not satisfied: when the first condition is not satisfied, the target area is cropped against the left edge of the enlarged first image data; when the second condition is not satisfied, against the right edge; when the third condition is not satisfied, against the top edge; and when the fourth condition is not satisfied, against the bottom edge.
When all four conditions are satisfied, the enlarged first image data is large enough to keep the cursor A at the center of the target area.
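The determination and edge-clamping steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and variable names are assumptions, and the crop window is taken to be w x h pixels inside the enlarged (n·w x n·h) image.

```python
def crop_target_area(w, h, x, y, n):
    """Sketch of the allowable-size check and clamped crop (names assumed).

    w, h : width and height of the first image data
    x, y : center coordinates (cursor position) in the original image
    n    : magnification
    Returns the (left, top, right, bottom) crop rectangle of the target
    area inside the enlarged image, shifted against an edge whenever the
    cursor is too close to it.
    """
    cx, cy = n * x, n * y          # cursor position in the enlarged image
    # Ideal placement: cursor centered in a w x h target area.
    left = cx - w / 2
    top = cy - h / 2
    # The four conditions fail exactly when left/top fall outside these
    # bounds; clamping shifts the crop against the corresponding edge.
    left = min(max(left, 0), n * w - w)
    top = min(max(top, 0), n * h - h)
    return (left, top, left + w, top + h)
```

For example, with a 100 x 80 image at 2x magnification, a cursor at (50, 40) stays centered, while a cursor near the left edge at (5, 40) yields a crop flush against the left edge.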
It should of course be understood that, although FIG. 10 illustrates the cropping of the first image data, those skilled in the art may choose to apply the above image processing method to crop the image data of the infrared thermal imaging lens and display it in a picture-in-picture image without departing from the spirit and principles of the present invention.
In summary, the image processing system provided by the embodiments of the present utility model allows the user, during aerial reconnaissance, to magnify a local area by a fixed multiple and precisely inspect a target of interest, while a global image is provided at the same time so that the user can keep track of the overall shooting situation.
In addition, the real-time transmission and synthesis of the picture-in-picture image conveniently allow a user to view aerial image data on the smart terminal in real time, and avoid the problem of multiple image streams falling out of synchronization.
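The picture-in-picture synthesis described above (superimposing a second picture onto the first picture before transmission) can be illustrated with a short sketch. Function names and the nested-list frame representation are assumptions for illustration only, not the patent's hardware implementation:

```python
def compose_picture_in_picture(first, second, offset):
    """Sketch of the picture-in-picture synthesis step (names assumed).

    first  : the first picture, a nested list of pixel values (rows x cols)
    second : the smaller second picture to superimpose
    offset : (row, col) position of the second picture's top-left corner
    Returns a new composed frame; the inputs are not modified.
    """
    frame = [row[:] for row in first]       # copy the first picture
    r0, c0 = offset
    for r, row in enumerate(second):
        for c, px in enumerate(row):
            frame[r0 + r][c0 + c] = px      # second picture overwrites its window
    return frame
```

Composing the two streams into a single frame before encoding is one way to keep the visible-light and infrared pictures inherently synchronized, since only one image stream is transmitted.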
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The computer software may be stored in a computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments can be combined, steps can be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention.

Claims (4)

  1. An image processing system, comprising:
    a cropping module, configured to crop first image data to obtain a target area, the first image data being image data from a high-resolution visible light lens;
    a picture-in-picture synthesizing module, connected to the cropping module and configured to superimpose a first picture and a second picture to generate a picture-in-picture image, wherein the target area is displayed in the first picture, second image data is displayed in the second picture, and the second image data is infrared data from an infrared thermal imaging lens; and
    an encoding module;
    wherein the cropping module, the picture-in-picture synthesizing module and the encoding module are implemented by a hardware circuit.
  2. The image processing system according to claim 1, wherein
    the picture-in-picture synthesizing module is further configured to superimpose a third picture on the first picture to generate a picture-in-picture image containing the first picture, the second picture and the third picture, wherein third image data is displayed in the third picture, and the third image data is image data from a large-field-of-view lens.
  3. A drone, comprising a fuselage body, an aerial camera carried on the fuselage body, and the image processing system according to claim 1 or 2, wherein
    the aerial camera comprises a high-resolution visible light lens, a large-field-of-view lens and an infrared thermal imaging lens; and the image processing system is installed in the fuselage body, is connected with the aerial camera, and is configured to output a corresponding picture-in-picture image according to a user instruction.
  4. The drone of claim 3, further comprising a mass storage device, wherein the mass storage device is configured to independently store image data captured through different lenses of the aerial camera.
CN201821757917.6U 2018-10-26 2018-10-26 Image processing system and unmanned aerial vehicle Active CN210007799U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201821757917.6U CN210007799U (en) 2018-10-26 2018-10-26 Image processing system and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201821757917.6U CN210007799U (en) 2018-10-26 2018-10-26 Image processing system and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN210007799U 2020-01-31

Family

ID=69299008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201821757917.6U Active CN210007799U (en) 2018-10-26 2018-10-26 Image processing system and unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN210007799U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837195A (en) * 2021-10-19 2021-12-24 维沃移动通信有限公司 Image processing method, device, equipment and storage medium
WO2023077351A1 (en) * 2021-11-04 2023-05-11 合肥英睿系统技术有限公司 Picture-in-picture display method and apparatus, and electronic device and readable storage medium


Similar Documents

Publication Publication Date Title
CN109151402B (en) Image processing method and image processing system of aerial camera and unmanned aerial vehicle
US20190246104A1 (en) Panoramic video processing method, device and system
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
US11644839B2 (en) Systems and methods for generating a real-time map using a movable object
US11876951B1 (en) Imaging system and method for unmanned vehicles
US11840357B2 (en) Method and device for dual-light image integration, and unmanned aerial vehicle
CN110366745B (en) Information processing device, flight control instruction method, program, and recording medium
US20200084424A1 (en) Unmanned aerial vehicle imaging control method, unmanned aerial vehicle imaging method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle
CN103905761A (en) Image-processing system, image-processing method and program
JP6225147B2 (en) Controller terminal and wireless aircraft control method
US10602064B2 (en) Photographing method and photographing device of unmanned aerial vehicle, unmanned aerial vehicle, and ground control device
US9418299B2 (en) Surveillance process and apparatus
CN210007799U (en) Image processing system and unmanned aerial vehicle
WO2017166714A1 (en) Method, device, and system for capturing panoramic image
US20230360171A1 (en) Voronoi Cropping of Images for Post Field Generation
JP2017067834A (en) A taken image display device of unmanned aircraft, taken image display method, and taken image display program
CN110519540A (en) A kind of image processing method, device, equipment and storage medium
WO2020179869A1 (en) Information-processing device and information-processing program
CN112585939A (en) Image processing method, control method, equipment and storage medium
JP7467402B2 (en) IMAGE PROCESSING SYSTEM, MOBILE DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM
KR20210106422A (en) Job control system, job control method, device and instrument
WO2021237625A1 (en) Image processing method, head-mounted display device, and storage medium
WO2021083150A1 (en) Zoom method and device, aircraft, flight system and storage medium
WO2021115192A1 (en) Image processing device, image processing method, program and recording medium
WO2021079636A1 (en) Display control device, display control method and recording medium

Legal Events

Date Code Title Description
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518055 Guangdong city of Shenzhen province Nanshan District Xili Street Xueyuan Road No. 1001 Chi Yuen Building 9 layer B1

Patentee after: Shenzhen daotong intelligent Aviation Technology Co.,Ltd.

Address before: 518055 Guangdong city of Shenzhen province Nanshan District Xili Street Xueyuan Road No. 1001 Chi Yuen Building 9 layer B1

Patentee before: AUTEL ROBOTICS Co.,Ltd.