KR101723028B1 - Image processing system for integrated management of image information changing in real time - Google Patents


Info

Publication number
KR101723028B1
Authority
KR
South Korea
Prior art keywords
yawing
image
motor
unit
pitching
Prior art date
Application number
KR1020160123246A
Other languages
Korean (ko)
Inventor
정공운
Original Assignee
서광항업 주식회사
Priority date
Filing date
Publication date
Application filed by 서광항업 주식회사 filed Critical 서광항업 주식회사
Priority to KR1020160123246A priority Critical patent/KR101723028B1/en
Application granted granted Critical
Publication of KR101723028B1 publication Critical patent/KR101723028B1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B64C39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • B64C2201/108
    • B64C2201/127

Abstract

The present invention relates to an image processing system that integrally manages image information changing in real time. A low-resolution wide-area camera and a high-resolution tracking camera are mounted on an unmanned aerial vehicle so that moving objects on the ground or at sea can be tracked. When a suspicious object is found in the integrally managed, real-time changing image information from the wide-area camera, it is photographed with the high-resolution tracking camera, so that identification and tracking of moving objects and of objects that change in real time are performed effectively.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] The present invention relates, among technologies in the field of image processing systems, to an image processing system that integrally manages image information changing in real time, and more particularly, to an image processing system that integrally manages real-time changing image information so that real-time identification and tracking of moving objects on the ground or at sea are performed effectively.

Generally, vehicles or helicopters have occasionally been used to track objects moving on the ground, such as stolen vehicles, escape vehicles, or wild animals.

However, when tracking with a vehicle, tracking is often impossible because of the volume of traffic on the road.

Tracking with a helicopter, on the other hand, is possible regardless of road traffic. However, since helicopters in Korea are kept at separate airfields, it takes a long time from takeoff to the start of tracking, so tracking is delayed.

Meanwhile, in Korea, illegal fishing by Chinese vessels in the West Sea has become serious enough to raise concerns not only about reduced domestic catches but also about depletion of fishery resources.

The Coast Guard has therefore operated a maritime crackdown force consisting of large vessels, high-speed assault boats, and helicopters to control illegal fishing. However, because illegal fishing vessels operate mixed in with Korean vessels, enforcement is difficult.

Accordingly, technologies have been developed that use an unmanned aerial vehicle to photograph a moving vehicle or wild animal on the ground, or an illegal fishing vessel at sea, and post-process the captured data for effective tracking.

As an example, Korean Patent No. 10-1417498 (registered Jul. 02, 2014), "Image processing apparatus and method using an unmanned aerial vehicle acquisition image" (hereinafter referred to as the "prior art"), comprises an extraction unit for extracting a foreground region by separating background and foreground from an input image, a generation unit for generating a descriptor by extracting features of the foreground region, a retrieval unit for retrieving information of the object by comparing the descriptor with a database, and a preprocessing unit for performing preprocessing on the foreground region to enhance the ease and accuracy of descriptor generation, thereby enabling illegal fishing control to be performed effectively.

However, the prior art is limited in accurately identifying an object by the resolution of the camera mounted on the UAV. Using a high-resolution camera could be considered, but such a camera covers only a narrow area, so when hundreds to thousands of illegal fishing vessels and Korean vessels are mixed together, identification is still limited. Likewise, when the technique is used to track an object on the ground, many other vehicles travel on the roads used by a stolen or escape vehicle, so identification is limited there as well.

Therefore, a technology is required that can easily identify and effectively track not only illegal fishing vessels moving at sea but also objects moving on the ground, such as stolen vehicles, escape vehicles, or wild animals.

Korean Patent No. 10-1417498 (Registered on Jul. 02, 2014) "Image Processing Apparatus and Method Using Unmanned Aerial Vehicle Acquired Image"

Accordingly, an object of the present invention is to provide an image processing system that integrally manages image information changing in real time for objects moving on the ground or at sea, so that real-time identification and tracking of such moving objects are performed effectively.

According to an aspect of the present invention, there is provided an image processing system for integrally managing image information changing in real time, the system including: an unmanned aerial vehicle including an aircraft body, a plurality of propeller supports coupled radially to the edges of the aircraft body, a plurality of propeller motors mounted on the outer ends of the propeller supports, a plurality of propellers coupled to the motor shafts of the propeller motors, a landing mechanism installed at a lower portion of the aircraft body, a control box mounted on the upper surface of the aircraft body and containing a horizontal sensing sensor and a control unit, a GPS antenna for confirming the flight position, an altimeter for checking the flight altitude, a remote control receiver for remote control, and an information transmission antenna; a wide-area camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor; a full-HD-class wide-area camera mounted on the pitching operation unit of the wide-area camera gimbal; a tracking camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor; a UHD-class tracking camera mounted on the pitching operation unit of the tracking camera gimbal; an extraction unit for extracting a foreground region by separating background and foreground from an input image captured and provided by the wide-area camera and the tracking camera; a generation unit including a first processing unit for generating an integral image of the foreground region, a second processing unit for extracting at least one feature point by applying the integral image to an approximated Hessian detector, and a third processing unit for generating a descriptor by applying the SURF algorithm to the at least one feature point; a retrieval unit for retrieving information of the object by comparing the descriptor with a database; and a preprocessing unit for performing preprocessing, including at least one of noise removal and image brightness remapping, on the foreground region to enhance the ease and accuracy of descriptor generation.

According to the image processing system of the present invention for integrally managing real-time changing image information, a low-resolution wide-area camera and a high-resolution tracking camera are mounted on the unmanned aerial vehicle, and the real-time changing image information of objects moving on the ground or at sea is integrally managed. When a suspicious object is found as a tracking target in this image information, it is photographed with the high-resolution tracking camera, so that identification and tracking of moving objects and of objects changing in real time are performed effectively.

FIGS. 1 to 11 show a preferred embodiment of an image processing system for integrally managing image information changing in real time according to the present invention, in which:
FIG. 1 is a block diagram of the image processing apparatus for identifying an object from an image captured by the unmanned aerial vehicle;
FIG. 2 is a block diagram of the generation unit included in the image processing apparatus;
FIG. 3 is a perspective view of the unmanned aerial vehicle equipped with the wide-area camera and the tracking camera;
FIG. 4 is an exploded perspective view showing the mounting structure of the wide-area camera and the tracking camera;
FIG. 5 illustrates an embodiment of extracting an object in the image processing apparatus;
FIG. 6 illustrates an embodiment of performing brightness remapping in the preprocessing process of the image processing apparatus;
FIG. 7 illustrates an embodiment of extracting feature points in the image processing apparatus;
FIG. 8 illustrates an embodiment of measuring similarity using multiple thresholds in the image processing apparatus;
FIG. 9 illustrates an embodiment of measuring similarity using a tree in the image processing apparatus;
FIG. 10 is a flowchart of an image processing method for identifying an object from an image captured by the unmanned aerial vehicle; and
FIG. 11 is a flowchart of the preprocessing method of the image processing method.

Hereinafter, an image processing system for integrally managing image information changing in real time according to the present invention will be described in detail with reference to the preferred embodiments illustrated in the accompanying drawings.

FIGS. 1 to 11 show a preferred embodiment of an image processing system for integrally managing image information changing in real time according to the present invention.

The image processing system for integrally managing image information changing in real time according to the present embodiment includes an image processing apparatus 100 that identifies an object from images captured by a wide-area camera 300 and a tracking camera 400 mounted on the unmanned aerial vehicle 10.

The image processing apparatus 100 includes an extraction unit 110, a preprocessing unit 120, a generation unit 130, and a search unit 140.

The extraction unit 110 can extract a foreground region by separating the background and foreground from an input image captured and transmitted by the wide-area camera 300 and the tracking camera 400 of the UAV 10, and can normalize the extracted region to a predefined size so that it matches the size of the images stored in the database.

The generating unit 130 may extract a feature of the foreground region to generate a descriptor, and the searching unit 140 may search the information of the object by comparing the descriptor with a database.

The image processing apparatus 100 may be provided in a search system capable of analyzing images taken by the unmanned aerial vehicle to identify a stolen vehicle, an escape vehicle, wildlife, or a vessel operating illegally in Korean waters.

An image photographed using the wide area camera 300 and the tracking camera 400 mounted on the unmanned air vehicle 10 can be transmitted to the control vehicle or the control vessel.

The UAV 10 includes an aircraft body 11, a plurality of (four in the figure) propeller supports 12 coupled radially to the edges of the aircraft body 11, a plurality of (four in the figure) propeller motors 13 mounted on the outer ends of the propeller supports 12, a plurality of propellers 14 coupled to the motor shafts of the propeller motors 13, and a landing mechanism 15 installed at a lower portion of the aircraft body 11; a so-called 'drone' can be used.

The wide-area camera 300 is mounted on the lower surface of the aircraft body 11 via the wide-area camera gimbal 500, and the tracking camera 400 via the tracking camera gimbal 600.

As the wide-area camera 300, a digital camera capable of capturing full-HD-class video can be used; as the tracking camera 400, a digital camera capable of capturing UHD-class video or still images can be used.

The wide-area camera gimbal 500 and the tracking camera gimbal 600 respectively include mounting bases 510 and 610 fastened to the underside of the aircraft body 11, yawing motors 520 and 620 mounted on the mounting bases 510 and 610, yawing operation units 530 and 630 yawed by the yawing motors 520 and 620, rolling motors 540 and 640 mounted on the yawing operation units 530 and 630, rolling operation units 550 and 650 rolled by the rolling motors 540 and 640, pitching motors 560 and 660 mounted on the rolling operation units 550 and 650, and pitching operation units 570 and 670 pitched by the pitching motors 560 and 660, on which the wide-area camera 300 and the tracking camera 400 are respectively mounted.

In order for the wide-area camera 300 and the tracking camera 400 to photograph vertically downward even when the unmanned aerial vehicle 10 is not horizontal during flight, a control box 20 containing a horizontal sensing sensor S and a control unit is mounted on the upper surface of the aircraft body 11.

The control unit in the control box 20 controls the yawing motor 520, the rolling motor 540, and the pitching motor 560 according to the detection signal of the horizontal sensing sensor S, so that the yawing operation of the yawing operation unit 530, the rolling operation of the rolling operation unit 550, and the pitching operation of the pitching operation unit 570 keep the lower surface of the pitching operation unit 570, on which the wide-area camera 300 is mounted, always horizontal; the wide-area camera 300 can thus always photograph vertically downward.
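
As a concrete illustration of this stabilization logic, the following is a minimal Python sketch of such a control loop. The sensor and motor interfaces (read_attitude, set_angle) and the 50 Hz update rate are hypothetical stand-ins; the patent only specifies that the control unit drives the gimbal motors from the horizontal sensing sensor S.

```python
import time

class GimbalStabilizer:
    # sensor.read_attitude() and motor.set_angle() are hypothetical
    # interfaces standing in for the horizontal sensing sensor S and the
    # rolling/pitching motors 540 and 560 of the wide-area camera gimbal.
    def __init__(self, sensor, roll_motor, pitch_motor):
        self.sensor = sensor
        self.roll_motor = roll_motor
        self.pitch_motor = pitch_motor

    def step(self):
        # Command the opposite of the airframe attitude so the pitching
        # operation unit stays level and the camera points straight down.
        roll_deg, pitch_deg = self.sensor.read_attitude()
        self.roll_motor.set_angle(-roll_deg)
        self.pitch_motor.set_angle(-pitch_deg)

    def run(self, rate_hz=50.0):
        # Fixed-rate loop; 50 Hz is an assumed update rate.
        while True:
            self.step()
            time.sleep(1.0 / rate_hz)
```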

In addition, the control unit built in the control box 20 is configured to control the propeller motor 13 according to the remote control of the operator received by the remote control receiver 50.

The tracking camera gimbal 600 supporting the tracking camera 400 is not directly controlled by the horizontal sensing sensor S and the control unit in the control box 20; instead, it can be configured to be controlled by the operator's remote control when an object suspected of being a stolen vehicle, an escape vehicle, a wild animal, or an illegal fishing vessel is found.

The control vehicle or control vessel can search for the photographed vehicle or ship type against previously constructed vehicle, wildlife, and ship database image information, and the retrieved information is transmitted to the operator of the unmanned aerial vehicle 10 so that the task can be performed efficiently.

The UAV 10 includes a GPS antenna 30 for confirming the flight position, an altimeter 40 for checking the flight altitude, a remote control receiver 50 for remote control, and an information transmission antenna 60; a GPS receiver (not shown) to which the GPS antenna 30 is connected; and an information transmission unit (not shown) for transmitting the GPS coordinate information from the GPS receiver, the altitude information from the altimeter 40, and the image information from the wide-area camera 300 and the tracking camera 400 through the information transmission antenna 60.

As shown in FIG. 2, the generation unit 130 may generate a descriptor by extracting features of the foreground region extracted by the extraction unit 110 from an input image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10. The generation unit 130 may include a first processing unit 210, a second processing unit 220, and a third processing unit 230.

The first processing unit 210 may generate an integral image of the foreground region. The second processing unit 220 may extract at least one feature point by applying the integral image to an approximated Hessian detector. The third processing unit 230 may generate the descriptor using the SURF (Speeded Up Robust Features) algorithm for the at least one feature point.
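
The integral image produced by the first processing unit 210 can be illustrated with a short sketch. The summed-area-table construction below is the standard technique the SURF detector relies on; the helper names are illustrative, not from the patent.

```python
import numpy as np

def integral_image(gray):
    # Summed-area table with a leading zero row/column:
    # ii[y, x] = sum of gray[:y, :x].
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum of gray[y0:y1, x0:x1] in four lookups: this constant-time box
    # filter is what the approximated Hessian detector evaluates per scale.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```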

FIG. 5 shows an embodiment of extracting a foreground region in the image processing apparatus that identifies an object from an image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10.

As shown in FIG. 5 (a), in order to recognize an object in an image photographed by the UAV 10, there should be no feature points in the background portion; feature points must be extracted only from the object.

In this case, the GrabCut algorithm is used to remove the background portion and extract the object from the foreground portion. GrabCut is not a fully automatic object extraction algorithm: the user sets a rectangular window around the area of the object.

Therefore, a rectangular window should be set by the user around the foreground area to be extracted, such as the rectangle drawn around the object in FIG. 5 (a). An object in the foreground region extracted using the GrabCut algorithm is shown in FIG. 5 (b).
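
For illustration, a minimal sketch of this rectangle-initialized GrabCut extraction using OpenCV's cv2.grabCut follows; the 5 iterations and the 256 x 256 normalization size are assumed placeholders, since the patent only says the foreground is normalized to a predetermined size.

```python
import cv2
import numpy as np

def extract_foreground(image_bgr, rect, out_size=(256, 256)):
    # rect = (x, y, w, h): the rectangular window the operator draws
    # around the object, since GrabCut is not fully automatic.
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)       # 5 iterations (assumed)
    # Keep definite and probable foreground pixels as the object mask.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                  1, 0).astype(np.uint8)
    cut = image_bgr * fg[:, :, None]
    # Normalize to the predetermined size used by the database images.
    return cv2.resize(cut, out_size)
```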

FIG. 6 is a diagram illustrating brightness remapping in the preprocessing process of the image processing apparatus 100 that identifies an object from an image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10.

As described above, the image processing apparatus 100 includes the extraction unit 110 for extracting a foreground region by separating background and foreground from an input image captured and provided by the wide-area camera 300 or the tracking camera 400 of the UAV 10, the generation unit 130 for generating a descriptor by extracting features of the foreground region, and the search unit 140 for retrieving information of the object by comparing the descriptor with a database.

It further includes the preprocessing unit 120 for performing preprocessing on the foreground region to enhance the ease and accuracy of descriptor generation.

The preprocessing performed by the preprocessing unit 120 may include at least one of noise removal in the foreground region and brightness remapping of the image.

The preprocessing unit 120 may perform automatic enhancement, by which the representative brightness of the foreground region is estimated. As shown in FIG. 6 (a), when the representative brightness value of the estimated foreground region is, for example, 0.4 or less, brightness remapping can be performed.

Using brightness remapping, the preprocessing unit 120 can improve the dark foreground area of FIG. 6 (a), whose representative brightness value is 0.4 or less, into a bright image as shown in FIG. 6 (b).

Feature points used for recognition are extracted from the extracted object. However, when the object image is dark, few feature points are extracted, which can degrade recognition performance.

To minimize this problem, the brightness of the extracted object image can be estimated and a dark image can be automatically remapped to a brighter one. Here, the brightness is estimated using the cumulative histogram (CI) of the input image and a reference cumulative histogram (CR). The estimated value ranges from 0.0 to 1.0; the closer to 0.0, the darker the object.

When the representative brightness of the image is less than 0.4, brightness can be improved in the brightness channel by using brightness remapping.
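
A sketch of this estimate-then-remap step is shown below. The patent does not give the exact CI/CR comparison or the remapping curve, so the median-based brightness estimate and the gamma curve here are assumptions; only the 0.4 threshold and the use of a cumulative histogram come from the text.

```python
import cv2
import numpy as np

def representative_brightness(channel):
    # Cumulative histogram CI of the input; the value in [0, 1] where CI
    # crosses 0.5 (a median estimate) stands in for the CI/CR comparison.
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    ci = np.cumsum(hist) / hist.sum()
    return float(np.searchsorted(ci, 0.5)) / 255.0

def remap_brightness(image_bgr, threshold=0.4, gamma=0.6):
    # Brighten the brightness (V) channel only when the estimate falls
    # below the 0.4 threshold; the gamma curve is an assumed remapping.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    if representative_brightness(v) < threshold:
        hsv[:, :, 2] = (255.0 * (v / 255.0) ** gamma).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```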

FIG. 7 is a diagram for explaining the process of extracting feature points in the image processing apparatus 100 that identifies an object from an image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10.

After the preprocessing step by the preprocessing unit 120, in which the background is removed, the object is extracted from the foreground, and brightness remapping is performed so that good feature points can be obtained, feature points can be extracted as shown in FIG. 7.

The feature points can be extracted based on the SURF algorithm, which extracts feature points locally. The SURF algorithm is faster than the existing SIFT (Scale Invariant Feature Transform) algorithm while offering similar performance.

FIG. 7 shows the entire image of the foreground region with the feature points extracted using the SURF algorithm. Each extracted feature point may be described by, for example, a 64-dimensional descriptor, which is used for recognition.

To generate the descriptors shown in FIG. 7, the feature points of the foreground region (or moving object) must be extracted. The feature points may be extracted using an approximated Hessian detector after generating an integral image, and the descriptor can then be generated by applying the SURF algorithm to the extracted feature points.
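
As an illustration, the detect-and-describe step can be sketched with OpenCV, assuming a build that includes the contrib modules (the SURF implementation is patented and lives in xfeatures2d rather than core OpenCV):

```python
import cv2

def surf_descriptors(gray, hessian_threshold=400):
    # Requires an opencv-contrib build; the Hessian threshold value
    # is an assumed default, not taken from the patent.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    # detectAndCompute builds the integral image, runs the approximated
    # (fast-)Hessian detector, and computes one 64-dimensional
    # descriptor per keypoint.
    keypoints, descriptors = surf.detectAndCompute(gray, None)
    return keypoints, descriptors   # descriptors: N x 64 float32
```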

FIG. 8 illustrates a method of measuring similarity (similarity measure) using multiple thresholds in the image processing apparatus 100 that identifies an object from an image photographed by the wide-area camera 300 or the tracking camera 400 of the UAV 10. When a single threshold value is used, objects unrelated to the query image may also be retrieved, so a multi-threshold method can be used to improve search accuracy.

A query image can be searched using multiple thresholds, for example three threshold values (0.001, 0.004, 0.0001). The lower the threshold (0.0001), the more feature points are extracted; the higher the threshold (0.001), the fewer.

With multiple thresholds, candidate sets can be extracted for each threshold and the common candidates then taken.

As shown in FIG. 8 (a), the matching-error results T1, T2, and T3 obtained at the three thresholds yield R1 = T1 ∩ T2 ∩ T3, and the line-count (number of matched correspondences) results yield R2 = T1 ∩ T2 ∩ T3; the final result is the intersection R = R1 ∩ R2. The data in the set R thus represent the intersection of the results by matching error and the results by line count.

In the case of FIG. 8 (b), the matching-error and line-count results are shown for each threshold value, and the final overlapping result R can be {1, 4}.
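
The set logic of FIG. 8 can be sketched as follows; the retrieval function search(query, t), returning one candidate set ranked by matching error and one by matched-line count per threshold t, is a hypothetical stand-in.

```python
def multi_threshold_match(search, query, thresholds=(0.001, 0.004, 0.0001)):
    # search(query, t) is a hypothetical retrieval function returning,
    # for threshold t, candidate IDs by matching error and by the
    # number of matched lines (feature correspondences).
    by_error, by_lines = [], []
    for t in thresholds:
        err_ids, line_ids = search(query, t)
        by_error.append(set(err_ids))
        by_lines.append(set(line_ids))
    r1 = set.intersection(*by_error)   # R1 = T1 ∩ T2 ∩ T3 (matching error)
    r2 = set.intersection(*by_lines)   # R2 = T1 ∩ T2 ∩ T3 (line count)
    return r1 & r2                     # final overlap R = R1 ∩ R2
```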

FIG. 9 shows an embodiment in which similarity is measured using a tree in the image processing apparatus 100 that identifies an object from an image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10.

The tree structure includes an internal structure tree 710 and a terminal node tree 720.

When the multiple-threshold method is used, it is possible that no image appears in the multi-threshold result R. Such an outcome may or may not actually be related to (or similar to) the query image.

For example, a database entry that appears in two of the threshold results but not in the third is not selected into the multi-threshold result R.

Thus, even when results are derived using the multi-threshold method, accuracy can be degraded; a tree structure that verifies relevance to the query image may therefore be used to improve accuracy.

As shown in FIG. 9, the tree structure distinguishes the case where duplicate results exist across the three threshold values from the case where none exist, and accuracy is improved by verifying relevance to the query image in each case.

The tree divides into two cases: the case where a result exists in the overlap R (ThResult-123 != 0) and the case where none exists (ThResult-123 == 0).

In the tree structure, if ThResult-13 == 0, that is, the intersection of the 0.001 and 0.0001 results is empty, and ThResult-23 == 0, that is, there is no overlap between the 0.004 and 0.0001 results, then nothing exists and the message "Not Matching" may be output.

Conversely, even when ThResult-123 != 0 in FIG. 9 (duplicate results exist among the three thresholds), a highly accurate result can be obtained by verifying the result rather than outputting it directly.
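
A minimal sketch of this verification tree follows. The pairwise and triple overlap sets (ThResult-13, ThResult-23, ThResult-123) come from the text, but the exact branch order and the verification callback are assumptions.

```python
def verify_with_tree(th13, th23, th123, is_relevant):
    # th13, th23: pairwise duplicate sets (0.001 ∩ 0.0001, 0.004 ∩ 0.0001);
    # th123: duplicates across all three thresholds; is_relevant(candidate)
    # re-verifies a candidate against the query image (assumed callback).
    if th123:
        # Duplicates exist at all three thresholds: verify rather than
        # output directly, which keeps accuracy high.
        return {c for c in th123 if is_relevant(c)}
    if not th13 and not th23:
        return set()   # no overlap anywhere: report "Not Matching"
    # Partial overlap only: verify the pairwise candidates.
    return {c for c in (th13 | th23) if is_relevant(c)}
```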

FIG. 10 is a flowchart of an image processing method for identifying an object from an image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10.

First, an input image, an object image photographed by the wide-area camera 300 of the UAV 10, is provided (811). The image data photographed by the wide-area camera 300 are transmitted to the image processing apparatus 100 through the information transmission antenna 60 provided on the UAV 10.

Since the object image photographed by the low-resolution wide-area camera 300 of the UAV 10 is taken from a high altitude, image quality and image size problems may occur. Interlacing may also occur depending on the performance of the wide-area camera 300.

Once the input image is input, the extracting unit 110 may extract the foreground region by separating the foreground and background from the input image (812). Here, the foreground region can be extracted using the GrabCut algorithm.

As described above, the GrabCut algorithm removes the background from an image and extracts only the object for comparison with the database.

With the GrabCut algorithm, when the user designates the area of an object in an image, the foreground region can be extracted as distinct from the background within that area. In addition, since the size of the input image may differ from that of the images in the database, the extracted foreground region can be normalized to a predetermined size.

For better search performance on the foreground image, a preprocessing process (813) can automatically improve the background-removed foreground area. Preprocessing improves the image because feature points are difficult to extract when the image is dark.

This preprocessing process will be described in detail later.

After preprocessing, the generation unit 130 generates an integral image of the foreground region, applies the integral image to the approximated Hessian detector to extract at least one feature point, and generates a descriptor by applying the SURF algorithm to the at least one feature point (814).

The descriptor may be compared with the database by the search unit 140 so that the degree of similarity with the database may be measured (815).

When an object suspected of being a tracking target is found through the above process, the operator activates the high-resolution tracking camera 400 using the remote controller and simultaneously operates the yawing motor 620, the rolling motor 640, and the pitching motor 660 of the tracking camera gimbal 600, so that the yawing operation of the yawing operation unit 630, the rolling operation of the rolling operation unit 650, and the pitching operation of the pitching operation unit 670 aim the tracking camera 400 at the suspicious object to photograph it.

An input image that is an object image captured by the tracking camera 400 is provided (821). The image photographed by the tracking camera 400 is transmitted to the image processing apparatus 100 through the information transmission antenna 60 provided in the UAV 10.

Once the input image is input, the extracting unit 110 may extract the foreground region by separating the foreground and background from the input image (822). Here, the foreground region can be extracted using the GrabCut algorithm.

As described above, the GrabCut algorithm removes the background from an image and extracts only the object for comparison with the database.

With the GrabCut algorithm, when the user designates the area of an object in an image, the foreground region can be extracted as distinct from the background within that area. In addition, since the size of the input image may differ from that of the images in the database, the extracted foreground region can be normalized to a predetermined size.

For better retrieval performance on the foreground image, a preprocessing process (823) can automatically improve the background-removed foreground area. Preprocessing improves the image because feature points are difficult to extract when the image is dark.

This preprocessing process will be described in detail later.

After preprocessing, the generation unit 130 generates an integral image of the foreground region, applies the integral image to the approximated Hessian detector to extract at least one feature point, and generates a descriptor by applying the SURF algorithm to the at least one feature point (824).

The descriptor may be compared with the database by the search unit 140 so that the similarity with the database may be measured (825).

As described above, an object moving on the ground, such as a stolen vehicle, an escape vehicle, or a wild animal, or an illegal fishing vessel at sea, is first captured and transmitted by the low-resolution wide-area camera 300; when an object suspected of being a tracking target is found in the wide-area camera image, it is then captured and transmitted by the high-resolution tracking camera 400, so that identification and tracking of moving objects and of objects changing in real time become efficient.

The similarity measurement can extract matched results using the multi-threshold method described above. When the multi-threshold method is used, a tree can be constructed for both the case where duplicate results exist across the threshold values and the case where they do not, and the accuracy of the matching result is increased by verifying image matching with the tree.

FIG. 11 is a flowchart showing the preprocessing method of the image processing method for identifying an object from an image captured by the wide-area camera 300 or the tracking camera 400 of the UAV 10.

Since images photographed by the wide-area camera 300 or the tracking camera 400 of the unmanned aerial vehicle 10 are taken from a high altitude, image quality and size problems arise; the present invention therefore requires a preprocessing process that performs automatic image enhancement.

Upon receiving the input image as described above, the foreground region can be extracted from it using the GrabCut algorithm (920). The extracted foreground region is normalized to a previously designated size by the extraction unit 110 (930).

After the background and foreground are separated from the input image captured and provided by the wide-area camera 300 or the tracking camera 400 of the UAV 10, an automatic image enhancement process is performed on the extracted foreground region (940).

Automatic image enhancement is performed because feature points are difficult to extract when the foreground region is dark. First, the representative brightness of the extracted foreground region is estimated (950).

When the representative brightness value of the estimated foreground region is 0.4 or less, the preprocessing unit 120 can brighten the foreground-region image by performing brightness remapping (960).

The coordinate information acquired by the GPS antenna 30 mounted on the UAV 10 and the altitude information acquired by the altimeter 40 are transmitted to the image processing apparatus 100, so that the coordinates of the tracked object can be confirmed and tracking can be performed more effectively.

That is, the distance from the center of the image input to the image processing apparatus 100 to the object to be tracked is calculated in consideration of the image scale determined by the altitude information, and this offset, combined with the GPS coordinates of the UAV, confirms the coordinates of the object to be tracked.
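
A sketch of this coordinate calculation is given below, assuming a nadir-pointing camera (the gimbal keeps it vertical) and a flat-earth approximation near the UAV; the focal-length-in-pixels parameter and the conversion constants are assumptions, since the patent states only that image scale and altitude are combined with the GPS coordinates.

```python
import math

def object_ground_position(uav_lat, uav_lon, altitude_m, dx_px, dy_px, focal_px):
    # Ground distance covered by one pixel (the image scale) for a
    # nadir-pointing camera at this altitude; focal_px is the focal
    # length expressed in pixels (an assumed camera parameter).
    gsd = altitude_m / focal_px
    east_m = dx_px * gsd
    north_m = -dy_px * gsd            # image y grows downward
    # Metres to degrees with a flat-earth approximation near the UAV.
    dlat = north_m / 111_320.0
    dlon = east_m / (111_320.0 * math.cos(math.radians(uav_lat)))
    return uav_lat + dlat, uav_lon + dlon
```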

The apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on it, and may access, store, manipulate, process, and generate data in response to the execution of software. For ease of understanding, the processing device is sometimes described as a single unit, but those skilled in the art will recognize that it may comprise a plurality of processing elements and/or a plurality of types of processing elements. For example, it may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command it independently or collectively. The software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from its spirit or essential characteristics. Therefore, the embodiments disclosed herein are intended not to limit but to illustrate the technical idea of the present invention. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of their equivalents should be construed as falling within the scope of the present invention.

10: unmanned aerial vehicle 20: control box
30: GPS antenna 40: altimeter
50: remote control receiver 60: information transmission antenna
100: image processing apparatus 110: extraction unit
120: preprocessing unit 130: generation unit
140: search unit 210: first processing unit
220: second processing unit 230: third processing unit
300: wide-area camera 400: tracking camera
500: wide-area camera gimbal 600: tracking camera gimbal

Claims (1)

1. An image processing system for integrally managing image information changing in real time, the system comprising:
an unmanned aerial vehicle including an aircraft body, a plurality of propeller supports coupled radially to the edges of the aircraft body, a plurality of propeller motors mounted on the outer ends of the propeller supports, a plurality of propellers coupled to the motor shafts of the propeller motors, a landing mechanism installed at a lower portion of the aircraft body, a control box mounted on the upper surface of the aircraft body and containing a horizontal sensing sensor and a control unit, a GPS antenna for confirming the flight position, an altimeter for checking the flight altitude, a remote control receiver for remote control, and an information transmission antenna;
a wide-area camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor;
a full-HD-class wide-area camera mounted on the pitching operation unit of the wide-area camera gimbal;
a tracking camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor;
a UHD-class tracking camera mounted on the pitching operation unit of the tracking camera gimbal;
an extraction unit for extracting a foreground region by separating background and foreground from an input image captured and provided by the wide-area camera and the tracking camera;
a generation unit including a first processing unit for generating an integral image of the foreground region, a second processing unit for extracting at least one feature point by applying the integral image to an approximated Hessian detector, and a third processing unit for generating a descriptor by applying the SURF algorithm to the at least one feature point;
a retrieval unit for retrieving information of the object by comparing the descriptor with a database; and
a preprocessing unit for performing preprocessing, including at least one of noise removal and image brightness remapping, on the foreground region to enhance the ease and accuracy of descriptor generation,
wherein the control unit in the control box (20) controls the yawing motor (520), the rolling motor (540), and the pitching motor (560) according to the detection signal of the horizontal sensing sensor (S) so that the yawing operation of the yawing operation unit (530), the rolling operation of the rolling operation unit (550), and the pitching operation of the pitching operation unit (570) keep the lower surface of the pitching operation unit (570), on which the wide-area camera (300) is mounted, always horizontal, and controls the propeller motors (13) according to the remote control signal received by the remote control receiver (50), and
wherein the tracking camera gimbal (600) is configured to be controlled according to a remote control signal received by the remote control receiver.
KR1020160123246A 2016-09-26 2016-09-26 Image processing system for integrated management of image information changing in real time KR101723028B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160123246A KR101723028B1 (en) 2016-09-26 2016-09-26 Image processing system for integrated management of image information changing in real time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160123246A KR101723028B1 (en) 2016-09-26 2016-09-26 Image processing system for integrated management of image information changing in real time

Publications (1)

Publication Number Publication Date
KR101723028B1 true KR101723028B1 (en) 2017-04-07

Family

ID=58583509

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160123246A KR101723028B1 (en) 2016-09-26 2016-09-26 Image processing system for integrated management of image information changing in real time

Country Status (1)

Country Link
KR (1) KR101723028B1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010000107A (en) * 2000-04-28 2001-01-05 이종법 System tracking and watching multi moving object
JP2006033793A (en) * 2004-06-14 2006-02-02 Victor Co Of Japan Ltd Tracking video reproducing apparatus
KR101417498B1 (en) 2012-12-21 2014-07-08 한국항공우주연구원 Video processing apparatus and method using the image from uav
KR101598411B1 (en) * 2015-10-20 2016-02-29 제주대학교 산학협력단 Air craft gimbal system for 3demensioins photographic survey

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Performance Measurement and Analysis of an Improved Speeded Up Robust Features (SURF) Algorithm for Mobile Terminals *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200062609A (en) * 2018-11-27 2020-06-04 소프트온넷(주) OPTIMIZATION SYSTEM AND METHOD FOR AIR CARGO LOADING ON ULDs
KR102149357B1 (en) * 2018-11-27 2020-08-31 소프트온넷(주) OPTIMIZATION SYSTEM AND METHOD FOR AIR CARGO LOADING ON ULDs

Similar Documents

Publication Publication Date Title
Unlu et al. Using shape descriptors for UAV detection
US9177481B2 (en) Semantics based safe landing area detection for an unmanned vehicle
WO2020020472A1 (en) A computer-implemented method and system for detecting small objects on an image using convolutional neural networks
US9031285B2 (en) Detection of floating objects in maritime video using a mobile camera
KR101417498B1 (en) Video processing apparatus and method using the image from uav
Wang et al. Machine learning-based ship detection and tracking using satellite images for maritime surveillance
CN112364843A (en) Plug-in aerial image target positioning detection method, system and equipment
KR102069694B1 (en) Apparatus and method for recognition marine situation based image division
US9892340B2 (en) Method for classifying objects in an imaging surveillance system
Liu et al. Vehicle detection from aerial color imagery and airborne LiDAR data
CN115327568A (en) Unmanned aerial vehicle cluster real-time target identification method and system based on PointNet network and map construction method
EP3044734B1 (en) Isotropic feature matching
KR101723028B1 (en) Image processing system for integrated management of image information changing in real time
Delleji et al. An Improved YOLOv5 for Real-time Mini-UAV Detection in No Fly Zones.
CN112734788B (en) High-resolution SAR aircraft target contour extraction method, system, storage medium and equipment
CN112329729B (en) Small target ship detection method and device and electronic equipment
Kaimkhani et al. UAV with Vision to Recognise Vehicle Number Plates
CN113869163A (en) Target tracking method and device, electronic equipment and storage medium
CN113763408A (en) Method for rapidly identifying aquatic weeds in water through images in sailing process of unmanned ship
Kim et al. Object detection algorithm for unmanned surface vehicle using faster R-CNN
KR102135725B1 (en) Automatic landing control device and operation method of the same
Kerdvibulvech Hybrid model of human hand motion for cybernetics application
Majidi et al. Land Cover Boundary Extraction in Rural Aerial Videos.
Cafaro et al. Towards Enhanced Support for Ship Sailing
CN115690767B (en) License plate recognition method, license plate recognition device, unmanned aerial vehicle and storage medium

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant