KR101723028B1 - Image processing system for integrated management of image information changing in real time - Google Patents
- Publication number
- KR101723028B1 · KR1020160123246A · KR20160123246A
- Authority
- KR
- South Korea
- Prior art keywords
- yawing
- image
- motor
- unit
- pitching
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
- B64C39/024—Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- B64C2201/108—
-
- B64C2201/127—
Abstract
The present invention relates to an image processing system that integrally manages image information changing in real time. A low-resolution wide-area camera and a high-resolution tracking camera are mounted on an unmanned aerial vehicle so that moving objects on the ground or at sea can be tracked. When a suspicious object is found using the real-time changing image information from the wide-area camera, it is tracked with the high-resolution tracking camera, so that identification and tracking of moving, real-time changing objects are performed effectively.
Description
[0001] The present invention relates to image processing systems, and more particularly to an image processing system that integrally manages image information changing in real time, so that real-time identification and tracking of objects moving on the ground or at sea is performed effectively.
Generally, moving targets on the ground, such as stolen vehicles, escape vehicles, or wild animals, have been tracked from pursuit vehicles or helicopters.
However, in the case of tracking using a vehicle, tracking is often impossible because of the amount of traffic on the road.
Tracking by helicopter is possible regardless of road traffic, but because helicopters in Korea are kept at separate airfields, it takes a long time for a helicopter to take off and begin tracking, so tracking is delayed.
Meanwhile, in Korea, illegal fishing by Chinese vessels in the West Sea has become serious enough to raise concerns not only about reduced domestic catches but also about depletion of fishery resources.
The Coast Guard has therefore operated a maritime enforcement fleet consisting of large patrol vessels, high-speed assault boats, and helicopters to control illegal fishing. However, because illegal fishing vessels operate mixed in with Korean vessels, identifying them is difficult.
Accordingly, techniques have been developed in which an unmanned aerial vehicle photographs a moving vehicle or wild animal on the ground, or an illegal fishing vessel at sea, and the captured data is post-processed for effective tracking.
As an example, Korean Patent No. 10-1417498 (registered Jul. 8, 2014), "Video processing apparatus and method using the image from UAV" (hereinafter "prior art"), discloses an extraction unit that separates background and foreground from an input image to extract a foreground region, a generating unit that creates a descriptor from the foreground region, a retrieval unit that retrieves information about the object by comparing the descriptor with a database, and a preprocessing unit that performs preprocessing on the foreground region to enhance the ease and accuracy of descriptor creation, thereby enabling illegal fishing control to be performed effectively.
However, the prior art is limited in accurately identifying objects because of the resolution of the camera mounted on the UAV. A high-resolution camera could be used, but its narrow field of view means that when illegal fishing vessels are mixed with hundreds to thousands of Korean vessels, identification remains limited. The same limitation applies on the ground, where many other vehicles travel the same roads as a stolen or escaping vehicle.
Therefore, a technology is required that can easily identify, and effectively track, illegal fishing vessels moving at sea as well as objects moving on the ground such as stolen vehicles, escape vehicles, or wild animals.
Accordingly, an object of the present invention is to provide an image processing system that integrally manages real-time changing image information about objects moving on the ground or at sea, so that real-time identification and tracking of those objects is performed effectively.
According to an aspect of the present invention, there is provided an image processing system for integrally managing image information that changes in real time, comprising: an unmanned aerial vehicle including an aircraft body, a plurality of propeller supports coupled to the edges of the aircraft body, a plurality of propeller motors mounted on the outer ends of the propeller supports, a plurality of propellers coupled to the motor shafts of the propeller motors, a landing mechanism installed at the lower portion of the aircraft body, a control box installed on the upper surface of the aircraft body and containing a horizontal sensing sensor and a control unit, a GPS antenna for confirming the flight position, an altimeter for checking the flight altitude, a remote control receiver for remote control, and an information transmission antenna; a wide-area camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor; a full HD class wide-area camera mounted on the pitching operation unit of the wide-area camera gimbal; a tracking camera gimbal of the same yawing, rolling, and pitching construction; a UHD class tracking camera mounted on the pitching operation unit of the tracking camera gimbal; an extraction unit for extracting a foreground region by separating background and foreground from an input image captured and provided by the wide-area camera and the tracking camera; a generating unit including a first processing unit for generating an integral image of the foreground region, a second processing unit for applying the integral image to an approximated Hessian detector to extract at least one feature point, and a third processing unit for applying the SURF algorithm to the at least one feature point to generate a descriptor; a retrieval unit for retrieving information about the object by comparing the descriptor with a database; and a preprocessing unit for performing preprocessing, including at least one of noise removal and image brightness remapping, on the foreground region to enhance the ease and accuracy of the descriptor generation.
According to the image processing system of the present invention, a low-resolution wide-area camera and a high-resolution tracking camera are mounted on the unmanned aerial vehicle, and image information changing in real time is integrally managed. When a suspicious object on the ground or at sea is found as a tracking target using the real-time changing image information, it is tracked with the high-resolution tracking camera, so that identification and tracking of moving, real-time changing objects are performed effectively.
1 to 11 show a preferred embodiment of an image processing system for integrally managing image information changing in real time according to the present invention,
1 is a block diagram of an image processing apparatus for identifying an object from an image captured by a UAV;
2 is a block diagram of a generating unit included in an image processing apparatus for identifying an object from an image taken by an unmanned aerial vehicle,
3 is a perspective view of an unmanned aerial vehicle equipped with a wide-area camera and a tracking camera,
4 is an exploded perspective view showing a mounting structure of a wide-area camera and a tracking camera,
5 is a drawing of an embodiment for extracting an object from an image processing apparatus for identifying an object from an image taken by an unmanned aerial vehicle,
FIG. 6 is a view of an embodiment for performing brightness remapping in a preprocessing process of an image processing apparatus for identifying an object from an image taken by an unmanned aerial vehicle,
FIG. 7 is a drawing of an embodiment for performing feature point extraction in an image processing apparatus for identifying an object from an image taken by an unmanned air vehicle;
8 is a view of an embodiment for measuring similarity using multiple thresholds in an image processing apparatus for identifying an object from an image taken by an unmanned aerial vehicle,
9 is a diagram of an embodiment for measuring similarity using a tree in an image processing apparatus for identifying an object from an image taken by an unmanned aerial vehicle,
10 is a flowchart of an image processing method for identifying an object from an image taken by an unmanned air vehicle,
11 is a flowchart showing a preprocessing method of an image processing method for identifying an object from an image taken by an unmanned aerial vehicle.
Hereinafter, an image processing system for integrally managing image information changing in real time according to the present invention will be described in detail with reference to the preferred embodiments illustrated in the accompanying drawings.
1 to 11 show a preferred embodiment of an image processing system for integrally managing image information changing in real time according to the present invention.
The image processing system for integrally managing image information changing in real time according to the present embodiment includes an unmanned aerial vehicle 10, a wide-area camera gimbal 500 carrying a wide-area camera 300, a tracking camera gimbal 600 carrying a tracking camera 400, and an image processing apparatus 100.
An image photographed using the wide-area camera 300 and the tracking camera 400 is processed by the image processing apparatus 100 as described below.
The UAV 10 includes an aircraft body, propeller supports, propeller motors 13, propellers, a landing mechanism, a control box 20, a GPS antenna 30, an altimeter 40, a remote control receiver 50, and an information transmission antenna 60.
The control unit built into the control box 20 controls the yawing motor 520, the rolling motor 540, and the pitching motor 560 according to the detection signal of the horizontal sensing sensor S, so that the lower end of the pitching operation unit 570 on which the wide-area camera 300 is mounted is always kept horizontal.
In addition, the control unit built into the control box 20 controls the propeller motors 13 in accordance with the remote control signal received by the remote control receiver 50.
The control vehicle or control vessel can search for the photographed vehicle, animal, or vessel type based on the previously constructed vehicle, wildlife, and vessel database image information, and the retrieved information is transmitted to the operator of the UAV 10.
The system includes an image processing apparatus 100 for identifying an object from the captured image.
As shown in FIG. 2, the generating unit 130 of the image processing apparatus 100 includes a first processing unit 210, a second processing unit 220, and a third processing unit 230.
5 is a view showing an embodiment of extracting a foreground region in the image processing apparatus, which identifies an object from an image captured by the wide-area camera 300 or the tracking camera 400.
As shown in FIG. 5 (a), in order to recognize an object in an image photographed by the wide-area camera 300, the background must first be removed so that the foreground region containing the object can be extracted.
In this case, the GrabCut algorithm is used to remove the background part so that the object can be extracted from the foreground part. The GrabCut algorithm is not fully automatic: the user must set a rectangular window around the area of the object.
Therefore, the user should set a rectangular window around the foreground area to be extracted, like the rectangle drawn around the object in FIG. 5 (a). The object in the foreground region extracted using the GrabCut algorithm is shown in FIG. 5 (b).
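The rectangular-window workflow can be sketched as below. This is a toy stand-in for GrabCut, not the real algorithm (which iterates graph cuts over Gaussian mixture color models): here "foreground" is simply whatever differs strongly from the estimated background color inside the user's rectangle. The function name `extract_foreground` and the tolerance value are illustrative assumptions.

```python
import numpy as np

def extract_foreground(image, rect, tol=30):
    """Toy stand-in for GrabCut: the user supplies a rectangle (x, y, w, h)
    around the object; pixels inside it that differ from the estimated
    background color by more than `tol` are kept as foreground."""
    x, y, w, h = rect
    # Mark the user's rectangle; everything outside it is assumed background.
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True
    # Estimate the background color from pixels outside the rectangle.
    bg_color = np.median(image[~mask].reshape(-1, image.shape[2]), axis=0)
    # Inside the rectangle, keep pixels far from the background color.
    dist = np.linalg.norm(image.astype(float) - bg_color, axis=2)
    fg_mask = mask & (dist > tol)
    result = np.zeros_like(image)
    result[fg_mask] = image[fg_mask]
    return result, fg_mask

# Synthetic 60x60 image: gray background with a bright 20x20 "vessel".
img = np.full((60, 60, 3), 50, dtype=np.uint8)
img[20:40, 20:40] = (200, 180, 160)
cut, mask = extract_foreground(img, rect=(15, 15, 30, 30))
```

A real implementation would instead call OpenCV's `cv2.grabCut` with the `cv2.GC_INIT_WITH_RECT` mode and the same user rectangle.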
6 is a diagram illustrating brightness remapping in the preprocessing process of the image processing apparatus 100.
Feature points used for recognition are extracted from the extracted object. However, when the image of the object is dark, few feature points are extracted, which may degrade recognition performance.
To minimize this problem, the brightness of the extracted object image can be estimated, and a dark image can be automatically remapped to be brighter. The brightness is estimated using a cumulative histogram (CI) of the input image and a reference cumulative histogram (CR). The estimated range is 0.0 to 1.0; the closer to 0.0, the darker the object.
When the representative brightness of the image is less than 0.4, brightness can be improved in the brightness channel by using brightness remapping.
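The brightness check and the 0.4 threshold can be sketched as below. The exact CI/CR cumulative-histogram comparison and the remapping curve are not given in the text, so this sketch substitutes a normalized mean for the brightness estimate and a gamma lift for the remapping; both substitutions are assumptions, and all names are illustrative.

```python
import numpy as np

def representative_brightness(gray):
    """Estimate brightness in [0.0, 1.0]; closer to 0.0 means darker.
    (The patent compares a cumulative histogram CI of the input against a
    reference cumulative histogram CR; a plain normalized mean is used
    here as a simplifying assumption.)"""
    return float(gray.mean()) / 255.0

def remap_brightness(gray, gamma=0.5):
    """Gamma-lift the brightness channel to brighten a dark image."""
    norm = gray.astype(float) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

def preprocess(gray, threshold=0.4):
    """Apply brightness remapping only when the image is judged dark."""
    if representative_brightness(gray) < threshold:
        return remap_brightness(gray)
    return gray

dark = np.full((32, 32), 25, dtype=np.uint8)  # brightness ~0.1 -> remapped
out = preprocess(dark)
```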
7 is a diagram for explaining the process of extracting feature points in the image processing apparatus 100.
After the preprocessing step by the preprocessing unit 120, feature points are extracted from the foreground region.
The feature points can be extracted based on a SURF algorithm that locally extracts feature points. The SURF algorithm has the advantages of faster speed and similar performance compared to the existing SIFT (Scale Invariant Feature Transform) algorithm.
FIG. 7 shows the entire image of the foreground region with the feature points extracted using the SURF algorithm. Each extracted feature point may be described by, for example, a 64-dimensional descriptor, which is used for recognition.
In order to generate a descriptor as shown in FIG. 7, a feature point of a foreground region (or a moving object) should be extracted. The feature point may be extracted using an approximated Hessian detector after generating an integral image. The descriptor can be generated using the SURF algorithm for the extracted feature points using the approximated Hessian detector.
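The integral image produced by the first processing unit is what makes the approximated Hessian detector fast: any box-filter response reduces to four array lookups, independent of box size. A minimal NumPy sketch (the helper names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] holds the sum of img[0:y, 0:x], zero-padded
    on the top and left so box sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of the h x w box with top-left corner (y, x), computed in O(1)
    via four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# 2x2 box at (1, 1) covers elements 5, 6, 9, 10.
```

SURF evaluates box-filter approximations of second-order Gaussian derivatives with exactly such constant-time box sums at several scales, keeping local maxima of the approximated Hessian determinant as feature points.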
8 illustrates a method of measuring similarity using multiple thresholds in the image processing apparatus 100.
The search can use multiple thresholds; for example, a query image is searched with three threshold values (0.001, 0.004, 0.0001). The lower the threshold (0.0001), the more feature points are found; the higher the threshold (0.001), the fewer.
Multiple thresholds can thus be used to extract a set of candidates for each threshold and then extract the candidates common to all of them.
As shown in FIG. 8 (a), the matching-error results T1, T2, and T3 at the three thresholds give R1 = T1 ∩ T2 ∩ T3, and the line-count results likewise give R2 = T1 ∩ T2 ∩ T3; the intersection of R1 and R2 is the final result R. The data in the set R therefore represents the intersection of the matching-error result and the line-count result.
In the case of FIG. 8 (b), the matching-error and line-count results are shown for each threshold, and the final overlap result R is {1, 4}.
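The multi-threshold selection can be expressed with plain set operations. The candidate ID sets below are illustrative, chosen so the final result matches the {1, 4} example of FIG. 8 (b); the function name is an assumption.

```python
def multi_threshold_result(error_results, line_results):
    """Final result R: candidates that survive every threshold under both
    the matching-error criterion (R1) and the line-count criterion (R2)."""
    r1 = set.intersection(*map(set, error_results))  # matching-error overlap
    r2 = set.intersection(*map(set, line_results))   # line-count overlap
    return r1 & r2

# Illustrative candidate sets for thresholds 0.001, 0.004, 0.0001.
errors = [{1, 2, 4}, {1, 4, 7}, {1, 3, 4, 9}]
lines = [{1, 4, 5}, {1, 2, 4}, {1, 4, 8}]
R = multi_threshold_result(errors, lines)  # {1, 4}, as in FIG. 8 (b)
```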
9 is a diagram of an embodiment in which similarity is measured using a tree in the image processing apparatus 100.
The tree structure is used to verify the relevance of the multi-threshold result to the query image.
When multiple thresholds are used, it is possible that no image appears in the combined result R. Such an outcome may or may not mean that the database actually lacks images related (or similar) to the query image.
For example, an image that appears in two of the threshold results but not in the third is not selected for the combined result R, even though it may exist in the database.
Thus, even when a result is derived using the multi-threshold method, accuracy can be degraded. A tree structure that checks the relevance of each case to the query image may therefore be used to improve accuracy.
As shown in FIG. 9, the tree structure distinguishes two cases over the three threshold values, 'duplicate results exist' and 'no duplicate results exist', and improves accuracy by verifying the relevance of each case to the query image.
The tree first branches on whether a result common to all three thresholds exists (ThResult-123 != 0) or does not exist (ThResult-123 == 0).
In the tree, if ThResult-13 == 0 (no overlap between the 0.001 and 0.0001 results) and ThResult-23 == 0 (no overlap between the 0.004 and 0.0001 results), then no candidate exists and the message "Not Matching" may be output.
Conversely, even when ThResult-123 != 0 in FIG. 9 (a result common to all three thresholds exists), a more accurate result is obtained by verifying whether that result is correct rather than outputting it directly.
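The branching above can be sketched as a small decision function. `th1`, `th2`, and `th3` stand for the candidate sets at thresholds 0.001, 0.004, and 0.0001, and the pairwise overlap names mirror the ThResult labels in FIG. 9. How relevance to the query is finally verified is not specified in the text, so it is left as a callback; all names are illustrative.

```python
def tree_decision(th1, th2, th3, verify):
    """Decide matching via the duplicate-result tree.
    th1/th2/th3: candidate sets for thresholds 0.001, 0.004, 0.0001.
    verify: callback that re-checks a candidate set against the query."""
    th123 = th1 & th2 & th3  # ThResult-123: common to all three thresholds
    if th123:                # duplicates exist: verify rather than trust blindly
        return verify(th123)
    th13 = th1 & th3         # ThResult-13: 0.001 vs 0.0001 overlap
    th23 = th2 & th3         # ThResult-23: 0.004 vs 0.0001 overlap
    if not th13 and not th23:
        return "Not Matching"      # nothing overlaps anywhere
    return verify(th13 | th23)     # partial overlap: verify the union

result = tree_decision({1, 2}, {3}, {4}, verify=lambda s: sorted(s))
```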
10 is a flowchart of an image processing method for identifying an object from an image captured by the wide-area camera 300 and the tracking camera 400.
First, an input image, which is an object image photographed by the wide-area camera 300, is received.
Since the object image photographed by the low-resolution wide-area camera 300 covers a wide area, many objects may appear in a single image.
Once the input image is received, the extraction unit 110 extracts the foreground region from the input image using the GrabCut algorithm.
As described above, the GrabCut algorithm removes the background from an image and extracts only the object for comparison with the database.
In this case, using the GrabCut algorithm, when the user designates the area of an object in an image, the foreground area can be extracted from the background within that area. In addition, since the size of the input image may differ from that of the images in the database, the extracted foreground region can be normalized to a predetermined size.
For better search performance on the foreground image, a preprocessing process (813) may automatically enhance the foreground area from which the background has been removed. This preprocessing enhances the image because it is difficult to extract feature points when the image is dark.
This preprocessing process will be described in detail later.
The generating unit 130 then generates a descriptor for the foreground region. The descriptor may be compared with the database by the retrieval unit 140 to retrieve information about the object.
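Comparing descriptors against the database is typically a nearest-neighbor search in descriptor space. The patent does not name a matching rule, so this sketch assumes 64-dimensional SURF descriptors, Euclidean distance, and a Lowe-style ratio test to discard ambiguous matches; all names and the ratio value are illustrative.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """For each 64-D query descriptor, find the nearest database descriptor;
    accept the match only if it beats the second-nearest by `ratio`."""
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)  # distance to every DB entry
        order = np.argsort(d)
        best, second = order[0], order[1]
        if d[best] < ratio * d[second]:
            matches.append((qi, int(best)))
        # else: ambiguous match, discarded
    return matches

rng = np.random.default_rng(0)
db = rng.normal(size=(10, 64))                # mock database descriptors
q = db[3] + rng.normal(scale=0.01, size=64)   # near-copy of DB entry 3
m = match_descriptors(q[None, :], db)
```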
When an object suspected to be a tracking target is found through the above process, the operator operates the high-resolution tracking camera 400 to photograph the object.
An input image that is an object image captured by the tracking camera 400 is then received.
Once the input image is received, the extraction unit 110 extracts the foreground region from the input image using the GrabCut algorithm.
As described above, the GrabCut algorithm removes the background from an image and extracts only the object for comparison with the database.
In this case, using the GrabCut algorithm, when the user designates the area of an object in an image, the foreground area can be extracted from the background within that area. In addition, since the size of the input image may differ from that of the images in the database, the extracted foreground region can be normalized to a predetermined size.
For better retrieval performance on the foreground image, a preprocessing process (823) may automatically enhance the foreground area from which the background has been removed. This preprocessing enhances the image because it is difficult to extract feature points when the image is dark.
This preprocessing process will be described in detail later.
The generating unit 130 then generates a descriptor for the foreground region. The descriptor may be compared with the database by the retrieval unit 140 to retrieve information about the object.
As described above, an object moving on the ground, such as a stolen vehicle, an escape vehicle, or a wild animal, or an illegal fishing vessel at sea, is first captured and transmitted by the wide-area camera 300; a suspicious object is then photographed by the tracking camera 400, and its similarity to the database is measured.
The similarity measurement can extract the matched result using the multi-threshold method described above. When the multi-threshold method is used, a tree can be constructed covering both the case where duplicate results exist across the thresholds and the case where they do not, and the accuracy of the matching result is increased by verifying the image matching with the tree.
11 is a flowchart showing the preprocessing step of the image processing method for identifying an object from an image captured by the wide-area camera 300 or the tracking camera 400.
Since the image photographed through the wide-area camera 300 or the tracking camera 400 may be dark, a preprocessing step is performed before feature extraction.
Upon receiving the input image, the foreground region can be extracted (920) from the input image using the GrabCut algorithm. The extracted foreground region is then normalized to a previously designated size by the extraction unit 110 (930).
After the background and foreground are separated from the input image captured and provided by the wide-area camera 300 and the tracking camera 400, and the foreground region is extracted, the image may be automatically enhanced.
The automatic image enhancement is performed because it is difficult to extract feature points when the foreground region is dark. First, the representative brightness of the extracted foreground region is estimated (950).
When the estimated representative brightness value of the foreground region is 0.4 or less, the brightness of the foreground region is improved by brightness remapping.
The coordinate information acquired by the image processing apparatus 100 can be used to control the tracking camera gimbal 600 so that the tracked object remains at the center of the image.
That is, when the distance between the center of the image input to the tracking camera 400 and the center of the tracked object exceeds a set range, the gimbal is driven to bring the object back toward the center of the image.
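The recentering behavior can be sketched as a proportional controller: when the object's pixel offset from the image center leaves a dead band, pan and tilt corrections proportional to the offset are commanded to the gimbal motors. The gains, dead band, and sign conventions below are assumptions, not values from the patent.

```python
def centering_command(img_w, img_h, obj_x, obj_y, dead_band=20, gain=0.05):
    """Return (pan, tilt) corrections (units and sign convention assumed)
    that drive the tracked object back toward the image center.
    Inside the dead band no command is issued, to avoid jitter."""
    dx = obj_x - img_w / 2.0  # +: object right of center -> pan right
    dy = obj_y - img_h / 2.0  # +: object below center    -> tilt down
    if dx * dx + dy * dy <= dead_band * dead_band:
        return (0.0, 0.0)
    return (gain * dx, gain * dy)

# Object at (1200, 500) in a 1920x1080 frame: pan right, tilt slightly up.
pan, tilt = centering_command(1920, 1080, 1200, 500)
```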
The apparatus described above may be implemented as hardware components, software components, and/or a combination of the two. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system, and may access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that it may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively. The software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine code produced by a compiler as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from its spirit or essential characteristics. Therefore, the embodiments disclosed herein are intended not to limit but to illustrate the technical idea of the present invention. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents thereof should be construed as falling within the scope of the present invention.
10: unmanned aircraft 20: control box
30: GPS antenna 40: altimeter
50: remote control receiver 60: video wire antenna
100: image processing apparatus 110: extraction unit
120: preprocessing unit 130: generating unit
140: searching unit 210: first processing unit
220: second processing section 230: third processing section
300: Wide area camera 400: Tracking camera
500: Wide-area camera gimbal 600: Tracking camera gimbal
Claims (1)
An unmanned aerial vehicle comprising an aircraft body, a plurality of propeller supports coupled to the edges of the aircraft body, a plurality of propeller motors mounted on the outer ends of the propeller supports, a plurality of propellers coupled to the motor shafts of the propeller motors, a landing mechanism installed at the lower portion of the aircraft body, a control box having a horizontal sensing sensor and a control unit installed on the upper surface of the aircraft body, a GPS antenna for confirming the flight position, an altimeter for checking the flight altitude, a remote control receiver for remote control, and an information transmission antenna;
A wide-area camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor;
A full HD class wide-area camera mounted on the pitching operation unit of the wide-area camera gimbal;
A tracking camera gimbal including a yawing motor, a yawing operation unit yawed by the yawing motor, a rolling motor mounted on the yawing operation unit, a rolling operation unit rolled by the rolling motor, a pitching motor mounted on the rolling operation unit, and a pitching operation unit pitched by the pitching motor;
A UHD class tracking camera mounted on the pitching operation unit of the tracking camera gimbal;
An extraction unit for extracting a foreground region by separating a background and a foreground from an input image captured and provided by the wide area camera and the tracking camera;
A generating unit including a first processing unit for generating an integral image of the foreground region, a second processing unit for applying the integral image to an approximated Hessian detector to extract at least one feature point, and a third processing unit for applying the SURF algorithm to the at least one feature point to generate a descriptor;
A retrieval unit for retrieving information of the object by comparing the descriptor with a database; And
And a preprocessing unit for performing preprocessing, including at least one of noise removal and image brightness remapping, on the foreground region to enhance the ease and accuracy of the descriptor generation,
Wherein the control unit built into the control box (20) controls the yawing motor (520), the rolling motor (540), and the pitching motor (560) according to the detection signal of the horizontal sensing sensor (S), so that the lower end of the pitching operation unit (570) on which the wide-area camera (300) is mounted is always kept horizontal by the yawing, rolling, and pitching operations, and controls the propeller motors (13) in accordance with the remote control signal received by the remote control receiver (50); and
Wherein the tracking camera gimbal (600) is configured to be controlled according to a remote control signal received by the remote control receiver.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160123246A KR101723028B1 (en) | 2016-09-26 | 2016-09-26 | Image processing system for integrated management of image information changing in real time |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160123246A KR101723028B1 (en) | 2016-09-26 | 2016-09-26 | Image processing system for integrated management of image information changing in real time |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101723028B1 true KR101723028B1 (en) | 2017-04-07 |
Family
ID=58583509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160123246A KR101723028B1 (en) | 2016-09-26 | 2016-09-26 | Image processing system for integrated management of image information changing in real time |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101723028B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200062609A (en) * | 2018-11-27 | 2020-06-04 | 소프트온넷(주) | OPTIMIZATION SYSTEM AND METHOD FOR AIR CARGO LOADING ON ULDs |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010000107A (en) * | 2000-04-28 | 2001-01-05 | 이종법 | System tracking and watching multi moving object |
JP2006033793A (en) * | 2004-06-14 | 2006-02-02 | Victor Co Of Japan Ltd | Tracking video reproducing apparatus |
KR101417498B1 (en) | 2012-12-21 | 2014-07-08 | 한국항공우주연구원 | Video processing apparatus and method using the image from uav |
KR101598411B1 (en) * | 2015-10-20 | 2016-02-29 | 제주대학교 산학협력단 | Air craft gimbal system for 3demensioins photographic survey |
-
2016
- 2016-09-26 KR KR1020160123246A patent/KR101723028B1/en active IP Right Grant
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010000107A (en) * | 2000-04-28 | 2001-01-05 | 이종법 | System tracking and watching multi moving object |
JP2006033793A (en) * | 2004-06-14 | 2006-02-02 | Victor Co Of Japan Ltd | Tracking video reproducing apparatus |
KR101417498B1 (en) | 2012-12-21 | 2014-07-08 | 한국항공우주연구원 | Video processing apparatus and method using the image from uav |
KR101598411B1 (en) * | 2015-10-20 | 2016-02-29 | 제주대학교 산학협력단 | Air craft gimbal system for 3demensioins photographic survey |
Non-Patent Citations (1)
Title |
---|
Performance measurement and analysis of an improved Speeded Up Robust Features (SURF) algorithm for mobile terminals * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200062609A (en) * | 2018-11-27 | 2020-06-04 | 소프트온넷(주) | OPTIMIZATION SYSTEM AND METHOD FOR AIR CARGO LOADING ON ULDs |
KR102149357B1 (en) * | 2018-11-27 | 2020-08-31 | 소프트온넷(주) | OPTIMIZATION SYSTEM AND METHOD FOR AIR CARGO LOADING ON ULDs |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Unlu et al. | Using shape descriptors for UAV detection | |
US9177481B2 (en) | Semantics based safe landing area detection for an unmanned vehicle | |
WO2020020472A1 (en) | A computer-implemented method and system for detecting small objects on an image using convolutional neural networks | |
US9031285B2 (en) | Detection of floating objects in maritime video using a mobile camera | |
KR101417498B1 (en) | Video processing apparatus and method using the image from uav | |
Wang et al. | Machine learning-based ship detection and tracking using satellite images for maritime surveillance | |
CN112364843A (en) | Plug-in aerial image target positioning detection method, system and equipment | |
KR102069694B1 (en) | Apparatus and method for recognition marine situation based image division | |
US9892340B2 (en) | Method for classifying objects in an imaging surveillance system | |
Liu et al. | Vehicle detection from aerial color imagery and airborne LiDAR data | |
CN115327568A (en) | Unmanned aerial vehicle cluster real-time target identification method and system based on PointNet network and map construction method | |
EP3044734B1 (en) | Isotropic feature matching | |
KR101723028B1 (en) | Image processing system for integrated management of image information changing in real time | |
Delleji et al. | An Improved YOLOv5 for Real-time Mini-UAV Detection in No Fly Zones. | |
CN112734788B (en) | High-resolution SAR aircraft target contour extraction method, system, storage medium and equipment | |
CN112329729B (en) | Small target ship detection method and device and electronic equipment | |
Kaimkhani et al. | UAV with Vision to Recognise Vehicle Number Plates | |
CN113869163A (en) | Target tracking method and device, electronic equipment and storage medium | |
CN113763408A (en) | Method for rapidly identifying aquatic weeds in water through images in sailing process of unmanned ship | |
Kim et al. | Object detection algorithm for unmanned surface vehicle using faster R-CNN | |
KR102135725B1 (en) | Automatic landing control device and operation method of the same | |
Kerdvibulvech | Hybrid model of human hand motion for cybernetics application | |
Majidi et al. | Land Cover Boundary Extraction in Rural Aerial Videos. | |
Cafaro et al. | Towards Enhanced Support for Ship Sailing | |
CN115690767B (en) | License plate recognition method, license plate recognition device, unmanned aerial vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |