WO2021235634A1 - Divot detection system for a golf course and detection method using the same - Google Patents
- Publication number
- WO2021235634A1 (PCT/KR2020/017120)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- divot
- image
- module
- golf course
- reading
- Prior art date
Classifications
- G06N20/00—Machine learning
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/20081—Training; Learning
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Definitions
- The present invention relates to a divot detection system for a golf course and a detection method using the same, and more particularly, to a divot detection system that learns divot objects at a high reading rate by combining a primary reading of divot objects from a multispectral image with a secondary reading from an RGB image, and that detects divot objects in multispectral images of the golf course based on the learned result, and a detection method using the same.
- In general, a drone is an airplane- or helicopter-shaped vehicle that flies along a pre-set route over a wireless link, or that a person controls directly within line of sight. Drones were gradually developed into reconnaissance aircraft as wireless technology advanced, and were also used for reconnaissance and surveillance deep in enemy territory.
- Machine learning is a field of artificial intelligence within computer science that evolved from the study of pattern recognition and computational learning theory.
- Machine learning is a technology that studies and builds algorithms and systems that learn based on empirical data, make predictions, and improve their own performance.
- Machine learning algorithms build specific models to make predictions or decisions based on input data, rather than executing strictly set static program instructions.
- An RGB image has the advantage that a divot can be identified with the naked eye owing to its high resolution, but the accuracy of divot detection drops in areas that are hard to distinguish visually.
- In other words, in some areas a divot cannot be detected in the RGB image at all, whether by the human eye or by an object detection algorithm applied to the RGB image.
- The present invention has been derived to solve the above problems. An object of the present invention is to provide a divot detection system for a golf course, and a detection method using the same, that learns divot objects at a high reading rate by combining a primary reading of divot objects from a multispectral image with a secondary reading from an RGB image, and that detects divot objects with high accuracy in multispectral images of the golf course based on the learned result.
- To this end, a golf course divot detection system includes: an unmanned aerial vehicle equipped with a multispectral camera and an RGB camera that photographs the golf course; and a course keeper server including a machine learning module that learns divot objects using the primary reading result of the multispectral image taken by the unmanned aerial vehicle and the secondary reading result of the RGB image, a communication module that receives the multispectral image taken by the unmanned aerial vehicle, a detection module that detects divot objects in the received multispectral image, and an output module that displays the map coordinates of the detected divot objects.
- the machine learning module may include: an image processing module for separating objects matching at least one of a predefined color, size, and shape from a multispectral image of a golf course;
- Among the separated objects, those satisfying all three criteria are read first as divots and stored in a machine learning DB, while objects matching two or fewer criteria are marked on the RGB image, which is transmitted to an expert system through the communication module; the machine learning module may include a reading module that stores, in the machine learning DB, the objects read second by the expert system as divots or pseudo-divots.
- the pseudo divot may be any one of an area dug up by wild animals to expose soil, an area lost due to a landslide, and an area where grass is partially destroyed.
- the reading module may not display, on the RGB image, an image previously analyzed as not a divot among objects matching two or less of a color, a size, and a shape.
- the image analyzed as not the divot may be any one of a drain hole, fallen leaves, a pond, and a bunker.
- Meanwhile, a divot detection method for a golf course is a method by which a course keeper server for golf course management detects divots, and includes: learning a divot object, by a machine learning module, using the primary reading result of a multispectral image and the secondary reading result of an RGB image; receiving, by a communication module, a multispectral image of the golf course; detecting, by a detection module, a divot object in the received multispectral image; and outputting, by an output module, the map coordinates of the detected divot object.
- Here, the divot object learning step may include: separating, by an image processing module, objects matching at least one of a predefined color, size, and shape from a multispectral image of the golf course; a primary reading step in which a reading module reads the separated objects that satisfy all three criteria as divots and stores them in a machine learning DB; marking, by the reading module, the separated objects matching two or fewer criteria on the RGB image and transmitting the RGB image to an expert system through the communication module; and a secondary reading step in which, when the expert system notifies the reading module of objects additionally annotated as divots or pseudo-divots, the reading module stores the additionally annotated objects in the machine learning DB.
- the pseudo divot may be any one of an area dug up by wild animals to expose soil, an area lost due to a landslide, and an area where grass is partially destroyed.
- the reading module may not display, on the RGB image, an image previously analyzed as not a divot among objects matching two or less of a color, a size, and a shape.
- the image analyzed as not the divot may be any one of a drain hole, fallen leaves, a pond, and a bunker.
- the divot detection system of a golf course provides the following effects.
- The present invention has the effect of learning divot objects at a high reading rate by using the result of a primary reading of divot objects from the multispectral image together with a secondary reading from the RGB image.
- In addition, because the present invention detects divot objects in multispectral images of the golf course based on a result learned at a high reading rate, it has the effect of increasing the accuracy of divot object detection.
- FIG. 1 is a schematic diagram illustrating a divot detection system of a golf course according to an embodiment of the present invention
- FIG. 2 is a block diagram illustrating the configuration of the unmanned aerial vehicle and course keeper server illustrated in FIG. 1.
- FIG. 3 is a block diagram illustrating the configuration of the machine learning module illustrated in FIG. 2.
- FIG. 5 is an RGB image of a golf course taken with an RGB camera.
- FIG. 6 is a flowchart illustrating a divot detection method of a golf course according to a second embodiment of the present invention.
- FIG. 7 is a conceptual diagram illustrating the operation principle of a CNN.
- FIG. 8 is a conceptual diagram illustrating an operation algorithm of an RNN.
- MODULE means a unit that processes a specific function or operation, which may mean hardware or software or a combination of hardware and software.
- first and second may be used to describe various elements, but the elements should not be limited by the terms. The above terms are used only for the purpose of distinguishing one component from another.
- FIG. 1 is a schematic diagram illustrating a divot detection system of a golf course according to a first embodiment of the present invention.
- Referring to FIG. 1, the unmanned aerial vehicle 100 obtains a multispectral image and an RGB image by aerially photographing the vegetation zone 10, and transmits the multispectral image and the RGB image to the course keeper server 200.
- The course keeper server 200 first reads the multispectral image received from the unmanned aerial vehicle 100 against a predetermined standard and, when detailed reading is required, can learn the divot object through a secondary reading of the RGB image corresponding to the multispectral image.
- When the course keeper server 200 receives a multispectral image of the vegetation zone 10 from the unmanned aerial vehicle 100 for divot detection, it detects divot objects in the received multispectral image based on the learning model of the previously learned divot objects, and can provide the map coordinates of the corresponding divot objects to the manager.
- Here, the vegetation zone 10 is a zone in which many managed plants are distributed, and may be a golf course, a garden, a park, and the like. For convenience of explanation below, the vegetation zone 10 is defined as a golf course.
- A divot described in this embodiment may mean a patch of turf torn out by a golf club when a golf ball is struck on a golf course.
- FIG. 2 is a block diagram illustrating the configuration of the unmanned aerial vehicle 100 and course keeper server 200 illustrated in FIG. 1.
- a system for detecting a golf course divot includes an unmanned aerial vehicle 100 and a course keeper server 200 .
- the unmanned aerial vehicle 100 is for aerial photography of a golf course, and may include an unmanned aerial vehicle or a drone.
- Alternatively, the photographing platform may be any one of a manned vehicle, a satellite, a hot air balloon, and a CCTV camera. That is, the unmanned aerial vehicle 100 may be replaced with any device that can photograph the golf course from a certain height.
- the unmanned aerial vehicle 100 includes a multi-spectral camera 110 and an RGB camera 120 .
- the multi-spectral camera 110 acquires a multi-spectral image by photographing a golf course using a multi-spectral sensor.
- Because the multispectral image taken by the multispectral camera 110 extracts image data across various wavelengths, it can reveal what the human eye cannot see, and can therefore be used mainly to grasp the status of the vegetation environment, such as soil and moisture.
- Multispectral images are generally obtained from ten or fewer discrete bands in the visible and mid-infrared regions, and since the band values are stored in discrete form, independent data is generated for each pixel.
- In addition, the image that can be extracted by the multispectral camera 110 differs for each spectral band.
- For example, the blue region (450 to 510 nm) emphasizes the atmosphere and water down to about 50 m deep, the green region (530 to 590 nm) emphasizes vegetation such as trees, and the red region (640 to 670 nm) can be used to distinguish boundaries from trees.
- In particular, a divot object that cannot be found in an RGB image can easily be found in the multispectral image acquired by the multispectral camera 110.
- Because the multispectral image extracts image data across various wavelengths, color differences are clearer than in the RGB image, so divot objects that cannot be detected in the RGB image, such as small or shaded divots, can be detected in the multispectral image.
- In addition, the multispectral image makes it possible to detect spots with a poor vegetation index (e.g., sick or dead grass) even when they are not divot objects, so that such spots can easily be repaired.
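The vegetation-index check mentioned above can be illustrated with a small sketch. The patent does not name a specific index, so this example uses the common NDVI, (NIR - Red) / (NIR + Red), purely as an assumed stand-in; the band arrays and threshold are hypothetical.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for a single pixel.

    Healthy turf reflects strongly in near-infrared and absorbs red light,
    so its NDVI is high; the exposed soil of a divot scar scores low.
    """
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom


def flag_poor_vegetation(nir_band, red_band, threshold=0.4):
    """Return (row, col) indices of pixels whose NDVI falls below the threshold."""
    flagged = []
    for r, (nir_row, red_row) in enumerate(zip(nir_band, red_band)):
        for c, (n, rd) in enumerate(zip(nir_row, red_row)):
            if ndvi(n, rd) < threshold:
                flagged.append((r, c))
    return flagged


# Hypothetical 2x2 reflectance values: only the bottom-right pixel looks like bare soil.
nir_band = [[0.8, 0.7], [0.6, 0.3]]
red_band = [[0.1, 0.2], [0.2, 0.3]]
poor_pixels = flag_poor_vegetation(nir_band, red_band)
```

Flagged pixels could then be grouped into candidate regions for an image processing step to measure.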
- the multispectral camera 110 may acquire a hyperspectral image using a hyperspectral sensor instead of the multispectral sensor.
- Hyperspectral image data is usually imaged in more than 100 consecutive bands over the visible, near-infrared, mid-infrared, and thermal-infrared regions, and the entire spectrum is extracted from each pixel.
- a hyperspectral sensor acquires tens to hundreds of spectral information in a continuous and narrow wavelength range of an object on the ground corresponding to each pixel of an image by dividing incident light.
- The multispectral camera 110 using a hyperspectral sensor captures the inherent optical properties and the absorption and reflection characteristics of each material, and the resulting data is mainly used for identifying land cover, vegetation, and water quality.
- the RGB camera 120 acquires RGB image data through a Red Green Blue (RGB) sensor.
- The RGB method expresses images by mixing red, green, and blue.
- the RGB image photographed by the RGB camera 120 has a high resolution and is easy to check with the naked eye.
- The course keeper server 200 learns divot objects using the results of reading the multispectral image and the RGB image taken by the unmanned aerial vehicle 100, detects divot objects in multispectral images subsequently taken by the unmanned aerial vehicle 100 based on the previously learned model, and displays the map coordinates of the detected divot objects to the manager.
- the course keeper server 200 includes a machine learning module 210 , a communication module 220 , a detection module 230 , and an output module 240 .
- The machine learning module 210 receives the multispectral image and the RGB image taken by the unmanned aerial vehicle 100 through the communication module 220, performs a primary reading of divot objects by matching the multispectral image against a predetermined criterion, and, when detailed reading is required, can learn the divot objects through a secondary reading of the RGB image corresponding to the multispectral image.
- FIG. 3 is a block diagram illustrating the configuration of the machine learning module 210 illustrated in FIG. 2 .
- the machine learning module 210 includes an image processing module 211 and a reading module 212 .
- The image processing module 211 separates, as suspected divot objects, the objects in the multispectral image of the golf course taken by the unmanned aerial vehicle 100 that match at least one of three predefined criteria, namely color, size, and shape.
- the image processing module 211 may set the predefined criterion as a range rather than a single value.
- For example, the image processing module 211 may separate an object whose color corresponds to a yellowish spectral band of 540 nm to 550 nm in the multispectral image as a suspected divot object.
- The image processing module 211 may also separate an object whose size corresponds to 100 mm to 300 mm in the multispectral image as a suspected divot object.
- In addition, the image processing module 211 may separate an object whose ratio of major diameter to minor diameter is 1:0.5 to 1:0.7 in the multispectral image as a suspected divot object.
- Here, the direction of the major diameter may correspond to the direction of play of the golf course.
- In this embodiment, the predefined criteria for separating suspected divot objects in the image processing module 211 have been described as three, but four or more criteria may be defined.
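The three criteria above (color band, size, major/minor diameter ratio) amount to a rule-based pre-filter. Below is a minimal sketch, assuming hypothetical per-object measurements (`peak_band_nm`, `length_mm`, `width_mm`) that an image processing step would supply; it is an illustration, not the patent's implementation.

```python
from dataclasses import dataclass


@dataclass
class Suspect:
    """Hypothetical measurements an image processing step would supply per object."""
    peak_band_nm: float  # dominant spectral band of the object's color
    length_mm: float     # major diameter
    width_mm: float      # minor diameter


def criteria_matched(obj: Suspect) -> int:
    """Count how many of the three predefined criteria the object satisfies."""
    color_ok = 540 <= obj.peak_band_nm <= 550
    size_ok = 100 <= obj.length_mm <= 300
    ratio = obj.width_mm / obj.length_mm if obj.length_mm else 0.0
    shape_ok = 0.5 <= ratio <= 0.7
    return color_ok + size_ok + shape_ok


def triage(objects):
    """All three criteria -> primary divot; one or two -> marked for expert review."""
    divots, review = [], []
    for obj in objects:
        n = criteria_matched(obj)
        if n == 3:
            divots.append(obj)
        elif n >= 1:
            review.append(obj)
    return divots, review
```

An object matching all three criteria is read first as a divot; an object matching one or two is marked on the RGB image for the expert system's secondary reading.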
- the reading module 212 first reads an object (see FIG. 4 ) matching all three criteria among the suspected divot objects separated by the image processing module 211 as a divot and stores it in the machine learning DB.
- In addition, the reading module 212 determines that detailed reading is necessary for objects matching two or fewer criteria among the suspected divot objects separated by the image processing module 211, marks those objects on the RGB image (see FIG. 5), and transmits the RGB image to the expert system through the communication module 220.
- the pseudo-divot may be a place where the soil is exposed by a wild animal, lost due to a landslide, or where the grass is partially destroyed.
- The reading module 212 generates a learning model from the divot objects first read as divots in the multispectral image and the objects read second as divots or pseudo-divots by the expert system in the RGB image, and stores it in the machine learning DB.
- the object secondarily read as a divot or similar divot by the expert system in the RGB image is utilized as basic data when the reading module 212 first reads the divot from the multispectral image, thereby increasing the accuracy in the primary reading.
- The learning model stored in the machine learning DB by the reading module 212 can be used to detect divot objects in multispectral images taken later, when the unmanned aerial vehicle 100 photographs the golf course with the multispectral camera 110.
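The way primary and secondary reading results accumulate into the machine learning DB can be sketched as follows. The `MachineLearningDB` class and its label strings are illustrative assumptions, not the patent's implementation; the labels mirror the three outcomes the text describes (auto-read divot, expert-confirmed divot or pseudo-divot, and objects analyzed as not being divots).

```python
class MachineLearningDB:
    """Illustrative stand-in for the machine learning DB: it accumulates labeled
    examples from both the primary (rule-based) and secondary (expert) readings,
    which together form the training set for the detection model."""

    def __init__(self):
        self.examples = []  # list of (features, label) pairs

    def store_primary(self, features):
        # Objects matching all three criteria are labeled as divots automatically.
        self.examples.append((features, "divot"))

    def store_secondary(self, features, label):
        # Expert-confirmed labels from the RGB image review.
        if label not in ("divot", "pseudo-divot", "not-divot"):
            raise ValueError("unexpected label: " + label)
        self.examples.append((features, label))
```

Expert labels fed back this way become the basic data that raises the accuracy of later primary readings.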
- Meanwhile, for an object that matches two or fewer of the three predefined criteria (color, size, shape) and is transmitted to the expert system but is then analyzed as not being a divot, the reading module 212 can create a learning model corresponding to that condition and store it in the machine learning DB.
- the object analyzed as not a divot by the reading module 212 may be any one of a drain hole, fallen leaves, a pond, and a bunker.
- Thereafter, the reading module 212 compares objects matching two or fewer of the three predefined criteria (color, size, shape) with images previously analyzed as not being divots, and does not mark objects matching those images on the RGB image.
- That is, among the objects separated by the image processing module 211 as suspected divots, objects such as drain holes, fallen leaves, ponds, and bunkers are determined not to be divots and are not marked on the RGB image sent to the expert system.
- the communication module 220 of the course keeper server 200 may communicate with the unmanned aerial vehicle 100 and the expert system.
- the communication module 220 may communicate with the unmanned aerial vehicle 100 to receive a multispectral image and an RGB image from the unmanned aerial vehicle 100 , or may transmit an RGB image to an expert system.
- For example, the communication module 220 can communicate with the unmanned aerial vehicle 100 and the expert system through a mobile communication protocol such as 2G, 3G, 4G, 5G, WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), or HSDPA (High Speed Downlink Packet Access), but is not limited thereto.
- The detection module 230 receives, from the unmanned aerial vehicle 100, a multispectral image of the golf course taken by the multispectral camera 110 in order to detect divots, and the detection module 230 can detect divot objects in the received multispectral image based on the previously learned model.
- the detection module 230 may use an image analysis algorithm such as a Convolution Neural Network (CNN) or Recurrent Neural Network (RNN) when detecting a divot object from a multispectral image.
- Hereinafter, the CNN and RNN used by the detection module 230 to detect divots will be described with reference to FIGS. 7 and 8.
- FIG. 7 is a conceptual diagram illustrating the operation principle of a CNN.
- Referring to FIG. 7, the feature extraction layer is composed of a plurality of convolutional layers; the convolutional layer of any one stage generates a new feature map through a convolution operation between the input data (image data or a feature map) and a convolution kernel that acts as a filter, and inputs the generated feature map to the next convolutional layer.
- Each convolutional layer is composed of weights and biases, parameters whose values change with learning, and the values of the convolution kernel are automatically adjusted during the learning process to extract information useful for object detection, such as color, shape, and contour.
- The fully connected layer is composed of a flatten layer, which transforms the multidimensional feature map extracted by the feature extraction layer into one dimension, and a multi-layer perceptron (MLP).
- In this embodiment, the detection module 230 applies the feature extraction layer of the CNN to the multispectral image, inputs the flattened features into the multi-layer perceptron to perform the fully connected layer and, after re-evaluating reliability, finally detects the object from the features with high reliability.
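The convolution step that each stage of the feature extraction layer performs can be sketched in pure Python. This is a minimal "valid" cross-correlation (the operation deep-learning frameworks call convolution) with a fixed, hand-picked kernel; in an actual CNN the kernel weights would be learned, and the input would be a real multispectral band rather than this toy patch.

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation, the core operation of a convolutional layer.

    In a trained CNN the kernel weights are learned; here a fixed edge kernel
    is used only to show how a feature map is produced from input data.
    """
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            for di in range(kh):
                for dj in range(kw):
                    out[i][j] += image[i + di][j + dj] * kernel[di][dj]
    return out


# Toy single-band patch: bright turf (1) on the left, dark bare soil (0) on the right.
patch = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
# A vertical-edge kernel responds strongly at the turf/soil boundary.
edge_kernel = [[1, -1],
               [1, -1]]
feature_map = conv2d(patch, edge_kernel)
```

The resulting feature map peaks along the column where turf meets soil, which is exactly the kind of contour information the learned kernels extract.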
- FIG. 8 is a conceptual diagram illustrating an operation algorithm of an RNN.
- Referring to FIG. 8, a recurrent neural network (RNN) determines the state of an object by analyzing sequentially input time series data.
- The basic structure of an RNN has a recurrent weight (W) from the hidden layer back to itself.
- That is, an RNN is a neural network algorithm that includes a recurrent weight pointing to itself, forming a directed cycle in the connections between units.
- An RNN combines an MLP with a cyclic edge (W); when time series data is input, past output values are also fed back in through the cyclic edge. Accordingly, since the output of the RNN at time t is affected by the output at the previous time t-1, the RNN can be regarded as a network with a feedback structure that includes 'memory' storing past values.
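The recurrence described above, where the output at time t depends on the output at t-1 through the recurrent weight W, can be sketched with a scalar Elman-style cell. The weight values here are arbitrary illustrative choices, not anything the patent specifies.

```python
import math


def rnn_step(x_t, h_prev, w_x=1.0, w_h=0.5, b=0.0):
    """One step of a scalar Elman-style RNN cell:
    h_t = tanh(w_x * x_t + w_h * h_prev + b).
    The recurrent weight w_h feeds the previous output back in,
    which is the 'memory' of past values described above.
    """
    return math.tanh(w_x * x_t + w_h * h_prev + b)


def run_rnn(xs):
    """Unroll the cell over a time series; each output depends on all earlier inputs."""
    h = 0.0
    outputs = []
    for x in xs:
        h = rnn_step(x, h)
        outputs.append(h)
    return outputs


# The second input is zero, yet the second output is not: the state carries memory.
outputs = run_rnn([1.0, 0.0])
```

The nonzero second output despite a zero input is the feedback structure in miniature: the value at t is shaped by the value at t-1.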
- the output module 240 of the course keeper server 200 displays the map coordinates for the divot object detected by the detection module 230 .
- the output module 240 may provide map coordinates to a mobile terminal, a desktop, or a tablet PC.
- a separate mark and map coordinates may be displayed and provided on the RGB image so that the administrator can intuitively know the location where the divot object is generated.
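The patent does not specify how a detected object's pixel position is converted into the map coordinates the output module displays. One common approach, shown here purely as an assumption, is a north-up affine georeference from an orthorectified aerial image; the origin and pixel size values are hypothetical.

```python
def pixel_to_map(col, row, origin_x, origin_y, pixel_size_x, pixel_size_y):
    """Convert an image pixel index to map coordinates with a north-up
    affine georeference. origin_x/origin_y is the map position of the
    image's top-left corner; pixel_size_y is negative because row
    indices increase southward.
    """
    map_x = origin_x + col * pixel_size_x
    map_y = origin_y + row * pixel_size_y
    return map_x, map_y


# Hypothetical 5 cm/pixel orthomosaic with its top-left corner at (200000, 450000).
divot_x, divot_y = pixel_to_map(100, 40, 200000.0, 450000.0, 0.05, -0.05)
```

The resulting coordinates could then be shown alongside the mark on the RGB image so the manager sees where on the course the divot lies.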
- FIG. 6 is a flowchart illustrating a divot detection method of a golf course according to a second embodiment of the present invention.
- Referring to FIG. 6, the divot detection method of a golf course is a divot detection method performed by a course keeper server for golf course management, and includes a step of learning a divot object (S100), a step of receiving a multispectral image (S200), a step of detecting the divot object (S300), and a step of outputting the map coordinates of the divot object (S400).
- The step of learning the divot object (S100) is a step in which the machine learning module learns the divot object using the primary reading result of the multispectral image taken by the unmanned aerial vehicle and the secondary reading result of the RGB image.
- Specifically, the machine learning module receives the multispectral image and the RGB image taken by the unmanned aerial vehicle through the communication module, performs a primary reading of divot objects by matching the multispectral image against a predetermined criterion, and, when detailed reading is required, can learn the divot objects through a secondary reading of the RGB image corresponding to the multispectral image.
- The step of learning the divot object (S100) may include a step of separating objects (S110), a primary reading step of storing objects in the machine learning DB (S120), a step of transmitting to the expert system (S130), and a secondary reading step of storing objects in the machine learning DB (S140).
- Separating the objects ( S110 ) is a step ( S110 ) of the image processing module separating objects matching at least one of a predefined color, size, and shape from a multispectral image of a golf course.
- the image processing module separates objects that match at least one of three predefined criteria, ie, color, size, and shape, as divot suspect objects in the multispectral image of the golf course taken by the unmanned aerial vehicle.
- the image processing module may set the predefined criteria as a range rather than as a single value.
- For example, the image processing module may separate an object whose color corresponds to an ochre spectral band of 540 nm to 550 nm in the multispectral image as a suspected divot object. The image processing module may also separate an object whose size corresponds to 100 mm to 300 mm in the multispectral image as a suspected divot object. In addition, the image processing module may separate an object whose ratio of major diameter to minor diameter is 1:0.5 to 1:0.7 in the multispectral image as a suspected divot object.
- In this embodiment, the predefined criteria for separating suspected divot objects in the image processing module have been described as three, but four or more criteria may be defined.
- In the primary reading step (S120), the reading module reads the separated objects that satisfy all three criteria as divots and stores them in the machine learning DB.
- In the step of transmitting to the expert system (S130), the reading module determines that detailed reading is necessary for objects matching two or fewer criteria among the suspected divot objects separated by the image processing module, marks those objects on the RGB image, and transmits the RGB image to the expert system through the communication module.
- the pseudo-divot may be a place where the soil is exposed by a wild animal, lost due to a landslide, or where the grass is partially destroyed.
- In the secondary reading step of storing objects in the machine learning DB (S140), when the expert system notifies the reading module of objects additionally identified as divots or pseudo-divots, the reading module generates a learning model from the additionally identified objects and stores it in the machine learning DB.
- an object secondarily read as a divot or similar divot by an expert system in an RGB image is utilized as basic data when the reading module first reads a divot from a multispectral image, thereby increasing the accuracy in the primary reading.
- The learning model stored in the machine learning DB by the reading module can be used to detect divot objects in multispectral images taken later, when the unmanned aerial vehicle photographs the golf course with the multispectral camera.
- Meanwhile, for an object that matches two or fewer of the three predefined criteria (color, size, shape) and is transmitted to the expert system but is then analyzed as not being a divot, the reading module can create a learning model corresponding to that condition and store it in the machine learning DB.
- the object analyzed as not a divot by the reading module may be any one of a drain hole, fallen leaves, a pond, and a bunker.
- Thereafter, the reading module may compare objects matching two or fewer of the three predefined criteria (color, size, shape) with images previously analyzed as not being divots, and may not mark objects matching those images on the RGB image.
- That is, among the objects separated by the image processing module as suspected divots, objects such as drain holes, fallen leaves, ponds, and bunkers are not marked on the RGB image, which has the advantage of reducing the processing time required for the secondary reading in the expert system.
- The step of receiving the multispectral image (S200) is a step of receiving a multispectral image of the golf course from the unmanned aerial vehicle so that the detection module can detect divots in the multispectral image of the golf course based on the previously learned model.
- Next, in the step of detecting the divot object (S300), the divot object is detected in the multispectral image received from the unmanned aerial vehicle.
- The detection module may detect the divot object from the received multispectral image based on the pre-learned model.
- When detecting a divot object from a multispectral image, the detection module may use an image analysis algorithm such as a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN). Since the CNN and RNN have been described in the first embodiment, a redundant description is omitted.
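As a minimal illustration of the operation at the heart of the CNN mentioned above, the following pure-Python sketch implements a valid-mode 2D cross-correlation (the "convolution" layer's core computation). A real detector would use a deep learning framework and learned kernels; the example image and kernel here are arbitrary.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation over a small grayscale image,
    given as nested lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # output height
    ow = len(image[0]) - kw + 1   # output width
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out
```

For example, the kernel `[[-1, 1]]` responds strongly at vertical intensity edges, which is the kind of local feature a trained CNN filter might learn to associate with a divot's boundary against surrounding grass.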
- CNN: Convolutional Neural Network
- RNN: Recurrent Neural Network
- In the step of outputting the map coordinates of the divot object (S400), the output module displays the map coordinates of the divot object detected by the detection module.
- the output module may provide map coordinates to a mobile terminal, a desktop, or a tablet PC.
- A separate mark and map coordinates may be displayed on the RGB image so that the administrator can intuitively identify the location where the divot object occurred.
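Converting a detected divot's pixel position in a geo-referenced aerial image to map coordinates is typically done with an affine geotransform. The sketch below uses the GDAL-style six-parameter convention; the specific geotransform values in the usage note are illustrative assumptions, not values from the patent.

```python
def pixel_to_map(px, py, geotransform):
    """Map (column, row) pixel indices to (x, y) map coordinates.

    geotransform = (origin_x, pixel_width, row_rotation,
                    origin_y, col_rotation, pixel_height)
    following the GDAL affine geotransform convention
    (pixel_height is negative for north-up images).
    """
    gx, pw, rr, gy, cr, ph = geotransform
    x = gx + px * pw + py * rr
    y = gy + px * cr + py * ph
    return x, y
```

For a north-up image whose top-left corner sits at longitude 127.0 and latitude 37.5 with a 0.0001-degree pixel size, `pixel_to_map(100, 200, (127.0, 0.0001, 0.0, 37.5, 0.0, -0.0001))` yields a point near (127.01, 37.48), which the output module could then plot for the administrator.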
- All or part of the functions of the present invention described above may be implemented as software in machine language, alone or in combination with a series of program instructions, data files, data structures, and the like, and such software may be stored in a computer-readable recording medium.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory, and USB memory.
- Examples of program instructions include machine language code, such as that generated by a compiler, and high-level language code that can be executed by a computer using an interpreter or the like.
- the hardware device may be configured to operate as one or more software modules to perform the operations of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (10)
- A divot detection system for a golf course, comprising: an unmanned aerial vehicle equipped with a multispectral camera and an RGB camera to photograph a golf course; and a course keeper server including a machine learning module that learns divot objects using a primary reading result of the multispectral image and a secondary reading result of the RGB image photographed by the unmanned aerial vehicle, a communication module that receives the multispectral image photographed by the unmanned aerial vehicle, a detection module that detects a divot object from the received multispectral image, and an output module that displays map coordinates of the detected divot object.
- The divot detection system for a golf course of claim 1, wherein the machine learning module comprises: an image processing module that separates, from a multispectral image of the golf course, objects matching at least one of a predefined color, size, and shape; and a reading module that performs a primary reading of objects among the separated objects that satisfy all three criteria as divots and stores them in a machine learning DB, displays objects among the separated objects that match two or fewer criteria on the RGB image and transmits the RGB image to an expert system through the communication module, and stores objects secondarily read (annotated) as divots or similar divots by the expert system in the machine learning DB.
- The divot detection system for a golf course of claim 2, wherein the similar divot is any one of an area where wild animals have dug and exposed the soil, an area lost to a landslide, and an area where the grass has partially died.
- The divot detection system for a golf course of claim 2, wherein the reading module does not display, on the RGB image, objects among those matching two or fewer of the color, size, and shape that match an image pre-analyzed as not being a divot.
- The divot detection system for a golf course of claim 4, wherein the image analyzed as not being a divot is any one of a drain, fallen leaves, ground under repair, and a bunker.
- A divot detection method of a course keeper server for golf course management, the method comprising: learning, by a machine learning module, divot objects using a primary reading result of a multispectral image of a golf course and a secondary reading result of an RGB image; receiving, by a communication module, a multispectral image of the golf course; detecting, by a detection module, a divot object from the received multispectral image; and outputting, by an output module, map coordinates of the detected divot object.
- The divot detection method for a golf course of claim 6, wherein the learning of the divot objects comprises: separating, by an image processing module, objects matching at least one of a predefined color, size, and shape from the multispectral image of the golf course; a primary reading step of defining, by a reading module, objects among the separated objects that satisfy all three criteria as divots and storing them in a machine learning DB; displaying, by the reading module, objects among the separated objects that match two or fewer criteria on the RGB image, and transmitting the RGB image to an expert system through the communication module; and a secondary reading step in which, when the expert system notifies the reading module of an object additionally identified (annotated) as a divot or similar divot, the reading module stores the additionally identified object in the machine learning DB.
- The divot detection method for a golf course of claim 7, wherein the similar divot is any one of an area where wild animals have dug and exposed the soil, an area lost to a landslide, and an area where the grass has partially died.
- The divot detection method for a golf course of claim 7, wherein the reading module does not display, on the RGB image, objects among those matching two or fewer of the color, size, and shape that match an image pre-analyzed as not being a divot.
- The divot detection method for a golf course of claim 9, wherein the image analyzed as not being a divot is any one of a drain, fallen leaves, ground under repair, and a bunker.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022570299A JP7493841B2 (ja) | 2020-05-20 | 2020-11-27 | ゴルフコースのディボット探知システム及びこれを用いた探知方法 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20200060369 | 2020-05-20 | ||
KR10-2020-0060369 | 2020-05-20 | ||
KR1020200161575A KR102583303B1 (ko) | 2020-05-20 | 2020-11-26 | 골프 코스의 디봇 탐지 시스템 및 이를 이용한 탐지 방법 |
KR10-2020-0161575 | 2020-11-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021235634A1 true WO2021235634A1 (ko) | 2021-11-25 |
Family
ID=78698111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/017120 WO2021235634A1 (ko) | 2020-05-20 | 2020-11-27 | 골프 코스의 디봇 탐지 시스템 및 이를 이용한 탐지 방법 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7493841B2 (ko) |
KR (1) | KR102583303B1 (ko) |
WO (1) | WO2021235634A1 (ko) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102528034B1 (ko) * | 2021-12-09 | 2023-05-18 | 주식회사 유에프오에스트로넛 | 스마트 디봇 보수 시스템 및 방법 |
KR102512529B1 (ko) * | 2022-11-04 | 2023-03-21 | 주식회사 유오케이 | 골프장 운영 관리 장치 및 방법 |
KR102620094B1 (ko) * | 2023-02-06 | 2024-01-03 | (주) 다음기술단 | 골프장 페어웨이 디봇 보수장치 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101772210B1 (ko) * | 2017-04-20 | 2017-08-28 | 주식회사 일도엔지니어링 | 드론촬영 데이터와 지아이에스(gis) 분석 기법을 이용한 골프장 스프링클러 관리장치 |
KR101860548B1 (ko) * | 2018-02-27 | 2018-05-23 | 주식회사 일도엔지니어링 | 드론의 식생 촬영 데이터와 gis 분석 기법을 이용한 골프장 잔디 관리 장치 |
US20180150726A1 (en) * | 2016-11-29 | 2018-05-31 | Google Inc. | Training and/or using neural network models to generate intermediary output of a spectral image |
KR20180076753A (ko) * | 2016-12-28 | 2018-07-06 | 주식회사 엘렉시 | 이상패턴 감지 시스템 및 방법 |
KR102069780B1 (ko) * | 2018-09-10 | 2020-01-23 | 김두수 | 잔디관리 시스템 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018130065A (ja) * | 2017-02-15 | 2018-08-23 | 株式会社 神崎高級工機製作所 | ディボット修復システム |
-
2020
- 2020-11-26 KR KR1020200161575A patent/KR102583303B1/ko active IP Right Grant
- 2020-11-27 WO PCT/KR2020/017120 patent/WO2021235634A1/ko active Application Filing
- 2020-11-27 JP JP2022570299A patent/JP7493841B2/ja active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180150726A1 (en) * | 2016-11-29 | 2018-05-31 | Google Inc. | Training and/or using neural network models to generate intermediary output of a spectral image |
KR20180076753A (ko) * | 2016-12-28 | 2018-07-06 | 주식회사 엘렉시 | 이상패턴 감지 시스템 및 방법 |
KR101772210B1 (ko) * | 2017-04-20 | 2017-08-28 | 주식회사 일도엔지니어링 | 드론촬영 데이터와 지아이에스(gis) 분석 기법을 이용한 골프장 스프링클러 관리장치 |
KR101860548B1 (ko) * | 2018-02-27 | 2018-05-23 | 주식회사 일도엔지니어링 | 드론의 식생 촬영 데이터와 gis 분석 기법을 이용한 골프장 잔디 관리 장치 |
KR102069780B1 (ko) * | 2018-09-10 | 2020-01-23 | 김두수 | 잔디관리 시스템 |
Also Published As
Publication number | Publication date |
---|---|
JP2023526390A (ja) | 2023-06-21 |
KR20210143634A (ko) | 2021-11-29 |
JP7493841B2 (ja) | 2024-06-03 |
KR102583303B1 (ko) | 2023-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021235634A1 (ko) | 골프 코스의 디봇 탐지 시스템 및 이를 이용한 탐지 방법 | |
US10943114B2 (en) | Method for aerial imagery acquisition and analysis | |
WO2020040391A1 (ko) | 결합심층네트워크에 기반한 보행자 인식 및 속성 추출 시스템 | |
WO2021091021A1 (ko) | 화재 검출 시스템 | |
TW201921316A (zh) | 偵測航拍影像內物體之非暫態電腦可讀取媒體及系統,以及航拍影像中物體偵測之方法 | |
KR102098259B1 (ko) | 무인항공기를 이용한 산림병해충 의심목 선별 시스템 | |
CN112149513A (zh) | 基于深度学习的工业制造现场安全帽佩戴识别系统和方法 | |
CN109255286A (zh) | 一种基于yolo深度学习网络框架的无人机光学快速检测识别方法 | |
CN113486697B (zh) | 基于空基多模态图像融合的森林烟火监测方法 | |
Yimyam et al. | The Development of an Alerting System for Spread of Brown planthoppers in Paddy Fields Using Unmanned Aerial Vehicle and Image Processing Technique | |
WO2020141888A1 (ko) | 사육장 환경 관리 장치 | |
CN117574317A (zh) | 一种基于星空地多模态数据融合的山火监测方法和装置 | |
CN113378754A (zh) | 一种工地裸土监测方法 | |
CN114358178A (zh) | 一种基于YOLOv5算法的机载热成像野生动物物种分类方法 | |
WO2022196999A1 (ko) | 과실 수량 측정 시스템 및 그 방법 | |
Mou et al. | Spatial relational reasoning in networks for improving semantic segmentation of aerial images | |
CN115994953A (zh) | 电力现场安监追踪方法及系统 | |
CN110517435A (zh) | 一种便携式即时防火预警及信息采集处理预警系统及方法 | |
Kalmukov et al. | Methods for Automated Remote Sensing and Counting of Animals | |
CN114119713A (zh) | 一种基于人工智能与无人机遥感的林地空秃检测方法 | |
TW202125324A (zh) | 航拍影像自動物體偵測之方法及系統 | |
CN112699745A (zh) | 一种用于火灾现场的被困人员定位方法 | |
Hung et al. | Vision-based shadow-aided tree crown detection and classification algorithm using imagery from an unmanned airborne vehicle | |
WO2023106704A1 (ko) | 스마트 디봇 보수 시스템 및 방법 | |
RU2193308C2 (ru) | Способ подсчета теплокровных животных с летательного аппарата |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20937015 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022570299 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20937015 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.07.2023) |
|