WO2021125395A1 - Method for determining a singular region for optical navigation based on an artificial neural network, onboard map generation device, and method for determining the direction of a lander


Info

Publication number
WO2021125395A1
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
images
observation area
regions
determining
Application number
PCT/KR2019/018118
Other languages
English (en)
Korean (ko)
Inventor
이훈희
정다운
최한림
류동영
주광혁
Original Assignee
한국항공우주연구원
한국과학기술원
Application filed by 한국항공우주연구원, 한국과학기술원
Publication of WO2021125395A1

Classifications

    • B64G1/24: Cosmonautic vehicles; Parts of, or equipment specially adapted for fitting in or to, cosmonautic vehicles; Guiding or controlling apparatus, e.g. for attitude control
    • G06N3/02: Computing arrangements based on biological models; Neural networks
    • G06N3/08: Computing arrangements based on biological models; Neural networks; Learning methods
    • G08G5/00: Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/02: Traffic control systems for aircraft; Automatic approach or landing aids, i.e. systems in which flight data of incoming planes are processed to provide landing data

Definitions

  • The present invention relates to a method for determining a singular region for optical navigation based on an artificial neural network, an onboard map generation device, and a method for determining the direction of a lander.
  • More specifically, it relates to a method and computer program for determining, based on deep learning, robust singular regions so that optical navigation can be used on planets such as the moon, Mars, and Jupiter where no satellite navigation system exists, to an apparatus for generating an onboard map including the determined singular regions, and to a method for determining the direction of a lander using the onboard map thus generated.
  • In order to move to or land at a desired position on a planet, an aircraft such as a spacecraft or a lander must know its own position.
  • An optical navigation method may be used that stores images of the planet's surface in the form of a map and estimates the current position by comparing an image of the surface taken at the current location with the stored map.
  • Craters formed on the surface of a planet are used in optical navigation as special areas such as landmarks, but when the sun is low, as at the poles of a planet, mountainous terrain casts long shadows.
  • As a result, the craters may not be identifiable. Also, craters may not exist in some areas.
  • Another technical object of the present invention is to provide an apparatus for generating an onboard map that is loaded into memory to perform optical navigation in an aircraft flying around a planet or in a lander intended to land at a landing point on a planet.
  • Another technical object of the present invention is to provide a method for determining the direction of a lander by using an onboard map and a camera image in order for the lander to land at a preset landing point of a planet.
  • According to an aspect of the present invention, the method for determining a singular region is performed by a computing device and includes: determining an observation area and a plurality of candidate regions within the observation area; labeling location information on each of a plurality of images in which at least a portion of the observation area appears; training a convolutional neural network-based artificial neural network, based on the plurality of images labeled with the location information, to search for each of the plurality of candidate regions in each of the plurality of images; evaluating the search performance of each of the plurality of candidate regions based on the trained artificial neural network; and determining at least some of the plurality of candidate regions as singular regions based on the search performance of each of the plurality of candidate regions.
  • the plurality of candidate areas may be distributed so as not to overlap each other in the observation area.
  • the plurality of candidate areas may have a preset shape and may be arranged adjacent to each other in the observation area.
  • the plurality of candidate regions may be circular having a first diameter, and may be arranged adjacent to each other in the observation region.
  • The plurality of images of the observation area may include satellite images captured by at least one artificial satellite flying along a periodic orbit over the observation area.
  • the plurality of images of the observation area may include images photographed by an aircraft flying over the observation area.
  • the plurality of images of the observation area may include images of the observation area from the earth.
  • The plurality of images of the observation area may include synthetic images generated based on a modeled topography of the observation area, the position of the sun with respect to the observation area, and the pose of a camera photographing the observation area.
  • the method for determining the singular area may further include determining a landing point at which the lander will land and a landing trajectory for landing at the landing point.
  • the observation area may be determined according to the landing trajectory.
  • The method for determining the singular region may further include enlarging or reducing the plurality of images based on the altitude of the landing trajectory over the observation area and the spatial resolution of a camera mounted on the lander.
  • the location information may include location coordinates of a photographing area displayed in each of the plurality of images.
  • Image coordinates of each of the plurality of candidate regions in each image may be extracted based on the location coordinates of the photographed region labeled on that image.
  • the location information may include image coordinates of the plurality of candidate regions in each image.
  • The labeling may further include labeling, on each of the plurality of images, the position of the sun with respect to the observation area at the time corresponding to that image and the pose of the camera with respect to the observation area.
  • The training of the artificial neural network may include: inputting a first image from among the images labeled with the location information to the artificial neural network; receiving, as an output of the artificial neural network, estimated image coordinates of the plurality of candidate regions in the first image; and training the artificial neural network so as to minimize the difference between the image coordinates of the plurality of candidate regions labeled on the first image and the estimated image coordinates of the plurality of candidate regions in the first image.
  • The training of the artificial neural network may further include receiving, as an output of the artificial neural network, estimated sizes of the plurality of candidate regions in the first image, and training the artificial neural network to minimize the difference between the sizes of the plurality of candidate regions and the estimated sizes of the plurality of candidate regions in the first image.
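  • As a hedged illustration of the training objective described above, the following sketch computes a combined position-and-size error for the candidate regions of one labeled image; the array shapes, the squared-error form, and the weighting are assumptions for illustration, not taken from this disclosure.

```python
import numpy as np

# Minimal sketch of the loss described above, assuming the detector predicts, for each
# candidate region, its image coordinates (x, y) and size (e.g. a radius). The squared
# error form and the size weight are illustrative assumptions.

def candidate_region_loss(labeled_xy, labeled_size, est_xy, est_size, size_weight=1.0):
    """Sum of position and size errors over all candidate regions in one image.

    labeled_xy, est_xy     : (N, 2) arrays of labeled / estimated image coordinates
    labeled_size, est_size : (N,) arrays of labeled / estimated sizes
    """
    position_error = np.sum((labeled_xy - est_xy) ** 2)   # coordinate difference term
    size_error = np.sum((labeled_size - est_size) ** 2)   # size difference term
    return position_error + size_weight * size_error

# Example with 3 candidate regions in a 416 x 416 image.
labeled_xy = np.array([[100.0, 120.0], [250.0, 200.0], [330.0, 310.0]])
labeled_size = np.array([20.0, 20.0, 20.0])
est_xy = labeled_xy + np.array([[2.0, -1.0], [0.5, 3.0], [-4.0, 1.5]])
est_size = np.array([19.0, 21.5, 18.0])
print(candidate_region_loss(labeled_xy, labeled_size, est_xy, est_size))
```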
  • Evaluating the search performance of each of the plurality of candidate regions may include evaluating the search performance of a first candidate region among the plurality of candidate regions.
  • Evaluating the search performance of the first candidate region may include: inputting each of a plurality of test images in which at least a part of the observation area appears to the trained artificial neural network; receiving, as an output of the trained artificial neural network, estimated image coordinates of the plurality of candidate regions in each of the plurality of test images; calculating an F1 score based on the output of the trained artificial neural network; and determining the classification performance of the first candidate region based on the calculated F1 score.
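  • The following sketch illustrates one way the F1 score of a single candidate region could be computed from detector outputs over a set of test images; the matching rule (a detection counts as a true positive only within a pixel distance threshold) is an assumption added for illustration.

```python
import math

# Hedged sketch of computing the F1 score for one candidate region across test images.
# The match_radius rule below is an assumption; the patent only states that an F1
# score is computed from the detector output.

def f1_for_candidate(labeled_positions, detections, match_radius=10.0):
    """labeled_positions: list of (x, y) or None (candidate not visible in that image)
    detections:          list of (x, y) or None (detector did not report the candidate)"""
    tp = fp = fn = 0
    for truth, det in zip(labeled_positions, detections):
        if truth is not None and det is not None:
            if math.hypot(det[0] - truth[0], det[1] - truth[1]) <= match_radius:
                tp += 1
            else:
                fp += 1          # reported, but too far from the labeled position
        elif truth is None and det is not None:
            fp += 1              # reported where the candidate is not present
        elif truth is not None and det is None:
            fn += 1              # present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

labels = [(100, 120), (102, 118), None, (98, 121)]
dets = [(101, 119), None, (240, 300), (99, 122)]
print(f1_for_candidate(labels, dets))   # 2 TP, 1 FP, 1 FN -> F1 = 4/6 ~= 0.67
```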
  • Evaluating the search performance of the first candidate region may include determining the accuracy performance of the first candidate region based on the position difference between the image coordinates of the first candidate region labeled on each test image and the estimated image coordinates of the first candidate region in that test image.
  • Evaluating the search performance of the first candidate region may further include displaying the classification performance and the accuracy performance of the first candidate region as a classification performance graph and an accuracy performance graph, respectively, according to the azimuth and elevation of the sun, based on the position of the sun with respect to the observation area labeled on each of the plurality of test images.
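  • A hedged sketch of how per-test-image results might be aggregated into such classification and accuracy performance graphs, binned by the labeled sun azimuth and elevation, is shown below; the bin widths and data layout are illustrative assumptions.

```python
import numpy as np

# Organize per-test-image results into grids of classification performance
# (detection rate) and accuracy performance (mean pixel error) binned by the
# sun azimuth and elevation labeled on each test image.

def performance_grids(sun_az_deg, sun_el_deg, detected, pixel_error,
                      az_bin=30.0, el_bin=0.5, el_max=2.0):
    az_edges = np.arange(0.0, 360.0 + az_bin, az_bin)
    el_edges = np.arange(0.0, el_max + el_bin, el_bin)
    shape = (len(el_edges) - 1, len(az_edges) - 1)
    detect_rate = np.full(shape, np.nan)
    mean_error = np.full(shape, np.nan)
    ai = np.clip(np.digitize(sun_az_deg, az_edges) - 1, 0, shape[1] - 1)
    ei = np.clip(np.digitize(sun_el_deg, el_edges) - 1, 0, shape[0] - 1)
    for r in range(shape[0]):
        for c in range(shape[1]):
            mask = (ei == r) & (ai == c)
            if mask.any():
                detect_rate[r, c] = np.mean(detected[mask])       # classification performance
                hits = mask & detected.astype(bool)
                if hits.any():
                    mean_error[r, c] = np.mean(pixel_error[hits]) # accuracy performance
    return detect_rate, mean_error

# Example: 200 synthetic test results with sun elevation between 0.33 and 1.898 degrees.
rng = np.random.default_rng(0)
az = rng.uniform(0.0, 360.0, 200)
el = rng.uniform(0.33, 1.898, 200)
detected = rng.random(200) > 0.2
err = rng.uniform(0.0, 5.0, 200)
rate, acc = performance_grids(az, el, detected, err)
print(rate.shape, np.nanmean(rate))
```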
  • Determining at least some of the plurality of candidate regions as singular regions may include determining the singular regions based on the classification performance graph and the accuracy performance graph of each of the plurality of candidate regions.
  • the method for determining the singular area may further include determining a landing point at which the lander will land and a landing trajectory for landing at the landing point.
  • Determining at least some of the plurality of candidate regions as singular regions may include: determining at least some of the candidate regions as first singular regions based on the classification performance graph and the accuracy performance graph of each of the plurality of candidate regions and a first landing date and time of the lander; generating a first onboard map including the first singular regions within the observation area; determining at least some of the plurality of candidate regions as second singular regions based on the classification performance graph and the accuracy performance graph of each of the plurality of candidate regions and a second landing date and time of the lander; and generating a second onboard map including the second singular regions within the observation area.
  • The computer program according to an aspect of the present invention is stored in a medium in order to execute the above-described singular region determination methods using a computing device.
  • An onboard map generating apparatus according to an aspect of the present invention includes a memory storing a plurality of images showing at least a part of an observation area and a convolutional neural network-based artificial neural network, and at least one processor.
  • The at least one processor determines the observation area and a plurality of candidate regions within the observation area, labels location information on each of the plurality of images, trains the artificial neural network, based on the plurality of images labeled with the location information, to search for each of the plurality of candidate regions in each of the plurality of images, evaluates the search performance of each of the plurality of candidate regions based on the trained artificial neural network, determines at least some of the plurality of candidate regions as singular regions based on the search performance of each of the candidate regions, and generates the onboard map including the singular regions within the observation area.
  • The method for determining the direction of the lander according to an aspect of the present invention is performed by a processor of the lander and includes: generating an observation area image by photographing a preset observation area at a preset altitude while flying along the landing trajectory; searching, in the observation area image, for the singular regions within the observation area of an onboard map stored in a memory; and determining the direction of the lander based on the difference in the arrangement directions of the singular regions.
  • The onboard map is generated by a computing device that performs the steps of: determining an observation area and a plurality of candidate regions within the observation area; labeling location information on each of a plurality of images in which at least a part of the observation area appears; training a convolutional neural network-based artificial neural network, based on the plurality of images labeled with the location information, to search for each of the candidate regions in each of the images; evaluating the search performance of each of the plurality of candidate regions based on the trained artificial neural network; determining at least some of the plurality of candidate regions as the singular regions based on the search performance of each of the plurality of candidate regions; and generating the onboard map including the singular regions within the observation area.
  • An onboard map that is loaded into memory can be created to perform optical navigation on a vehicle flying around a planet or a lander attempting to land on a planet's landing site. Since the moon has no atmosphere, there is little change in terrain, and the onboard map can be used semi-permanently.
  • the onboard map and camera image can be used to accurately determine the lander's direction.
  • FIG. 1 exemplarily shows an orbiter revolving around the moon and a lander landing at a landing point on the moon in accordance with the present invention.
  • Figure 2 exemplarily shows a lander landing on the lunar landing site in accordance with the present invention.
  • FIG. 3 schematically shows the internal configuration of a lander according to the present invention.
  • FIG. 5 schematically shows a computing device for determining singular regions based on images of an observation region of the moon and generating an on-board map including the singular regions according to the present invention.
  • FIGS. 6A to 6C exemplarily show configurations of candidate regions in an observation area according to various embodiments of the present disclosure.
  • FIG. 7 schematically shows an internal configuration of a computing device according to the present invention.
  • FIG. 8 is an exemplary block diagram of a control unit according to embodiments of the present invention.
  • FIG. 9 is a block diagram of a data learning unit according to an embodiment of the present invention.
  • FIG. 10 is a block diagram of a data recognition unit according to an embodiment of the present invention.
  • Some embodiments may be described in terms of functional block configurations and various processing steps. Some or all of these functional blocks may be implemented in various numbers of hardware and/or software configurations that perform specific functions.
  • the functional blocks of the present disclosure may be implemented by one or more microprocessors, or by circuit configurations for a given function.
  • the functional blocks of the present disclosure may be implemented in various programming or scripting languages.
  • the functional blocks of the present disclosure may be implemented as an algorithm running on one or more processors.
  • a function performed by a functional block of the present disclosure may be performed by a plurality of functional blocks, or functions performed by a plurality of functional blocks in the present disclosure may be performed by one functional block.
  • the present disclosure may employ prior art for electronic configuration, signal processing, and/or data processing, and the like.
  • An aircraft flying using optical terrain-referenced absolute navigation needs proper landmark information along the flight path.
  • When the vehicle flies close to a planet such as the moon, craters can be selected as intuitive landmarks for optical navigation.
  • However, in regions with large shaded areas it is difficult to reliably detect craters, and the craters may then fail to function as landmarks for optical navigation.
  • the present invention proposes a method for determining singular areas that can be used as landmarks of optical navigation even in rough terrain.
  • the present invention proposes a method for determining a flight plan using singular regions determined as landmarks of optical navigation.
  • The present invention also creates an onboard map including the singular regions and proposes a method of flying around the moon or landing on the moon using it.
  • In order to determine good landmarks, a convolutional neural network (CNN)-based object detector is trained to distinguish similar landmark candidates from each other even under various lighting environments.
  • a convolutional neural network is used to predict the detectability of landmarks along flight paths on any day within a year. A date with a high probability of detection may be determined, and a mission plan may be determined with reference to this date.
  • The present invention determines singular regions that can function as landmarks for optical navigation on the surface of planets such as the moon, Mars, and Jupiter, where navigation satellites do not exist, and generates an onboard map, based on the determined singular regions, to be mounted on a spacecraft or lander.
  • In the following description, extraterrestrial planets such as the moon, Mars, and Jupiter are collectively referred to as the 'moon' for ease of understanding of the present invention.
  • However, the present invention is not limited to exploration around the moon or landing on the moon, and may be applied to other planets such as Mars or Jupiter within the equivalent scope of the present invention.
  • a pose is a concept including a position and a direction.
  • the pose of the lander includes the position of the lander and the orientation of the lander.
  • FIG. 1 exemplarily shows an orbiter revolving around the moon and a lander landing at a landing point on the moon in accordance with the present invention.
  • the orbiter SC flies along the orbit OR around the moon M.
  • The orbiter SC may store the onboard map generated according to the present invention in a memory, and the processor of the orbiter SC may determine its flight direction and/or attitude based on the onboard map. If the determined flight direction and/or attitude differs from the planned flight direction and/or attitude, the orbiter may correct it by itself.
  • the lander LL may land at a landing point LS of the moon M along a predetermined landing trajectory LT.
  • the lander LL may photograph predetermined observation areas OA1 , OA2 , and OA3 while descending to the landing point LS.
  • The lander LL may store the onboard map generated according to the present invention in a memory, and the processor of the lander LL may determine the flight direction and/or attitude based on surface images of the observation areas OA1, OA2, and OA3 and the onboard map of the observation areas OA1, OA2, and OA3.
  • the lander LL can self-correct its landing direction or attitude so that it can land at the landing point LS.
  • The onboard map stores information about singular objects (or regions) that can serve as landmarks for optical terrain-referenced navigation. Singular objects (or regions) are determined for each of the observation areas OA1, OA2, and OA3.
  • The singular objects (or regions) may vary depending on the date of landing.
  • In this case, first and second onboard maps, each including first and second singular objects (or regions) that differ from each other according to the expected landing date, may be generated and stored in the memory of the lander LL.
  • The first onboard map may be prepared as the primary map, and the second onboard map may be prepared as a preliminary (backup) map.
  • The singular object refers to an object (or region) that can be distinguished from other objects (or regions), even under variously changing environments, in images captured of the observation areas OA1, OA2, and OA3.
  • a singular object actually corresponds to a specific area with a known location within the observation area (OA1, OA2, OA3), and can be understood as a set of specific pixels corresponding to this specific area in the image taken of the observation area (OA1, OA2, OA3).
  • a singular object is referred to as a singular region within the observation areas OA1, OA2, and OA3.
  • a singular area can be distinguished from other areas by the topography of that area.
  • A singular object can be distinguished from other objects under the various lighting conditions determined by the position (altitude and azimuth) of the sun over the corresponding observation area OA1, OA2, OA3 of the moon.
  • The processor mounted on the lander LL searches, in images captured of the observation areas OA1, OA2, and OA3, for the singular objects within the observation areas OA1, OA2, and OA3 recorded on the onboard map through correlation analysis or a RANSAC algorithm, and can thereby estimate the attitude of the lander LL.
  • Three observation areas OA1, OA2, and OA3 are illustrated, but this is only exemplary; there may be four or more, or two or fewer.
  • When referring to any one of the observation areas OA1, OA2, and OA3, it is denoted as the observation area OA.
  • Figure 2 exemplarily shows a lander landing on the lunar landing site in accordance with the present invention.
  • the lander LL may descend to the landing point LS at an altitude of, for example, 15.24 km along the landing trajectory LT.
  • the landing point LS may be adjacent to the south pole. For example, it may be 89.98 degrees south latitude and 0.02 degrees east longitude. To this end, the landing could begin at 74.958 degrees south latitude and 0.02 degrees east longitude.
  • the lander LL can reach the landing point LS while flying in the exact south direction.
  • Pre-set observation areas OA1 and OA3 that can be photographed according to the landing trajectory LT may be photographed.
  • The first observation area OA1 may be an area passed over when flying at an altitude of about 10 km along the landing trajectory LT, and the third observation area OA3 may be an area passed over when flying at an altitude of about 3 km along the landing trajectory LT.
  • the observation areas OA1 and OA3 may be located between 74.958 degrees south latitude and 89.98 degrees south latitude, and may be located at 0.02 degrees east longitude.
  • the first observation area OA1 may be located at, for example, 85.39 degrees south latitude.
  • The polar region of the moon M is known to be of high exploration value compared to other regions, because the altitude of the sun S is low, so there is a high possibility of the presence of water, and there are mountain peaks that can always communicate with the Earth.
  • the altitude ⁇ of the sun S may be 2 degrees or less, for example, between 0.330 degrees and 1.898 degrees.
  • the azimuth ⁇ of the sun S may be between 0 degrees and 360 degrees.
  • FIG. 3 schematically shows the internal configuration of a lander according to the present invention.
  • the lander 10 includes a processor 11 , a memory 12 , a camera 13 , a sensor 14 , and a thruster 15 .
  • Lander 10 may further include components such as star trackers, devices according to a planned mission, and the like.
  • the lander 10 may include only some of the above-described components depending on the design.
  • the lander 10 may be, for example, a lunar lander LL shown in FIG. 2 , and may fly along a landing trajectory LT to land at a landing point LS of the moon M.
  • the processor 11 is responsible for overall control of the lander 10 , and may control the lander 10 so that the lander 10 can fly along the landing trajectory LT and land at the landing point LS.
  • the memory 12 stores instructions for operating the processor 11 .
  • information about the landing trajectory LT is stored in the memory 12 .
  • The memory 12 may store an onboard map containing singular regions within the observation areas OA, which is used to determine whether the lander 10 is flying in the planned direction to land at the landing point LS and is maintaining the planned attitude along the landing trajectory LT.
  • the lander 10 may start to land in order to land at the landing point LS while orbiting around the moon M.
  • The landing plan determines a set landing date, and other landing dates may be prepared in advance in case the landing cannot be made on that date.
  • The landing date on which the landing is attempted preferentially is referred to as the first landing date, and the preliminary landing date prepared in advance in case of failure to land on the first landing date is referred to as the second landing date.
  • third and fourth landing dates may be predetermined.
  • Here, the landing date does not refer only to the calendar date; it means the landing date and time, including the landing time on that date.
  • The memory 12 may store a first onboard map of the observation areas OA including first singular regions selected according to the solar altitude on the first landing date, and a second onboard map of the observation areas OA including second singular regions selected according to the solar altitude on the second landing date.
  • the processor 11 may use the first onboard map on the first landing date and use the second onboard map on the second landing date.
  • the observation area OA of the first onboard map and the observation area OA of the second onboard map may not be identical to each other.
  • The first singular regions of the first onboard map and the second singular regions of the second onboard map may all be the same, partly the same, or all different.
  • the camera 13 may be mounted on the lander 10 to photograph the surface of the moon M.
  • the camera 13 is mounted at a preset position on the lander 10 and may be oriented in a preset direction.
  • the camera 13 may be set to point in the direction of the center of the moon M, or may point in a fixed direction with respect to the lander 10 .
  • the camera 13 may be a two-dimensional camera and may generate a black-and-white image. According to another example, the camera 13 may be a one-dimensional line camera. The camera 13 may be a color camera that generates a color image. The angle of view of the camera 13 may be, for example, 80 degrees, and the aspect ratio of the image generated by the camera 13 may be, for example, 1. The resolution of the camera 13 may be, for example, 416 x 416.
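  • As a rough, non-authoritative check of what these example camera parameters imply, the following calculation assumes a nadir-pointing camera over flat terrain and derives the image footprint and ground sample distance at the example altitudes of FIG. 2; these derived numbers are not stated in this disclosure.

```python
import math

# Back-of-the-envelope footprint implied by the example camera (80-degree field of
# view, 416 x 416 pixels), assuming a nadir-pointing camera over flat terrain.

def footprint_and_gsd(altitude_m, fov_deg=80.0, pixels=416):
    swath_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)  # ground width covered
    return swath_m, swath_m / pixels                                    # ground sample distance

for alt in (10_000.0, 3_000.0):   # altitudes of observation areas OA1 and OA3 in FIG. 2
    swath, gsd = footprint_and_gsd(alt)
    print(f"altitude {alt/1000:.0f} km: footprint ~ {swath/1000:.1f} km, ~ {gsd:.0f} m/pixel")
# altitude 10 km: footprint ~ 16.8 km, ~ 40 m/pixel
# altitude 3 km:  footprint ~ 5.0 km,  ~ 12 m/pixel
```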
  • The camera 13 may photograph the surface of the moon M under the control of the processor 11 and may provide the surface image to the processor 11.
  • the sensor 14 may detect the state of the lander 10 .
  • the sensor 14 may include at least one of an inertial sensor, an acceleration sensor, a gravity sensor, an altitude sensor, and a temperature sensor.
  • the sensor 14 may detect the altitude of the lander 10 .
  • the sensor 14 may provide the sensed value to the processor 11 .
  • the thruster 15 may provide a force for changing the attitude or flight direction of the lander 10 under the control of the processor 11 .
  • According to the landing procedure instructions stored in the memory 12, the processor 11 operates the camera 13 when it determines that the lander 10 is located over the observation area OA, and thereby obtains an image 13p of the surface of the moon M.
  • the processor 11 may determine that the lander 10 is located on the observation area OA by sensing the altitude of the lander 10 through the sensor 14 .
  • The processor 11 loads the onboard map 12m of the corresponding observation area OA stored in the memory 12 and can compare the onboard map 12m of the observation area OA with the surface image 13p of the observation area OA.
  • The processor 11 may search for the singular regions SRO in the observation area OA recorded on the onboard map 12m from the surface image 13p of the observation area OA through correlation analysis or a RANSAC algorithm.
  • The processor 11 can thereby determine the current flight direction or attitude of the lander 10.
  • The processor 11 may include a pre-trained convolutional neural network-based object detection model, and may determine the current flight direction or attitude of the lander 10 by inputting the surface image 13p and the onboard map 12m to the object detection model.
  • The processor 11 compares the determined current flight direction or attitude with the direction or attitude according to the preset landing plan and, if they differ, can correct the attitude or flight direction of the lander 10 using the thruster 15. Accordingly, the lander 10 can land accurately at the predetermined landing point LS according to the predetermined landing plan.
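  • As one hedged illustration of this step, once the singular regions of the onboard map 12m have been located in the surface image 13p (for example through correlation analysis or RANSAC), the in-plane rotation between the expected and observed arrangements can be estimated with a simple least-squares fit; the Kabsch-style fit below is one plausible realization, not necessarily the method used in this disclosure.

```python
import numpy as np

# Estimate the heading offset of the lander from matched singular-region positions:
# map_xy are the positions expected from the onboard map, image_xy the positions
# found in the surface image. A 2-D least-squares (Kabsch-style) fit recovers the
# in-plane rotation between the two arrangements.

def heading_offset_deg(map_xy, image_xy):
    """map_xy, image_xy: (N, 2) arrays of matched singular-region positions (N >= 2)."""
    a = map_xy - map_xy.mean(axis=0)
    b = image_xy - image_xy.mean(axis=0)
    h = a.T @ b                                   # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # keep a proper rotation (det = +1)
    r = vt.T @ np.diag([1.0, d]) @ u.T
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

# Example: three singular regions observed rotated by 5 degrees about the image centre.
expected = np.array([[120.0, 90.0], [300.0, 150.0], [210.0, 330.0]])
theta = np.radians(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
centre = np.array([208.0, 208.0])
observed = (expected - centre) @ rot.T + centre
print(round(heading_offset_deg(expected, observed), 2))   # ~ 5.0
```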
  • FIG. 5 schematically shows a computing device for determining singular regions based on images of an observation region of the moon and generating an onboard map including the singular regions according to the present invention.
  • a plurality of images 200 are input to the computing device 100 , and the computing device 100 generates an onboard map 300 based on the plurality of images 200 .
  • The images 200 are images of the observation area OA of the moon M. Some of the images 200 may include the entire observation area OA. According to another example, some of the images 200 may include only a portion of the observation area OA; that is, only a part of the observation area OA may appear in the image 200. The magnifications of the images 200 may all be the same or may differ from one another.
  • the images 200 may include training images for training an object detection model based on a convolutional neural network in the computing device 100 , and test images for determining specific regions by testing the object detection model.
  • the images 200 may further include verification images for verifying the validity of the object detection model.
  • an object detection model based on a convolutional neural network may be referred to as an artificial neural network or a lunanet.
  • the observation area OA is determined based on a predetermined landing point LS and a landing trajectory LT for landing at the landing point LS. Landing dates for landing at the landing point LS may also be predetermined. The exact position of the observation area OA is known in advance.
  • the observation area OA may be temporarily determined. When a preset number of singular areas SRO does not exist in the observation area OA, the observation area OA may be changed. According to another example, when a preset number of singular areas SRO does not exist in the observation area OA, the configuration of the candidate areas (CRO of FIG. 6 ) may be changed.
  • The images 200 may include satellite images captured by at least one artificial satellite flying along a periodic orbit over the observation area OA.
  • the images 200 may include images captured by an aircraft flying over the observation area OA.
  • The images 200 may include images of the observation area OA captured from the Earth.
  • The images 200 may include synthetic images generated based on the modeled topography of the observation area OA, the position of the sun with respect to the observation area OA, and the pose of a camera photographing the observation area OA.
  • the number of synthesized images generated for the observation area OA may be, for example, hundreds to tens of thousands.
  • The modeled topography of the observation area OA may be generated based on at least one of satellite images captured by at least one satellite flying along a periodic orbit over the observation area OA, images captured by a vehicle flying over the observation area OA, and images of the observation area OA captured from the Earth.
  • the position of the sun with respect to the observation area OA may be set within a preset range based on the position of the observation area OA.
  • the composite images may be generated while changing the position of the sun with respect to the observation area OA within a preset range.
  • the pose of the camera capturing the observation area OA may be set within a preset range based on the landing trajectory LT. Synthetic images may be generated while changing a pose of a camera that captures the observation area OA within a preset range. For example, while changing the position of the camera with respect to the observation area OA, the synthesized images may be generated while also changing the direction of the camera facing the observation area OA.
  • Synthetic images may be generated based on a measured three-dimensional topography of the observation area OA. Synthetic images may be generated based on the surface reflectivity and material of the modeled terrain. Synthetic images may be generated based on a modeled atmospheric model. The synthetic images may be generated based on at least one of the resolution, field of view, and lens of the camera.
  • In other words, the synthetic images may be generated based on at least some of: the actual or modeled three-dimensional topography of the observation area OA, the modeled surface reflectance and material, a modeled atmospheric model, the position of the sun with respect to the observation area OA, and the pose (position and orientation), resolution, field of view, and lens of the camera photographing the observation area OA.
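  • The sketch below enumerates rendering configurations for such a synthetic image set by sweeping the sun position and small camera-pose offsets; the parameter ranges echo the example sun elevations and altitudes mentioned in this document, while the step sizes and the RenderConfig fields are illustrative assumptions.

```python
import itertools
from dataclasses import dataclass

# Enumerate rendering configurations for synthetic training images. Each configuration
# would be handed, together with the modeled topography, surface reflectance, and
# camera model, to a terrain renderer to produce one labeled image.

@dataclass
class RenderConfig:
    sun_azimuth_deg: float
    sun_elevation_deg: float
    cam_along_track_m: float     # camera offset along the landing trajectory
    cam_cross_track_m: float     # camera offset across the landing trajectory
    cam_yaw_deg: float           # camera heading about the nadir axis

def sweep_configs():
    sun_az = [a * 10.0 for a in range(36)]                  # 0, 10, ..., 350 deg
    sun_el = [0.33, 0.85, 1.37, 1.898]                      # low polar sun elevations
    along = [-500.0, 0.0, 500.0]
    cross = [-500.0, 0.0, 500.0]
    yaw = [-5.0, 0.0, 5.0]
    return [RenderConfig(a, e, x, y, w)
            for a, e, x, y, w in itertools.product(sun_az, sun_el, along, cross, yaw)]

configs = sweep_configs()
print(len(configs))   # 36 * 4 * 3 * 3 * 3 = 3888 images, within "hundreds to tens of thousands"
```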
  • the computing device 100 may determine the observation area OA and a plurality of candidate areas (CRO of FIG. 6 ) within the observation area OA.
  • The computing device 100 generates an object detection model based on the images 200 and, using the object detection model, may determine singular regions (SRO of FIG. 4) that have particularly high search performance compared to the other candidate regions CRO within the observation area OA.
  • the computing device 100 may generate the onboard map 300 including the singular areas SRO determined as above in the observation area OA.
  • As described above, the onboard map 300 is stored, for example, in the memory 12 of the lander 10 of FIG. 3 and can be used to land the lander 10 accurately at the landing point LS of the moon M.
  • FIGS. 6A to 6C exemplarily show configurations of candidate regions in an observation area OA according to various embodiments of the present disclosure.
  • candidate areas CRO1 , CRO2 , ..., CROi in the observation area OA are illustrated.
  • the candidate regions CRO1 , CRO2 , ..., CROi may be referred to as candidate regions CRO.
  • the candidate areas CRO may be selected by a user within the observation area OA.
  • the candidate areas CRO may be distributed so as not to overlap each other within the observation area OA.
  • Although the candidate regions CRO are illustrated as rectangles in FIG. 6A, this is exemplary; they may be circular, triangular, hexagonal, or irregular polygons. At least some of the candidate regions CRO may be inclined.
  • the candidate regions CRO may have different shapes.
  • Since the user knows the exact location of the observation area OA, the user also knows the exact locations of the candidate regions CRO.
  • When the candidate regions CRO are rectangular, at least one of a center position, a horizontal length, a vertical length, and an inclination may be determined to specify each candidate region CRO.
  • the number of candidate areas CRO may be 4 or less or 6 or more.
  • the number of candidate regions CRO may be greater than or equal to a preset number.
  • The preset number may be three or more so that the attitude of the lander can be determined based on the arrangement of the singular regions found in the surface image captured by the lander. As the preset number is set larger, the accuracy of the lander's attitude increases, but not all of the singular regions may be detected in the surface image.
  • candidate areas CRO1 , CRO2 , ..., CROi in the observation area OA are illustrated.
  • the candidate regions CRO1 , CRO2 , ..., CROi may be referred to as candidate regions CRO.
  • the candidate areas CRO may all have a preset shape and may be arranged adjacent to each other in the observation area OA.
  • The candidate regions CRO are illustrated as regular hexagons, but this is exemplary; they may have other shapes such as a square, a triangle, or a rhombus.
  • the candidate areas CRO may be arranged so that their boundaries contact each other within the observation area OA. However, an empty space may exist between the candidate regions CRO. An area other than the candidate areas CRO among the observation area OA may be referred to as a background area. When the candidate regions CRO are rectangular, the background region may not exist.
  • Candidate areas CRO may not be set in the edge area of the observation area OA and may be left as a background area. Even if the lander attempts to capture the observation area OA, only a part of the observation area OA may be photographed depending on the attitude of the lander. That is, a part of the edge area of the observation area OA may not be photographed. In consideration of this point, it may be advantageous that the singular areas SRO are not located in the edge area of the observation area OA. To this end, candidate areas CRO may not be set in the edge area of the observation area OA.
  • the candidate regions CRO may all have the same size and the same shape.
  • the candidate regions CRO may be specified only by the central position and one length.
  • One length may be set as one of a distance between a center point and a vertex, a length of one side, or a distance between opposing vertices.
  • the number of candidate regions CRO may be less or more.
  • When a preset number of singular regions SRO is not found, at least one of the position, number, shape, and arrangement of the candidate regions CRO may be changed. Thereafter, the computing device 100 may again search for the preset number of singular regions SRO among the changed candidate regions CRO.
  • candidate areas CRO1 , CRO2 , ..., CROi in the observation area OA are illustrated.
  • the candidate regions CRO1 , CRO2 , ..., CROi may be referred to as candidate regions CRO.
  • the candidate areas CRO are all circular having a preset radius, and may be arranged adjacent to each other in the observation area OA.
  • the candidate areas CRO may be arranged so that their boundaries contact each other within the observation area OA.
  • the candidate regions CRO may be arranged to be spaced apart from each other.
  • Candidate areas CRO may not be set in the edge area of the observation area OA.
  • the candidate regions CRO may all have the same radius. In this case, the candidate regions CRO may be specified only by the central position and the radial length.
  • the number of candidate regions CRO may be less or more.
  • When a preset number of singular regions SRO is not found, at least one of the position, number, shape, and arrangement of the candidate regions CRO may be changed. Thereafter, the computing device 100 may again search for the preset number of singular regions SRO among the changed candidate regions CRO.
  • the positions of the candidate regions CRO may be horizontally moved by the radial length of the candidate regions CRO.
  • the number of candidate regions CRO may be changed by increasing or decreasing the radial length of the candidate regions CRO.
  • the shape of the candidate regions CRO may be changed to a hexagon, a rectangle, a triangle, etc. as shown in FIG. 6B .
  • a separation distance between the candidate regions CRO may be adjusted.
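  • A minimal sketch of generating such a layout of equally sized circular candidate regions, hexagonally packed inside a square observation area with an edge margin left as background, as suggested for FIG. 6C, is shown below; the area size, radius, and margin values are assumptions for illustration.

```python
import math

# Lay out equal circles as candidate regions inside a square observation area, packed
# hexagonally so neighbouring boundaries touch, leaving an edge margin as background.
# Each candidate is specified only by its centre and radius.

def circular_candidate_grid(area_size, radius, margin):
    """Return a list of (id, cx, cy, radius) tuples inside a square area_size x area_size."""
    centres = []
    row_step = radius * math.sqrt(3.0)          # vertical distance between hex-packed rows
    y = margin + radius
    row = 0
    while y + radius <= area_size - margin:
        x = margin + radius + (radius if row % 2 else 0.0)   # offset every other row
        while x + radius <= area_size - margin:
            centres.append((len(centres), x, y, radius))
            x += 2.0 * radius
        y += row_step
        row += 1
    return centres

candidates = circular_candidate_grid(area_size=416.0, radius=30.0, margin=20.0)
print(len(candidates), candidates[0])
# Changing the radius, margin, or row offset regenerates a different candidate layout,
# which is one way the configuration could be revised when too few singular regions are found.
```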
  • the candidate areas CRO are set by the user in the observation area OA. Since the user knows the exact location of the observation area OA, the user also knows the exact location of the candidate areas CRO.
  • FIG. 7 schematically shows an internal configuration of a computing device according to the present invention.
  • the computing device 100 includes a control unit 110 , a memory 120 , and a database (DB, 130 ).
  • the controller 110 , the memory 120 , and the DB 130 may exchange data with each other through a bus.
  • the computing device 100 may further include additional components such as, for example, a communication module, an input/output device, and a storage device, in addition to the components illustrated in FIG. 7 .
  • additional components such as, for example, a communication module, an input/output device, and a storage device, in addition to the components illustrated in FIG. 7 .
  • the controller 110 typically controls the overall operation of the computing device 100 .
  • the controller 110 may perform basic arithmetic, logic, and input/output operations, and execute, for example, program code stored in the memory 120 , for example, an object detection model based on a convolutional neural network.
  • the controller 110 may be at least one processor.
  • Although the computing device 100 is shown as one device, it may comprise more than one computing device.
  • the controller 110 may be two or more processors.
  • the memory 120 is a recording medium readable by the processor 110 and may include a non-volatile mass storage device such as a RAM, a ROM, and a disk drive.
  • the memory 120 may store an operating system and at least one program or application code.
  • The memory 120 may store data for implementing a convolutional neural network-based object detection model for searching for each of the candidate regions CRO in each of the images 200, program code for training the object detection model, program code for evaluating the search performance of each of the candidate regions using the trained object detection model, program code for determining the singular regions based on the search performance of each of the candidate regions, and program code for generating an onboard map including the determined singular regions.
  • the DB 130 is a recording medium readable by the processor 110 , and may include a non-volatile large-capacity recording device such as a disk drive.
  • the DB 130 may store training images used for training the object detection model and test images input to the object detection model to evaluate the search performance of candidate regions. Both the training images and the test images may be included in the images 200 of FIG. 5 .
  • the images 200 may be stored in the DB 130 as they are without processing, or may be stored in a processed form suitable for training and testing an object detection model. For example, only a partial area having a preset resolution including the observation area OA among the images 200 may be stored in the DB 130 .
  • Accurate location information and shooting date and time information of the region appearing in each of the images 200 may be stored in association with that image.
  • the location information may be location coordinates of four corners of a region appearing in each of the images 200 .
  • the location information may include coordinates and directions of a central location of a region appearing in each of the images 200 , and a horizontal length and a vertical length of the corresponding region.
  • the position of the sun and pose information of the camera at the corresponding shooting date and time may be stored in the DB 130 instead of the shooting date and time of each of the images 200 .
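  • A hedged sketch of one record as it might be stored in the DB 130 for each image is given below: the location information (either four corner coordinates or a centre with extent and direction), together with the sun position and camera pose stored in place of the shooting date and time; the field names and units are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative record layout for one labeled image in the DB. Either corner_coords
# (option described above) or the centre/extent/heading fields are populated.

LatLon = Tuple[float, float]

@dataclass
class ImageRecord:
    image_path: str
    corner_coords: Optional[Tuple[LatLon, LatLon, LatLon, LatLon]]  # option 1: four corners
    center_coord: Optional[LatLon]                                  # option 2: centre ...
    extent_m: Optional[Tuple[float, float]]                         # ... width, height
    heading_deg: Optional[float]                                    # ... image direction
    sun_azimuth_deg: float
    sun_elevation_deg: float
    camera_position_m: Tuple[float, float, float]                   # camera pose: position
    camera_orientation_deg: Tuple[float, float, float]              # camera pose: roll, pitch, yaw

record = ImageRecord(
    image_path="oa1/synthetic_000123.png",
    corner_coords=None,
    center_coord=(-85.39, 0.02),       # example centre near observation area OA1
    extent_m=(16800.0, 16800.0),
    heading_deg=180.0,                 # flying due south along the landing trajectory
    sun_azimuth_deg=120.0,
    sun_elevation_deg=1.2,
    camera_position_m=(0.0, 0.0, 10_000.0),
    camera_orientation_deg=(0.0, -90.0, 0.0),
)
print(record.center_coord, record.sun_elevation_deg)
```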
  • the controller 110 may determine the observation area OA and a plurality of candidate areas (CRO of FIG. 6 ) within the observation area OA.
  • the controller 110 may label location information on each of the plurality of images 200 in which at least a part of the observation area OA is displayed.
  • The controller 110 can train a convolutional neural network-based artificial neural network, based on the plurality of images 200 labeled with location information, to search for each of the plurality of candidate regions CRO in each of the plurality of images 200.
  • The controller 110 may evaluate the search performance of each of the plurality of candidate regions CRO based on the trained artificial neural network.
  • The controller 110 may determine at least some of the plurality of candidate regions CRO as the singular regions SRO based on the search performance of each of the plurality of candidate regions CRO.
  • the controller 110 may generate an onboard map 300 of FIG. 5 including the singular areas SRO in the observation area OA.
  • the controller 110 may determine a landing point LS at which the lander 10 will land and a landing trajectory LT for landing at the landing point LS.
  • the observation area OA may be determined according to the landing trajectory LT.
  • the landing trajectory LT or the landing date may be determined based on the singular areas SRO in the observation area OA.
  • the control unit 110 will be described in more detail with reference to FIG. 8 .
  • FIG. 8 is an exemplary block diagram of a control unit according to embodiments of the present invention.
  • the controller 110 may include a data learner 111 , a data recognizer 112 , a performance map generator 113 , and an onboard map generator 114 .
  • the memory 120 may store an object detection model 121 based on a convolutional neural network.
  • the data learner 111 may learn a criterion for searching each of the candidate regions in each of the training images.
  • the data learning unit 111 may learn a criterion regarding which data to use in order to search for each of the candidate regions in each of the training images.
  • the data learner 111 may train the object detection model 121 using training images labeled with location information.
  • The data learning unit 111 can learn criteria for searching for each candidate region in each of the training images input to the object detection model 121 by performing machine learning on the object detection model 121 using training images labeled with location information.
  • the data learning unit 111 may select training images from images stored in the DB 130 .
  • the images stored in the DB 130 may be images obtained by at least partially capturing the observation area OA.
  • the images may include at least some of satellite images, surface-captured images, and synthetic images.
  • the data recognizer 112 may search for each of the candidate regions in each of the test images.
  • the data recognizer 112 may search for each of the candidate regions in each of the test images by using the pre-trained object detection model 121 .
  • the data recognizer 112 may output estimated image coordinates of each of the candidate regions in each of the test images. In each of the test images, the actual position of each of the candidate regions and the estimated image coordinates may be compared.
  • the data recognizer 112 may search for candidate regions in each of the test images by inputting test images to the object detection model 121 trained by the data learner 111 .
  • The data recognition unit 112 may post-process the output of the object detection model 121 so that the search performance of each of the candidate regions can be understood more intuitively.
  • a result value output by the object detection model 121 after receiving the test images may be used to update the object detection model 121 .
  • At least one of the data learning unit 111 and the data recognition unit 112 may be manufactured in the form of at least one hardware chip and mounted on a device.
  • For example, at least one of the data learning unit 111 and the data recognition unit 112 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as a part of an existing general-purpose processor (e.g., a CPU or application processor) or a graphics-dedicated processor (e.g., a GPU) and mounted on the various devices described above.
  • the data learning unit 111 and the data recognition unit 112 may be mounted on one device or may be mounted on separate devices, respectively.
  • one of the data learning unit 111 and the data recognition unit 112 may be included in the computing device 100 , and the other may be included in another computing device connected to the computing device 100 through a network.
  • In this case, the model information built by the data learning unit 111 may be provided to the data recognition unit 112 through a wired or wireless connection, and the data input to the data recognition unit 112 may be provided to the data learning unit 111 as additional training data.
  • At least one of the data learning unit 111 and the data recognition unit 112 may be implemented as a software module.
  • When implemented as a software module, the software module may be stored in a non-transitory computer readable recording medium (non-transitory computer readable media).
  • at least one software module may be provided by an operating system (OS) or may be provided by a predetermined application. A part of the at least one software module may be provided by an operating system (OS), and the other part may be provided by a predetermined application.
  • The performance map generator 113 may evaluate the search performance of each of the candidate regions using the object detection model 121.
  • The performance map generator 113 may generate a classification performance map and an accuracy performance map for each of the candidate regions in order to indicate the search performance of each of the candidate regions.
  • The onboard map generator 114 may determine singular regions having high search performance based on the search performance of each of the candidate regions.
  • The onboard map generator 114 may generate an onboard map including the singular regions.
  • FIG. 9 is a block diagram of the data learning unit 111 according to an embodiment of the present invention.
  • The data learning unit 111 may include a data acquisition unit 111-1, a preprocessing unit 111-2, a training data selection unit 111-3, a model learning unit 111-4, and a model evaluation unit 111-5.
  • the data acquisition unit 111-1 may acquire data necessary to search for each of the candidate regions in each of the images.
  • the data acquisition unit 111-1 may acquire training images from images showing a part of the observation area stored in the DB 130 .
  • the data acquisition unit 111-1 may acquire location information stored in relation to each of the training images.
  • the data acquisition unit 111-1 may acquire position information of the sun and/or pose information of the camera stored in relation to each of the training images.
  • the preprocessor 111 - 2 may preprocess the acquired data so that data necessary for searching each candidate region in each of the images may be used.
  • The preprocessing unit 111-2 may process the obtained data into a preset format so that the model learning unit 111-4 can train the object detection model 121 for searching for each candidate region in each of the images.
  • the preprocessor 111 - 2 may enlarge or reduce the training images.
  • the preprocessor 111 - 2 may reduce or enlarge the magnifications of the training images to the same preset magnification.
  • the landing site at which the lander will land and the landing trajectory for landing at the landing site may be determined. As the lander flies along the landing orbit, a camera for photographing the observation area and a location for photographing the observation area may also be determined.
  • the magnification of the surface image generated by photographing the observation area from the lander is predetermined.
  • The preprocessing unit 111-2 may reduce or enlarge the training image so that the magnification of the training image is the same as the magnification of the surface image.
  • the magnification of the image refers to the size (eg, the length or area of a square) of the observation area indicated by one pixel of the image.
  • Accordingly, the accuracy of the object detection model 121 can be maintained when it is used on the lander as well.
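  • The following sketch illustrates this magnification-matching step: a training image is resampled so that the surface size covered by one pixel matches that expected for the lander's surface image; the nearest-neighbour resampling and the example magnifications are assumptions for illustration.

```python
import numpy as np

# Resample an image so that one pixel covers the target number of metres of surface.
# Nearest-neighbour index mapping keeps the example free of external image libraries.

def match_magnification(image, image_m_per_px, target_m_per_px):
    """Resize `image` (H x W array) so one pixel covers `target_m_per_px` metres."""
    scale = image_m_per_px / target_m_per_px        # >1 enlarges, <1 reduces
    new_h = max(1, int(round(image.shape[0] * scale)))
    new_w = max(1, int(round(image.shape[1] * scale)))
    rows = np.minimum((np.arange(new_h) / scale).astype(int), image.shape[0] - 1)
    cols = np.minimum((np.arange(new_w) / scale).astype(int), image.shape[1] - 1)
    return image[np.ix_(rows, cols)]

# Example: a 1000 x 1000 satellite tile at 20 m/pixel, resampled to the roughly
# 40 m/pixel expected from the lander camera at 10 km altitude.
tile = np.random.default_rng(0).integers(0, 255, size=(1000, 1000), dtype=np.uint8)
resampled = match_magnification(tile, image_m_per_px=20.0, target_m_per_px=40.0)
print(resampled.shape)   # (500, 500)
```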
  • the preprocessor 111 - 2 may label each of the training images with location information.
  • the location information may include location coordinates of a photographing area displayed in each of the training images.
  • the preprocessor 111 - 2 may extract image coordinates of each of the candidate regions from each of the training images based on the coordinates of the location of the shooting region displayed in each of the training images.
  • the preprocessor 111 - 2 may label each training image with image coordinates of each of the candidate regions extracted from each training image.
  • Alternatively, the image coordinates of the candidate regions in each of the training images may be stored in the DB 130 in association with that training image, and the preprocessing unit 111-2 may label each training image with the image coordinates of each of the candidate regions.
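  • A minimal sketch of converting the labeled location coordinates of a candidate-region centre into image (pixel) coordinates is given below, assuming a north-up, axis-aligned image footprint described by its corner coordinates; a real descent image would require a full camera and terrain projection, so this linear mapping is only an illustration.

```python
# Convert a candidate-region centre given in map coordinates into pixel coordinates,
# assuming the image footprint is axis-aligned and described by its upper-left and
# lower-right map coordinates.

def map_to_pixel(candidate_xy_m, upper_left_m, lower_right_m, image_size_px):
    """candidate_xy_m, upper_left_m, lower_right_m: (east, north) map coordinates in metres."""
    width_px, height_px = image_size_px
    east_span = lower_right_m[0] - upper_left_m[0]
    north_span = upper_left_m[1] - lower_right_m[1]
    col = (candidate_xy_m[0] - upper_left_m[0]) / east_span * width_px
    row = (upper_left_m[1] - candidate_xy_m[1]) / north_span * height_px  # rows grow southward
    return col, row

# Example: a 416 x 416 image covering a 16.8 km x 16.8 km footprint.
ul = (-8400.0, 8400.0)
lr = (8400.0, -8400.0)
print(map_to_pixel((0.0, 0.0), ul, lr, (416, 416)))        # centre -> (208.0, 208.0)
print(map_to_pixel((-4200.0, 4200.0), ul, lr, (416, 416))) # north-west quadrant -> (104.0, 104.0)
```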
  • the preprocessor 111 - 2 may label the position of the sun with respect to the observation area corresponding to each of the training images and the pose of the camera for the observation area on each of the training images.
  • When the training image is a satellite image of the observation area, the position of the sun with respect to the observation area at the time the observation area was photographed and the pose of the camera that photographed the observation area may be labeled on each of the training images.
  • When the training image is a synthetic image generated by rendering, the position of the sun with respect to the observation area and the pose of the camera with respect to the observation area that were input to the rendering model may be labeled on each of the training images.
  • an identification number and image coordinates of each of the candidate regions in the first training image may be labeled in the first training image.
  • the position of the sun with respect to the observation area and the pose of the camera with respect to the observation area corresponding to the first training image may be labeled in the first training image.
  • parameters for specifying each of the candidate regions may be labeled in each of the training images together with an identification number of the candidate regions.
  • As in FIGS. 6B and 6C, when the candidate regions all have the same shape and size, the size and shape may not be labeled on each of the training images.
  • the labeling function of the preprocessor 111 - 2 may be automatically performed by the control unit 110 .
  • the labeling function of the preprocessor 111-2 may be performed by the controller 110 through a user's manual operation.
  • the learning data selection unit 111-3 may select data necessary for learning from among the pre-processed data.
  • the selected data may be provided to the model learning unit 111-4.
  • the learning data selector 111-3 may select data required for learning from among the preprocessed data according to a preset criterion for searching each of the candidate regions in each of the training images.
  • the preset criterion may be established through learning by the model learning unit 111-4, which will be described later.
  • the model learning unit 111-4 may learn a criterion for searching each of the candidate regions in each of the training images.
  • the model learning unit 111-4 may learn a criterion for which training data should be used in order to search for each of the candidate regions in each of the training images.
  • the object detection model 121 may be pre-built based on an artificial neural network.
  • the object detection model 121 may be pre-built by receiving basic learning data (eg, sample data, etc.).
  • the object detection model 121 may be constructed in consideration of the field of application of the recognition model, the purpose of learning, or the computer performance of the device.
  • the object detection model 121 may be, for example, a learning model based on a neural network, such as a Deep Neural Network (DNN), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or a Bidirectional Recurrent Deep Neural Network (BRDNN), but is not limited thereto.
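  • As a purely illustrative sketch of such a model, the following small CNN maps a surface image to a fixed number of (x, y, radius, confidence) predictions; PyTorch is assumed, and the layer sizes, input resolution, and maximum number of detections are arbitrary choices rather than the architecture of the object detection model 121:

```python
# Minimal CNN-based detector sketch (PyTorch assumed); not the disclosed architecture.
import torch
import torch.nn as nn

class CandidateRegionDetector(nn.Module):
    def __init__(self, max_detections: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Each detection is predicted as (x, y, radius, confidence); a per-detection
        # class score over candidate-region identification numbers could be added similarly.
        self.head = nn.Linear(64, max_detections * 4)
        self.max_detections = max_detections

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return self.head(feats).view(-1, self.max_detections, 4)

# Usage: a batch of two 256x256 grayscale surface images.
detector = CandidateRegionDetector()
out = detector(torch.randn(2, 1, 256, 256))   # shape: (2, max_detections, 4)
```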
  • the model learning unit 111-4 may train the object detection model 121 using, for example, a learning algorithm including error back-propagation or gradient descent.
  • the model learning unit 111-4 may train the object detection model 121 through supervised learning using, for example, learning data as an input value.
  • the model learning unit 111-4 may train the object detection model 121 through, for example, reinforcement learning using feedback on whether a result of searching each candidate region in each of the training images according to learning is correct.
  • the model learning unit 111-4 may define a loss function based on the result of the object detection model 121 receiving the training image and location information labeled on the training image.
  • candidate regions may be searched for in the first training image.
  • the result of the object detection model 121 may include an estimated identification number, an estimated image coordinate, and an estimated size of each of the candidate regions searched for in the first training image.
  • the result of the object detection model 121 may include estimation accuracy of each of the candidate regions searched for in the first training image.
  • the model learning unit 111-4 may output an estimation result having an estimation accuracy higher than a preset estimation accuracy.
  • the model learning unit 111-4 may output a preset number of estimation results based on estimation accuracy.
  • the loss function may be defined based on a difference between the position information labeled on the training image and the estimation result.
  • a loss function may be defined based on a position error and a size error of each of the candidate regions.
  • the position error may be a difference between the actual image coordinates labeled in each of the candidate regions and the image coordinates estimated for each of the candidate regions.
  • the size error may be a difference between an actual size determined for each of the candidate regions and a size estimated for each of the candidate regions.
  • the size error may be a difference between an actual radius of the candidate regions and a radius estimated for each of the candidate regions.
  • the estimation result of the object detection model 121 may include estimated image coordinates of candidate regions in the first training image.
  • the model learning unit 111-4 may train the object detection model 121 such that the difference between the image coordinates of the candidate regions labeled on the first training image and the estimated image coordinates of the candidate regions in the first training image is minimized.
  • the estimation result of the object detection model 121 may include estimated sizes of candidate regions in the first training image.
  • the model learner 111-4 may train the object detection model 121 to minimize the difference between the labeled sizes of the candidate regions and the estimated sizes of the candidate regions in the first training image.
  • the model learning unit 111-4 may repeat the learning of the object detection model 121 using the training images so that the loss function value is smaller than a preset reference value.
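  • The loss built from a position error and a size error, together with the repeat-until-threshold training loop, might look like the following sketch; PyTorch is assumed, and the equal weighting of the two error terms, the optimizer, and the reference value are assumptions rather than the disclosed configuration:

```python
# Sketch: loss combining position (image-coordinate) error and size (radius) error,
# and a training loop repeated until the loss falls below a preset reference value.
import torch

def detection_loss(pred_xy, pred_radius, label_xy, label_radius, size_weight=1.0):
    position_error = torch.mean((pred_xy - label_xy) ** 2)      # position error term
    size_error = torch.mean((pred_radius - label_radius) ** 2)  # size error term
    return position_error + size_weight * size_error

def train_until_threshold(model, loader, reference_value=1e-3, lr=1e-4, max_epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for images, label_xy, label_radius in loader:
            pred = model(images)                                # (batch, detections, 4)
            loss = detection_loss(pred[..., :2], pred[..., 2], label_xy, label_radius)
            optimizer.zero_grad()
            loss.backward()                                     # error back-propagation
            optimizer.step()                                    # gradient descent step
            epoch_loss += loss.item()
        if epoch_loss / max(len(loader), 1) < reference_value:  # preset reference value
            break
    return model
```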
  • the model learning unit 111-4 may store the learned object detection model 121 .
  • the model learning unit 111-4 may store the object detection model 121 in the memory 120 of the computing device 100 .
  • the memory 120 storing the object detection model 121 may also store, for example, commands or data related to at least one other component of the device.
  • the memory 120 may store software and/or programs.
  • a program may include, for example, a kernel, middleware, an application programming interface (API), and/or an application program (or "application"), and the like.
  • the model evaluation unit 111-5 may input verification data to the object detection model 121 and, when the recognition result output for the verification data does not satisfy a predetermined criterion, cause the model learning unit 111-4 to train again.
  • the verification data may be preset data for evaluating the learning model.
  • the verification data may be verification images selected from images stored in the DB 130 and labeled with location information.
  • the model evaluator 111 - 5 may calculate a loss function value based on an estimation result of the learned object detection model 121 with respect to the verification data.
  • the model evaluator 111 - 5 may evaluate the learned object detection model 121 as unsuitable when the loss function value exceeds a preset threshold.
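  • A companion sketch of the evaluation step, reusing the hypothetical detection_loss above: compute the loss on the verification data and treat the model as unsuitable (triggering retraining) when the loss exceeds a preset threshold:

```python
# Sketch: evaluate the learned model on verification data; returns False ("unsuitable",
# learn again) when the mean loss exceeds the preset threshold.
import torch

@torch.no_grad()
def evaluate_model(model, verification_loader, threshold=1e-3):
    total, batches = 0.0, 0
    for images, label_xy, label_radius in verification_loader:
        pred = model(images)
        total += detection_loss(pred[..., :2], pred[..., 2], label_xy, label_radius).item()
        batches += 1
    mean_loss = total / max(batches, 1)
    return mean_loss <= threshold
```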
  • at least one of the data acquisition unit 111-1, the preprocessor 111-2, the training data selection unit 111-3, the model learning unit 111-4, and the model evaluation unit 111-5 may be manufactured in the form of at least one hardware chip and mounted on a device.
  • for example, it may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or manufactured as part of an existing general-purpose processor (e.g., a CPU or application processor) or a dedicated graphics processor (e.g., a GPU), and mounted on the various devices described above.
  • the data acquisition unit 111-1, the preprocessor 111-2, the training data selection unit 111-3, the model learning unit 111-4, and the model evaluation unit 111-5 may be mounted on one device, or may be respectively mounted on separate devices. For example, some of them may be included in the device, and the rest may be included in a server.
  • at least one of the data acquisition unit 111-1, the preprocessor 111-2, the training data selection unit 111-3, the model learning unit 111-4, and the model evaluation unit 111-5 may be implemented as a software module. When implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium.
  • the at least one software module may be provided by an operating system (OS) or by a predetermined application; alternatively, a part of the at least one software module may be provided by the OS and the remaining part by a predetermined application.
  • FIG. 10 is a block diagram of the data recognition unit 112 according to an embodiment of the present invention.
  • the data recognition unit 112 includes a data acquisition unit 112-1, a preprocessor 112-2, a recognition data selection unit 112-3, and a recognition result providing unit ( 112-4) and a model updater 112-5.
  • the data acquisition unit 112-1 may acquire data necessary to search for each of the candidate regions in each of the images.
  • the data acquisition unit 112-1 may acquire test images from images showing a part of the observation area stored in the DB 130 .
  • the data acquisition unit 112-1 may acquire location information stored in relation to each of the test images.
  • the data acquisition unit 112-1 may acquire position information of the sun and/or pose information of the camera stored in relation to each of the test images.
  • the preprocessor 112 - 2 may preprocess the obtained data so that data necessary for searching each candidate region in each of the test images may be used.
  • the preprocessing unit 112-2 may process the acquired data into a preset format so that the recognition result providing unit 112-4 may search for each candidate region in each of the images using the acquired data.
  • the preprocessor 112 - 2 may enlarge or reduce the test images.
  • the preprocessor 112 - 2 may reduce or enlarge the magnifications of the test images to the same preset magnification.
  • the preprocessor 112 - 2 may reduce or enlarge the test image so that the magnification of the test image is the same as the magnification of the ground image.
  • the preprocessor 112 - 2 may label location information on each of the test images.
  • the location information may include location coordinates of a photographing area displayed in each of the test images.
  • the preprocessor 112 - 2 may extract image coordinates of each of the candidate regions from each of the test images based on the coordinates of the location of the photographing area displayed in each of the test images.
  • the preprocessor 112 - 2 may label each test image with image coordinates of each of the candidate regions extracted from each test image.
  • image coordinates of the candidate regions in each of the test images may be stored in the DB 130 in association with each of the test images, and the preprocessor 112-2 may label each test image with the image coordinates of each of the candidate regions.
  • the preprocessor 112 - 2 may label the position of the sun with respect to the observation area corresponding to each of the test images and the pose of the camera with respect to the observation area on each of the test images.
  • an identification number and image coordinates of each of the candidate regions in the first test image may be labeled on the first test image.
  • the position of the sun with respect to the observation area and the pose of the camera with respect to the observation area corresponding to the first test image may be labeled on the first test image.
  • the labeling function of the preprocessor 112 - 2 may be automatically performed by the control unit 110 .
  • the labeling function of the preprocessor 112-2 may be performed by the controller 110 through a user's manual operation.
  • the recognition data selection unit 112 - 3 may select data required to search for each of the candidate regions in each of the test images from among the pre-processed data.
  • the recognition data selector 112 - 3 may select some or all of the preprocessed data according to a preset criterion for searching each candidate region in each of the test images.
  • the recognition data selection unit 112-3 may select data according to a criterion set by the model learning unit 111-4 learning.
  • the recognition result providing unit 112 - 4 may search for each of the candidate regions in each of the test images by applying the selected data to the object detection model 121 .
  • the recognition result providing unit 112 - 4 may provide a recognition result according to the purpose of data recognition.
  • the recognition result providing unit 112 - 4 may apply the selected data to the object detection model 121 by using the data selected by the recognition data selecting unit 112 - 3 as an input value.
  • the recognition result may be determined by the object detection model 121 .
  • the identification number and location of each of the candidate regions searched for in each of the test images may be estimated, and the estimation result may be provided as text, an image, or a command (e.g., an application execution command, a module function execution command, etc.).
  • the estimation result of the object detection model 121 may include estimated image coordinates of the candidate regions in the first test image.
  • the estimation result of the object detection model 121 may include estimated sizes of the candidate regions in the first test image.
  • the first candidate region may be searched for in the first test image, and in this case, the estimation result of the object detection model 121 may include estimated image coordinates of the first candidate region in the first test image.
  • the first candidate area found in the first test image may or may not correspond to the first candidate area of the actual observation area. That is, the estimated image coordinates of the first candidate area may or may not fall within the first candidate area of the actual observation area.
  • the estimated image coordinates of the first candidate area may instead fall within another candidate area of the observation area or within the background area of the observation area.
  • in this case, the estimation result of the object detection model 121 corresponds to an erroneous estimation.
  • the second candidate region may not be searched for in the first test image, and in this case, information on the second candidate region may not be included in the estimation result of the object detection model 121 .
  • the third candidate region may be searched for at two positions in the first test image.
  • in this case, the estimation result of the object detection model 121 may include first estimated image coordinates and second estimated image coordinates of the third candidate region in the first test image.
  • At least one of two estimation results for the third candidate region of the object detection model 121 corresponds to an erroneous estimation.
  • the model updater 112 - 5 may update the object detection model 121 based on the evaluation of the recognition result provided by the recognition result provider 112 - 4 .
  • the model updating unit 112-5 may provide the recognition result provided by the recognition result providing unit 112-4 to the model learning unit 111-4 so that the model learning unit 111-4 can update the object detection model 121.
  • at least one of the data acquisition unit 112-1, the preprocessor 112-2, the recognition data selection unit 112-3, the recognition result providing unit 112-4, and the model update unit 112-5 in the data recognition unit 112 may be manufactured in the form of at least one hardware chip and mounted in a device.
  • at least one of the data acquisition unit 112-1, the preprocessor 112-2, the recognition data selection unit 112-3, the recognition result providing unit 112-4, and the model update unit 112-5 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as part of an existing general-purpose processor (e.g., a CPU or application processor) or a dedicated graphics processor (e.g., a GPU), and mounted on the various devices described above.
  • the data acquisition unit 112-1, the preprocessor 112-2, the recognition data selection unit 112-3, the recognition result providing unit 112-4, and the model update unit 112-5 may be mounted on one electronic device, or may be respectively mounted on separate electronic devices. For example, some of them may be included in the computing device 100, and the rest may be included in a server.
  • At least one of the data acquisition unit 112-1, the preprocessor 112-2, the recognition data selection unit 112-3, the recognition result providing unit 112-4, and the model update unit 112-5 may be implemented as a software module.
  • when at least one of the data acquisition unit 112-1, the preprocessor 112-2, the recognition data selection unit 112-3, the recognition result providing unit 112-4, and the model update unit 112-5 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium.
  • the at least one software module may be provided by an operating system (OS) or by a predetermined application; alternatively, a part of the at least one software module may be provided by the OS and the remaining part by a predetermined application.
  • the performance map generator 113 may evaluate the searched performance of each of the candidate regions by using the estimation result of the object detection model 121 .
  • the searched performance is evaluated for each of the candidate regions.
  • the searched performance of the first candidate region may be evaluated based on the classification performance and accuracy performance of the first candidate region.
  • the classification performance of the first candidate region is calculated using an estimation result obtained when test images are input to the object detection model 121 .
  • the estimation result includes estimated image coordinates of candidate regions in each of the test images.
  • the classification performance of the first candidate region may be calculated based on an F1 score computed from the estimated image coordinates of the first candidate region. The higher the F1 score, the better the classification performance.
  • the F1 score is calculated based on the precision and recall values.
  • the F1 score may be calculated as the harmonic mean of the precision value and the recall value.
  • the precision value can be calculated as the value obtained by dividing the true positives (TP) by the sum of the true positives and false positives: TP / (TP + FP).
  • the recall value can be calculated as the value obtained by dividing the true positives (TP) by the sum of the true positives and false negatives: TP / (TP + FN).
  • a true positive means that the estimated position of the candidate area estimated by the object detection model 121 is included in the corresponding candidate area of the actual observation area. For example, when the first candidate area is found in the test image, the estimated position of the first candidate area lies inside the first candidate area of the actual observation area.
  • a false positive means that the estimated position of the candidate area estimated by the object detection model 121 is included in another candidate area of the actual observation area. For example, when the first candidate area is found in the test image, the estimated position of the first candidate area lies inside a candidate area other than the first candidate area of the actual observation area (e.g., the second candidate area).
  • a false negative means that the estimated position of the candidate area estimated by the object detection model 121 is included in the background area of the actual observation area. For example, when the first candidate area is found in the test image, the estimated position of the first candidate area lies inside the background area of the actual observation area.
  • the classification performance of the first candidate region may be determined based on the number of true positives (TP), i.e., test images in which the estimated image coordinates of the first candidate region are included in the actual first candidate region of the observation region, the number of false negatives (FN), i.e., test images in which no estimated image coordinates of the first candidate region exist, and the number of false positives (FP) described above.
  • the classification performance of the first candidate region may be calculated as 2TP / (2TP + FN + FP).
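  • The classification performance of one candidate region can be sketched as below; the per-image data structures (an estimated position per test image, and circular candidate-region geometry) are assumptions made only for illustration:

```python
# Sketch: classification performance 2TP / (2TP + FN + FP) of one candidate region.
# `estimates`: test-image id -> estimated (x, y) of this region, or None if not found.
# `regions_per_image`: test-image id -> {region id: (cx, cy, radius)} in image coordinates.
import math

def classification_performance(estimates, regions_per_image, region_id):
    tp = fp = fn = 0
    for image_id, est in estimates.items():
        if est is None:                       # region not searched for in this test image
            fn += 1
            continue
        hit = None
        for rid, (cx, cy, r) in regions_per_image[image_id].items():
            if math.hypot(est[0] - cx, est[1] - cy) <= r:
                hit = rid
                break
        if hit == region_id:
            tp += 1                           # true positive: inside its own region
        elif hit is not None:
            fp += 1                           # false positive: inside another candidate region
        else:
            fn += 1                           # false negative: inside the background region
    denom = 2 * tp + fn + fp
    return 2 * tp / denom if denom else 0.0   # equals the F1 score
```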
  • the accuracy performance of the first candidate regions may be calculated based on estimated image coordinates of the first candidate regions.
  • the accuracy performance of the first candidate region may be calculated based on the difference in position between the image coordinates of the first candidate region labeled in each test image and the estimated image coordinates of the first candidate region in each test image.
  • the accuracy performance of the first candidate region may be calculated as a root mean square error of position differences calculated for each of the test images. The lower the accuracy performance value, the better the accuracy performance.
  • the accuracy performance of the first candidate region indicates how far the estimated position of the first candidate region is from the actual position of the first candidate region when the estimated image coordinates of the first candidate region are included in the actual first candidate region of the observation region. If the estimated image coordinates of the first candidate region are not included in the actual first candidate region of the observation region, the position difference between the estimated position and the actual position does not affect the accuracy performance of the first candidate region.
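  • A companion sketch of the accuracy performance: the root mean square of position errors, computed only over estimates that actually fall inside the first candidate region (same hypothetical data structures as above, with the region's labeled position taken to be its center):

```python
# Sketch: accuracy performance as the RMSE of position errors over true positives only;
# estimates falling outside the actual region do not affect the value.
import math

def accuracy_performance(estimates, regions_per_image, region_id):
    squared_errors = []
    for image_id, est in estimates.items():
        if est is None:
            continue
        cx, cy, r = regions_per_image[image_id][region_id]
        dist = math.hypot(est[0] - cx, est[1] - cy)
        if dist <= r:                          # only true positives contribute
            squared_errors.append(dist ** 2)
    if not squared_errors:
        return float("inf")                    # no usable estimates to score
    return math.sqrt(sum(squared_errors) / len(squared_errors))
```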
  • the performance map generator 113 may generate a classification performance map and an accuracy performance map of each of the candidate areas in order to indicate the searched performance of each of the candidate areas.
  • the classification performance map and the accuracy performance map may be referred to as a classification performance graph and an accuracy performance graph, respectively.
  • a classification performance map and an accuracy performance graph are generated for each of the candidate regions.
  • the classification performance map and the accuracy performance map of each of the candidate regions show how the classification performance and the accuracy performance of that candidate region change according to the azimuth and elevation of the sun with respect to the observation region, as labeled on each of the test images.
  • Each of the test images may be labeled with the position (azimuth and altitude) of the sun with respect to the observation area.
  • the classification performance map of the first candidate region may be a graph in which the classification performance value of the first candidate region, calculated for each of the test images, is displayed at the position whose x-coordinate and y-coordinate are determined by the azimuth and elevation of the sun labeled on that test image. In this graph, the x-axis corresponds to the azimuth of the sun and the y-axis corresponds to the elevation of the sun. Areas of the classification performance map in which no classification performance value is displayed may be filled in through interpolation using surrounding values.
  • the accuracy performance map of the first candidate region may be a graph in which the accuracy performance value of the first candidate region, calculated for each of the test images, is displayed at the position whose x-coordinate and y-coordinate are determined by the azimuth and elevation of the sun labeled on that test image. In this graph, the x-axis corresponds to the azimuth of the sun and the y-axis corresponds to the elevation of the sun. Areas of the accuracy performance map in which no accuracy performance value is displayed may be filled in through interpolation using surrounding values.
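  • Building such a performance map can be sketched as follows, with the sparse per-image values interpolated onto a regular azimuth-elevation grid; the use of NumPy/SciPy and the grid resolution are assumptions:

```python
# Sketch: performance map over sun azimuth (x-axis) and elevation (y-axis), with
# interpolation to fill grid cells where no value was measured (NumPy/SciPy assumed).
import numpy as np
from scipy.interpolate import griddata

def build_performance_map(azimuths_deg, elevations_deg, performance_values,
                          az_step=5.0, el_step=5.0):
    points = np.column_stack([azimuths_deg, elevations_deg])
    az_grid, el_grid = np.meshgrid(np.arange(0.0, 360.0, az_step),
                                   np.arange(0.0, 90.0, el_step))
    # Linear interpolation inside the measured region; nearest-neighbour fill elsewhere
    # so that the whole map can be displayed.
    linear = griddata(points, performance_values, (az_grid, el_grid), method="linear")
    nearest = griddata(points, performance_values, (az_grid, el_grid), method="nearest")
    return np.where(np.isnan(linear), nearest, linear), az_grid, el_grid
```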
  • the onboard map generator 114 may determine specific regions having high search performance based on the searched performance of each of the candidate regions.
  • the onboard map generator 114 may generate an onboard map including specific regions.
  • the onboard map generator 114 may select candidate regions having high search performance based on the classification performance map and the accuracy performance map of each of the candidate regions, and, when the number of selected candidate regions is equal to or greater than a preset reference number, determine the selected candidate regions as the specific regions.
  • for each candidate area, the range of sun positions with respect to which the corresponding candidate area can be searched well may be determined.
  • an area of the performance maps in which the classification performance and the accuracy performance satisfy their respective reference values may be regarded as an area in which the corresponding candidate area is searched well.
  • candidate regions for which this area is wide may be determined as singular regions.
  • the generated onboard map may be used semi-permanently.
  • the position of the sun with respect to the observation area is determined by the ephemeris.
  • the position (azimuth and elevation) of the sun determined by the ephemeris may be further displayed in the classification performance map and the accuracy performance map.
  • candidate regions having high classification performance and high accuracy performance on the corresponding date may be determined.
  • when the classification performance value is higher than the preset classification reference value, it may be understood that the classification performance is high.
  • when the accuracy performance value is lower than the preset accuracy reference value, it may be understood that the accuracy performance is high.
  • candidate regions having a classification performance value greater than a preset classification reference value and an accuracy performance value lower than a preset accuracy reference value may be selected.
  • the number of selected candidate regions may be compared with a preset reference number, and when the number of selected candidate regions is equal to or greater than the preset reference number, the selected candidate regions may be determined as singular regions.
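  • The selection step described above can be sketched as follows: read off each candidate region's classification and accuracy performance at the sun position given by the ephemeris for the planned landing date, apply the two reference values, and compare the count against the reference number; all values and names here are assumptions rather than disclosed parameters:

```python
# Sketch: selecting singular regions for a given landing date. The performance values
# at the ephemeris-given sun position and all threshold values are assumed inputs.
def select_singular_regions(performance_at_sun_position,
                            classification_reference=0.9,
                            accuracy_reference=3.0,      # e.g. pixels of RMSE
                            reference_number=4):
    """performance_at_sun_position: region id -> (classification value, accuracy value)."""
    selected = [
        region_id
        for region_id, (cls_val, acc_val) in performance_at_sun_position.items()
        if cls_val > classification_reference and acc_val < accuracy_reference
    ]
    if len(selected) >= reference_number:
        return selected                       # these become the singular regions
    return None                               # too few: change the observation area or candidates
```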
  • the onboard map generator 114 may generate an onboard map including the determined specific regions.
  • when the number of selected candidate areas is smaller than the preset reference number, it may be determined that no singular areas are detected for the previously determined configuration of candidate areas of the observation area, and the observation area may be changed or the configuration of the candidate areas may be changed.
  • a preliminary landing date may be determined.
  • candidate regions having high classification performance and accuracy performance for the corresponding preliminary landing date may be selected.
  • the selected candidate regions may be determined as preliminary singular regions.
  • the onboard map generator 114 may generate a preliminary onboard map including the determined preliminary specific regions.
  • the onboard map including the singular regions generated by the onboard map generator 114 may be loaded in the memory of the lander of FIGS. 1 to 4 and used for optical navigation for accurately flying to the landing site.
  • the various embodiments described above may be implemented in the form of a computer program that can be executed through various components on a computer, and such a computer program may be recorded in a computer-readable medium.
  • the medium may be to continuously store the program executable by the computer, or to temporarily store the program for execution or download.
  • the medium may be any of various recording means or storage means in which a single piece of hardware or several pieces of hardware are combined; it is not limited to a medium directly connected to a particular computer system and may be distributed over a network.
  • examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and media configured to store program instructions, including ROM, RAM, flash memory, and the like.
  • examples of other media may include recording media or storage media managed by an app store that distributes applications, sites that supply or distribute various other software, and servers.
  • the term "unit" may refer to a hardware component such as a processor or a circuit, and/or a software component executed by a hardware component such as a processor.
  • the term "part" refers to components such as software components, object-oriented software components, class components, and task components, and may be implemented by processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Chemical & Material Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Various embodiments of the invention relate to a method for determining a specific area, a method for generating an onboard map, and a method for determining the direction of a lander. The method for determining a specific area is performed by a computing device and comprises the steps of: determining an observation region and a plurality of candidate areas within the observation region; labeling location information on each of a plurality of images in which at least a part of the observation region is shown; training an artificial neural network based on a convolutional neural network using the plurality of images labeled with the location information, such that each of the plurality of candidate areas can be searched for in each of the plurality of images; evaluating the searchability of each of the plurality of candidate areas based on the trained artificial neural network; and determining at least some of the plurality of candidate areas as specific areas based on the searchability of each of the plurality of candidate areas.
PCT/KR2019/018118 2019-12-18 2019-12-19 Procédé pour déterminer une zone spécifique pour une navigation optique sur la base d'un réseau de neurones artificiels, dispositif de génération de carte embarquée et procédé pour déterminer la direction de module atterrisseur WO2021125395A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190170209A KR102314038B1 (ko) 2019-12-18 2019-12-18 인공 신경망 기반으로 광학적 항법을 위하여 특이 영역을 결정하는 방법, 온보드 맵 생성 장치, 및 착륙선의 방향을 결정하는 방법
KR10-2019-0170209 2019-12-18

Publications (1)

Publication Number Publication Date
WO2021125395A1 true WO2021125395A1 (fr) 2021-06-24

Family

ID=76477467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/018118 WO2021125395A1 (fr) 2019-12-18 2019-12-19 Procédé pour déterminer une zone spécifique pour une navigation optique sur la base d'un réseau de neurones artificiels, dispositif de génération de carte embarquée et procédé pour déterminer la direction de module atterrisseur

Country Status (2)

Country Link
KR (1) KR102314038B1 (fr)
WO (1) WO2021125395A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102441675B1 (ko) * 2022-02-23 2022-09-08 (주)나라스페이스테크놀로지 위성 영상의 합성 방법 및 시스템
KR102600979B1 (ko) * 2022-04-29 2023-11-10 한국과학기술원 뎁스 퓨전 기술에서의 띠행렬 압축을 이용한 켤레기울기 가속 장치
KR102563758B1 (ko) 2022-12-30 2023-08-09 고려대학교 산학협력단 3차원 모델을 활용한 시멘틱 세그멘테이션 학습 데이터 생성 장치

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10546195B2 (en) * 2016-12-02 2020-01-28 Geostat Aerospace & Technology Inc. Methods and systems for automatic object detection from aerial imagery
JP6661522B2 (ja) * 2016-12-12 2020-03-11 株式会社日立製作所 衛星画像処理システム及び方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150136225A (ko) * 2014-05-26 2015-12-07 에스케이텔레콤 주식회사 관심객체 검출을 위한 관심영역 학습장치 및 방법
KR20170088202A (ko) * 2016-01-22 2017-08-01 서울시립대학교 산학협력단 이종의 위성영상 융합가능성 평가방법 및 그 장치
US20170248969A1 (en) * 2016-02-29 2017-08-31 Thinkware Corporation Method and system for providing route of unmanned air vehicle
KR20190087266A (ko) * 2018-01-15 2019-07-24 에스케이텔레콤 주식회사 자율주행을 위한 고정밀 지도의 업데이트 장치 및 방법
KR20190100708A (ko) * 2018-02-21 2019-08-29 부산대학교 산학협력단 기계학습을 이용하여 구조물의 크랙을 탐지하는 무인항공기

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113821057A (zh) * 2021-10-14 2021-12-21 哈尔滨工业大学 一种基于强化学习的行星软着陆控制方法及系统和存储介质
CN115272769A (zh) * 2022-08-10 2022-11-01 中国科学院地理科学与资源研究所 基于机器学习的月球撞击坑自动提取方法和装置

Also Published As

Publication number Publication date
KR102314038B1 (ko) 2021-10-19
KR20210078326A (ko) 2021-06-28

Similar Documents

Publication Publication Date Title
WO2021125395A1 (fr) Procédé pour déterminer une zone spécifique pour une navigation optique sur la base d'un réseau de neurones artificiels, dispositif de génération de carte embarquée et procédé pour déterminer la direction de module atterrisseur
WO2018124662A1 (fr) Procédé et dispositif électronique de commande de véhicule aérien sans pilote
WO2019132518A1 (fr) Dispositif d'acquisition d'image et son procédé de commande
WO2019151735A1 (fr) Procédé de gestion d'inspection visuelle et système d'inspection visuelle
WO2019017592A1 (fr) Dispositif électronique déplacé sur la base d'une distance par rapport à un objet externe et son procédé de commande
WO2017007166A1 (fr) Procédé et dispositif de génération d'image projetée et procédé de mappage de pixels d'image et de valeurs de profondeur
WO2022025441A1 (fr) Ensemble de capture d'image omnidirectionnelle et procédé exécuté par celui-ci
WO2016126083A1 (fr) Procédé, dispositif électronique et support d'enregistrement pour notifier des informations de situation environnante
WO2015199502A1 (fr) Appareil et procédé permettant de fournir un service d'interaction de réalité augmentée
WO2020046038A1 (fr) Robot et procédé de commande associé
WO2020171561A1 (fr) Appareil électronique et procédé de commande correspondant
WO2020189909A2 (fr) Système et procédé de mise en oeuvre d'une solution de gestion d'installation routière basée sur un système multi-capteurs 3d-vr
WO2023008791A1 (fr) Procédé d'acquisition de distance à au moins un objet situé dans une direction quelconque d'un objet mobile par réalisation d'une détection de proximité, et dispositif de traitement d'image l'utilisant
WO2016206107A1 (fr) Système et procédé de sélection d'un mode de fonctionnement d'une plate-forme mobile
WO2019047378A1 (fr) Procédé et dispositif de reconnaissance rapide de corps célestes et télescope
WO2023048380A1 (fr) Procédé d'acquisition de distance par rapport à au moins un objet positionné devant un corps mobile en utilisant une carte de profondeur de vue d'appareil de prise de vues, et appareil de traitement d'image l'utilisant
EP4320472A1 (fr) Dispositif et procédé de mise au point automatique prédite sur un objet
WO2018124500A1 (fr) Procédé et dispositif électronique pour fournir un résultat de reconnaissance d'objet
WO2019009624A1 (fr) Procédé et appareil de fourniture de services de carte mobile numérique pour une navigation en toute sécurité d'un véhicule aérien sans pilote
WO2023055033A1 (fr) Procédé et appareil pour l'amélioration de détails de texture d'images
WO2023063679A1 (fr) Dispositif et procédé de mise au point automatique prédite sur un objet
WO2022092451A1 (fr) Procédé de positionnement d'emplacement en intérieur utilisant un apprentissage profond
WO2020251151A1 (fr) Procédé et appareil d'estimation de la pose d'un utilisateur en utilisant un modèle virtuel d'espace tridimensionnel
WO2011055906A2 (fr) Procédé de reconnaissance de configuration d'étoiles et appareil de détection d'étoiles permettant de déterminer l'attitude d'un engin spatial
WO2021221333A1 (fr) Procédé pour prédire la position d'un robot en temps réel par l'intermédiaire d'informations cartographiques et d'une correspondance d'image, et robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19956745

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19956745

Country of ref document: EP

Kind code of ref document: A1