CN116628251A - Method, device, equipment and medium for searching a lunar surface safety area

Info

Publication number
CN116628251A
CN116628251A (application CN202310727200.6A)
Authority
CN
China
Prior art keywords
image
candidate
target
region
candidate landing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310727200.6A
Other languages
Chinese (zh)
Other versions
CN116628251B (en)
Inventor
徐云飞
朱飞虎
王立
华宝成
郑岩
张运方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CN202310727200.6A priority Critical patent/CN116628251B/en
Publication of CN116628251A publication Critical patent/CN116628251A/en
Application granted granted Critical
Publication of CN116628251B publication Critical patent/CN116628251B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/096 Transfer learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of image processing technologies, and in particular to a method, an apparatus, a device, and a medium for searching a lunar surface safety area. The method comprises the following steps: performing data fusion on a grayscale image and a point cloud image acquired in real time to generate a target image; inputting the target image into a pre-trained terrain segmentation model to obtain a segmented image marked with obstacle regions and non-obstacle regions; determining the centroid of each obstacle region in the segmented image, performing triangulation based on the centroids, and determining a plurality of candidate landing sites; for each candidate landing site, performing a square expansion search with the current candidate landing site as, respectively, a center and a vertex, to obtain a candidate region for that landing site; and determining, based on the grayscale image and the candidate regions, a safety coefficient for each candidate landing site, so as to determine a target safety area on the lunar surface. This scheme improves both the search efficiency and the safety coefficient of the target safety area.

Description

Method, device, equipment and medium for searching a lunar surface safety area
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to a method, a device, equipment and a medium for searching a lunar surface safety area.
Background
Vision-based navigation for spacecraft soft landing is increasingly becoming the main navigation means for lunar landing, Mars landing and other celestial-body exploration. Existing safe obstacle-avoidance algorithms need to build feature models of meteorite craters, rocks, slopes and the like, and both the search efficiency and the safety coefficient of the resulting safe area are low.
Therefore, a new lunar surface safety area searching method is needed.
Disclosure of Invention
To address the low search efficiency and low safety coefficient of existing lunar surface safety area search methods, embodiments of the present invention provide a lunar surface safety area search method, device, equipment and medium.
In a first aspect, an embodiment of the present invention provides a method for searching a lunar surface safety area, where the method includes:
generating a target image based on the gray level image and the point cloud image acquired in real time;
inputting the target image into a pre-trained terrain segmentation model to obtain a segmentation image marked with an obstacle region and a non-obstacle region; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network;
determining a centroid of each obstacle region in the segmented image to triangulate based on the centroid and determine a plurality of candidate landing sites;
for each of the candidate landing sites, performing: respectively taking the current candidate landing point as a center and a vertex, and performing square expansion search to obtain a candidate region of the current candidate landing point;
and determining a safety coefficient of each candidate landing site based on the gray level image and the candidate region to determine a target safety region of the lunar surface.
In a second aspect, an embodiment of the present invention further provides a search device for a lunar surface safety area, where the device includes:
the preprocessing unit is used for generating a target image at the current moment based on the gray level image and the point cloud image acquired in real time;
the segmentation unit is used for inputting the target image into a pre-trained terrain segmentation model to obtain a segmented image marked with an obstacle region and a non-obstacle region; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network;
a determining unit, configured to determine a centroid of each obstacle region in the segmented image, so as to determine a plurality of candidate landing points based on triangulation of the centroid;
a search unit, configured to perform, for each of the candidate landing sites: respectively taking the current candidate landing point as a center and a vertex, and performing square expansion search to obtain a candidate region of the current candidate landing point;
and the computing unit is used for determining the safety coefficient of each candidate landing point based on the gray level image and the candidate area so as to determine the target safety area of the lunar surface.
In a third aspect, an embodiment of the present invention further provides a computing device, including a memory and a processor, where the memory stores a computer program, and the processor implements a method according to any embodiment of the present specification when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform a method according to any of the embodiments of the present specification.
Embodiments of the invention provide a method, a device, equipment and a medium for searching a lunar surface safety area. First, data fusion is performed on a grayscale image and a point cloud image acquired in real time to generate a target image. The target image is then input into a pre-trained terrain segmentation model to obtain a segmented image marked with obstacle regions and non-obstacle regions; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network. Next, the centroid of each obstacle region in the segmented image is determined, triangulation is performed based on the centroids, and a plurality of candidate landing sites are determined. For each candidate landing site, a square expansion search is performed with the current candidate landing site as, respectively, a center and a vertex, to obtain a candidate region for that site. Finally, a safety coefficient for each candidate landing site is determined based on the grayscale image and the candidate regions, so as to determine a target safety area of the lunar surface. This scheme improves both the search efficiency and the safety coefficient of the target safety area.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; other drawings may be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of a method for searching a safe area on a lunar surface according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a square expanded search according to an embodiment of the present invention;
FIG. 3 is a hardware architecture diagram of a computing device according to one embodiment of the present invention;
fig. 4 is a block diagram of a search apparatus for a lunar surface safety area according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, embodiments of the present invention; all other embodiments obtained by those skilled in the art without inventive effort based on these embodiments fall within the scope of protection of the present invention.
As mentioned above, existing safe obstacle-avoidance algorithms need to build feature models of meteorite craters, rocks, slopes and the like, and both the search efficiency and the accuracy of the resulting safe area are low.
To solve these technical problems, the inventors considered that a three-dimensional point cloud image can compensate for the difficulty of recovering depth information from a two-dimensional image under changing illumination conditions. The two-dimensional grayscale image and the three-dimensional point cloud image (DEM image) acquired in real time are therefore fused to generate a target image, and the input of the lightweight terrain segmentation model is changed from RGB channels to grayscale and DEM channels in a set proportion, which improves the segmentation accuracy for obstacle and non-obstacle regions. The inventors further considered generating the terrain segmentation model by training a DeepLabV3+ segmentation network with a lightweight MobileNetV2 backbone, so as to improve segmentation efficiency. In addition, after the candidate landing sites are determined, a square expansion search is performed for each candidate landing site with that site as, respectively, a center and a vertex, so that candidate regions with larger safe areas can be found, further improving the safety coefficient of the target safety area.
Specific implementations of the above concepts are described below.
Referring to fig. 1, an embodiment of the present invention provides a method for searching a moon surface safety area, which includes:
Step 100, carrying out data fusion on a grayscale image and a point cloud image acquired in real time to generate a target image;
Step 102, inputting the target image into a pre-trained terrain segmentation model to obtain a segmentation image marked with an obstacle region and a non-obstacle region; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network;
Step 104, determining the centroid of each obstacle region in the segmented image, so as to perform triangulation based on the centroids and determine a plurality of candidate landing points;
Step 106, for each candidate landing site, executing: respectively taking the current candidate landing point as a center and a vertex, and performing square expansion search to obtain a candidate region of the current candidate landing point;
Step 108, determining the safety coefficient of each candidate landing site based on the grayscale image and the candidate region, to determine the target safety region of the lunar surface.
In the embodiment of the invention, data fusion is first carried out on a grayscale image and a point cloud image acquired in real time to generate a target image. The target image is then input into a pre-trained terrain segmentation model to obtain a segmented image marked with obstacle regions and non-obstacle regions; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network. Next, the centroid of each obstacle region in the segmented image is determined, triangulation is performed based on the centroids, and a plurality of candidate landing sites are determined. For each candidate landing site, a square expansion search is performed with the current candidate landing site as, respectively, a center and a vertex, to obtain a candidate region of the current candidate landing site. Finally, based on the grayscale image and the candidate regions, a safety coefficient for each candidate landing site is determined, so as to determine a target safety area of the lunar surface. This scheme improves both the search efficiency and the safety coefficient of the target safety area.
For step 100:
in some embodiments, step 100 may include:
for each gray level image and point cloud image acquired at each moment, performing:
acquiring a gray level image and a point cloud image acquired at the current moment;
converting the depth value in the point cloud image into a gray value to obtain a converted point cloud image;
and fusing the gray level image at the current moment with the converted point cloud image according to the set channel proportion to generate a target image at the current moment.
In this embodiment, the image capturing device on the lunar lander may be configured with a grayscale camera and a laser three-dimensional imaging sensor, so that a grayscale image and a point cloud image can be captured simultaneously at each moment. For each moment, the following is performed: the point cloud image acquired at the current moment is first converted into a depth image, and the depth values are converted into gray values of the corresponding pixel points, yielding the converted point cloud image. Because the resolution of the point cloud image is lower than that of the grayscale image, the point cloud image only plays an auxiliary role by providing depth information, while the extraction of edge and texture features of lunar surface obstacles mainly depends on the high-resolution grayscale image. The target image is therefore generated by fusing the two according to the ratio grayscale image layers : DEM image layers = 2 : 1. This effectively fuses the two-dimensional gray information with the three-dimensional depth information: edge texture, gray level and the like in the two-dimensional grayscale image serve as the main learning features, while the three-dimensional depth information serves as a supplement, compensating for the difficulty of accurately segmenting obstacle and non-obstacle regions when the grayscale image is overexposed or too dark, and thereby improving segmentation accuracy.
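A minimal sketch of this fusion step, assuming the point cloud has already been projected and resampled onto the grayscale image grid (a step the text does not spell out) and that the 2:1 ratio is realized as two gray channels plus one DEM channel; the function name is illustrative:

```python
import numpy as np

def fuse_gray_and_dem(gray: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Fuse a grayscale image with a depth (DEM) map at the 2:1 channel
    ratio described above. gray: (H, W) uint8; depth: (H, W) float depth
    values already resampled to the grayscale grid. Returns (H, W, 3)."""
    d = depth.astype(np.float64)
    # Convert depth values into gray values (0-255), as in the text.
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-9) * 255.0
    dem_gray = d.astype(np.uint8)
    # Stack two gray layers and one DEM layer: gray : DEM = 2 : 1.
    return np.dstack([gray, gray, dem_gray])
```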
For step 102:
in some embodiments, step 102 may include:
inputting a target image into the lightweight MobileNetV2 network in the encoder of a pre-trained terrain segmentation model to perform multi-scale feature extraction, obtaining a multi-scale first feature extraction graph; the lightweight MobileNetV2 network comprises 1 convolution layer, 7 bottleneck layers, 1 convolution layer and 1 average pooling layer which are connected in sequence;
respectively sending the first multi-scale feature extraction graph to a coding network in an encoder and a feature extraction layer in a decoder to respectively obtain a coding result graph and a second feature extraction graph;
the feature fusion module performs feature fusion on the multi-scale second feature extraction graph and the encoding result graph output by the feature extraction layer to obtain a feature fusion graph;
and a decoding module of the decoder decodes and partitions the feature fusion map to obtain partitioned images marked with the barrier region and the non-barrier region.
In this embodiment, the terrain segmentation model is generated by training a DeepLabV3+ segmentation network. The DeepLabV3+ network comprises an encoder and a decoder: the encoder consists of a feature extraction network followed by an encoding network, and the decoder consists of a feature extraction layer, a feature fusion module and a decoding module. To reduce the number of network parameters and speed up model computation, the feature extraction network in the encoder is a lightweight MobileNetV2 network comprising, connected in sequence, 1 convolution layer, 7 bottleneck layers, 1 convolution layer and 1 average pooling layer. However, although the lightweight MobileNetV2 network greatly reduces computation, its parameter count is still redundant for the local-feature detection task (as previously observed for spatially non-cooperative targets). The DeepLabV3+ network can therefore be further compressed by convolution channel compression: the number of convolution kernels in all convolution layers is uniformly multiplied by a reduction factor α (α ∈ (0, 1]), which shrinks the network and reduces the model's computation while preserving segmentation accuracy, making on-orbit real-time processing feasible. For example, for a D_F × D_F input feature map convolved with kernels of M channels, M output feature maps of size D_F' × D_F' are obtained; after introducing α, the number of kernel channels becomes αM, and αM output feature maps of size D_F' × D_F' are produced accordingly. The total computation is thus reduced from D_K × D_K × M × D_F × D_F + M × N × D_F × D_F to D_K × D_K × αM × D_F × D_F + αM × N × D_F × D_F.
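As a check on this arithmetic, a small sketch (function and variable names assumed for illustration) that evaluates the cost expression above before and after applying α:

```python
def separable_conv_cost(dk: int, m: int, n: int, df: int, alpha: float = 1.0) -> float:
    """Multiply-accumulate cost of one depthwise-separable convolution
    (D_K x D_K depthwise over alpha*M channels, then 1x1 pointwise to N
    channels) on a D_F x D_F feature map, per the cost expression above."""
    am = alpha * m
    return dk * dk * am * df * df + am * n * df * df

# Example: 3x3 depthwise + pointwise, M=32 -> N=64 channels on a 56x56 map.
full = separable_conv_cost(3, 32, 64, 56)             # alpha = 1 (uncompressed)
half = separable_conv_cost(3, 32, 64, 56, alpha=0.5)  # alpha = 0.5
print(half / full)                                    # 0.5: cost scales linearly with alpha
```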
In addition, the DeepLabV3+ segmentation network is trained with transfer learning: the constructed DeepLabV3+ network is first trained on the open-source large-scale ImageNet dataset (millions of images), and the resulting network parameters are then used as initialization parameters for migration training on the lunar surface image training set used in this method, which reduces overfitting of the terrain segmentation model and enhances generalization.
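A minimal PyTorch sketch of this transfer-learning initialization, under the assumption that torchvision's ImageNet-pretrained MobileNetV2 stands in for the encoder (torchvision provides no ready-made DeepLabV3+ with a MobileNetV2 backbone, so the DeepLabV3+ decoder is omitted here):

```python
import torch
import torchvision

# Initialize the encoder from ImageNet-pretrained weights (the open-source
# large dataset mentioned above), then fine-tune on the lunar-surface set.
backbone = torchvision.models.mobilenet_v2(
    weights=torchvision.models.MobileNet_V2_Weights.IMAGENET1K_V1
)
encoder = backbone.features          # reuse the pretrained feature extractor

# The first conv expects 3 input channels, which matches the
# 2:1 gray/DEM channel stacking used for the target image.
for p in encoder.parameters():
    p.requires_grad = True           # full fine-tuning rather than freezing

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
# ... training loop over the lunar-surface image training set goes here ...
```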
For step 104:
Computing candidate landing sites pixel by pixel over the segmented image obtained in step 102 would consume substantial computing resources and make the real-time requirement difficult to meet. The segmented image is therefore first modelled topologically, and the candidate landing sites are obtained on the basis of a triangular mesh.
Specifically, the centroid of each obstacle region may first be calculated as the arithmetic mean of its pixel coordinates:

x_c = (1/N_i) Σ x_o,  y_c = (1/N_i) Σ y_o,

where the sums run over all pixels (x_o, y_o) of the i-th obstacle region, P_obst(i)(x_c, y_c) is the centroid coordinate of that region, x_o and y_o are respectively the abscissa and the ordinate of each pixel point in the region, N_i is the number of pixels the region contains, and i is the number of the obstacle region.

In this embodiment, the terrain segmentation model segments the lunar terrain into obstacle regions and a non-obstacle region, each obstacle region being defined as one connected domain. The i-th connected domain (i.e. obstacle region) is described by the point set P_obst(i)(x, y) (i = 1, 2, ..., M), where M is the number of obstacle regions; the non-obstacle region is described by the point set P_flat(x, y), where (x, y) is the pixel position in the image.
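A short sketch of this centroid computation, assuming the segmented image has first been turned into a labelled connected-component map (e.g. via OpenCV's connectedComponents; the function name here is illustrative):

```python
import numpy as np

def obstacle_centroids(label_img: np.ndarray) -> list[tuple[float, float]]:
    """Centroid P_obst(i)(x_c, y_c) of every labelled obstacle region: the
    arithmetic mean of its pixel coordinates, per the formula above.
    label_img holds 0 for non-obstacle pixels and i = 1..M for region i."""
    centroids = []
    for i in range(1, int(label_img.max()) + 1):
        ys, xs = np.nonzero(label_img == i)   # pixels of region i
        if xs.size:                           # N_i > 0
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```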
Then, the step of "triangulating based on centroid, determining a number of candidate landing sites" may include:
generating a plurality of triangular areas by taking the centroid of each three barrier areas as an endpoint;
determining two triangular regions sharing the same edge as a target group, and executing for each target group: determining the minimum of the six interior angles of the quadrilateral formed by the two triangular regions when the shared edge is taken as its unique diagonal; if taking the other diagonal as the unique diagonal yields a larger minimum interior angle than the shared edge does, screening out the two triangles of the target group;
generating a circumcircle of each triangle area, judging whether the circumcircle of each triangle area contains the centroid of any other obstacle area, and if so, screening out the triangle area;
the outer centers of each triangle area remaining are found to determine each outer center as a candidate landing site.
In this embodiment, triangular regions are generated with the centroids of every three obstacle regions as vertices. For example, given 4 obstacle regions with centroids A_1, A_2, A_3 and A_4, four triangular regions can be formed: ΔA_1A_2A_3, ΔA_1A_3A_4, ΔA_1A_2A_4 and ΔA_2A_3A_4. The generated triangular regions are then screened twice. In the first screening, for the quadrilateral formed by each pair of adjacent triangular regions, if the minimum of the six interior angles obtained with the shared edge as the diagonal is smaller than the minimum obtained with the other diagonal, the two triangles are screened out. In the second screening, the circumcircle of each remaining triangular region is generated, and if the circumcircle contains the centroid of any other obstacle region, that triangular region is screened out. Finally, the circumcenter of each remaining triangular region is found and taken as a candidate landing site. The two screenings preliminarily identify triangular regions with larger safe extents, and by the property of the circumcenter, taking the circumcenter of each triangular region as the candidate landing site keeps it as far as possible from the triangle's three vertices, i.e. as far as possible from the obstacle regions. This improves both the selection speed and the reliability of the candidate landing sites.
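The two screenings match the classical Delaunay criteria (max-min interior angle via edge flips, and the empty-circumcircle test), and both hold by construction in a Delaunay triangulation, so a sketch of this step can lean on SciPy's Delaunay triangulation and compute each triangle's circumcenter directly; function and variable names here are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def candidate_landing_sites(centroids: np.ndarray) -> np.ndarray:
    """Candidate landing points from (N, 2) obstacle centroids, N >= 3:
    Delaunay-triangulate the centroids, then take each retained triangle's
    circumcenter, per the screening rules described above."""
    tri = Delaunay(centroids)
    sites = []
    for simplex in tri.simplices:
        a, b, c = centroids[simplex]
        # Circumcenter from the perpendicular-bisector linear system.
        d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
        if abs(d) < 1e-12:
            continue  # degenerate (collinear) triangle
        ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
        uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
        sites.append((ux, uy))
    return np.asarray(sites)
```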
For step 106:
in some embodiments, the step of "performing a square expansion search with the current candidate landing site as a center and a vertex, respectively, to obtain a candidate region of the current candidate landing site" may include:
generating a square with the side length as an initial set value by taking the current candidate landing point as the center; increasing the side length of the square, judging whether the point set in the square contains the pixel point of the barrier region, if not, continuing to increase the side length of the square until the point set in the square contains the pixel point of the barrier region, and taking the last side length value of the square as the target side length;
generating squares with side lengths as initial set values along a plurality of set directions by taking the current candidate landing points as vertexes, increasing the side length of each square, judging whether a point set in each square contains the pixel points of the barrier region, if not, continuing to increase the side length of the square until the point set in the square contains the pixel points of the barrier region, and taking the last side length value of the square as the target side length;
and comparing each target side length of the current candidate landing points, and determining the square with the maximum target side length as the candidate area of the current candidate landing points.
In this embodiment, referring to FIG. 2, a square with side length 1 is first generated with the current candidate landing site Q_s as its center. The side length r of the square is then increased: if the point set Q_A(x, y) within the square satisfies Q_A(x, y) ∈ P_flat(x, y), i.e. every point within the square belongs to the non-obstacle region, let r = r + 1 and continue judging whether the point set within the square contains pixels of an obstacle region; when the point set within the square does contain an obstacle-region pixel, the last side length value of the square is taken as the target side length.
Then, with the current candidate landing site Q_s as a vertex, 4 squares with side lengths equal to the initial set value are generated along the 4 directions shown in FIG. 2. The side length of each square is increased, and it is judged whether the point set within each square contains pixels of an obstacle region; if not, the side length continues to grow until the point set within the square contains an obstacle-region pixel, and the last side length value of the square is taken as the target side length.
In this embodiment, the candidate landing site Q_s yields 1 square from the center expansion search and 4 squares from the vertex expansion searches. The target side lengths of these 5 squares are compared, and the square with the maximum target side length is taken as the candidate region of the current candidate landing site Q_s.
It will be appreciated that, when a candidate landing site is used as a vertex, the square expansion search may be performed in more directions than the 4 listed in this embodiment. A sketch of the expansion search follows.
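A compact sketch of the expansion search on a boolean safety mask; the four vertex directions are assumed to be the four quadrants, since FIG. 2 is not reproduced here, and all names are illustrative:

```python
import numpy as np

def centre_square_side(flat_mask: np.ndarray, cx: int, cy: int) -> int:
    """Square expansion with (cx, cy) as the centre: grow the half-size r until
    the square leaves the image or touches an obstacle pixel, and return the
    side length of the last all-safe square. flat_mask is True on P_flat."""
    h, w = flat_mask.shape
    r = 0
    while True:
        r += 1
        if (cy - r < 0 or cx - r < 0 or cy + r >= h or cx + r >= w
                or not flat_mask[cy - r:cy + r + 1, cx - r:cx + r + 1].all()):
            return 2 * (r - 1) + 1

def vertex_square_side(flat_mask: np.ndarray, cx: int, cy: int,
                       dx: int, dy: int) -> int:
    """Square expansion with (cx, cy) as a fixed vertex, growing towards the
    quadrant (dx, dy) with dx, dy in {-1, +1}; returns the largest expansion s
    for which the (s+1) x (s+1) pixel square stays obstacle-free."""
    h, w = flat_mask.shape
    s = 0
    while True:
        nx, ny = cx + dx * (s + 1), cy + dy * (s + 1)
        if not (0 <= nx < w and 0 <= ny < h):
            return s
        x0, x1 = sorted((cx, nx))
        y0, y1 = sorted((cy, ny))
        if not flat_mask[y0:y1 + 1, x0:x1 + 1].all():
            return s
        s += 1

def candidate_region_side(flat_mask: np.ndarray, cx: int, cy: int) -> int:
    """One centre search plus four vertex searches; keep the largest side."""
    sides = [centre_square_side(flat_mask, cx, cy)]
    for dx, dy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        sides.append(vertex_square_side(flat_mask, cx, cy, dx, dy))
    return max(sides)
```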
For step 108:
in some embodiments, the method comprises:
for each candidate landing site, performing: determining the safety coefficient of the current candidate landing point based on the target side length of the candidate area of the current candidate landing point, the maximum target side length of the candidate areas corresponding to all the candidate landing points and the mean square error of pixel values in the candidate areas of the current candidate landing point in the gray level image;
and determining a candidate area corresponding to the candidate landing points with the safety coefficient larger than the safety threshold as a target safety area of the lunar surface.
In this embodiment, the safety coefficient of the current candidate landing site may be calculated from the following quantities (the formula itself appears as an image in the source and is not reproduced here): prob, the safety coefficient of the current candidate landing point; r, the target side length of the candidate region of the current candidate landing point; r_max, the maximum target side length among the candidate regions of all candidate landing points; and σ, the mean square error of the pixel values, in the grayscale image, within the candidate region of the current candidate landing point.
In this embodiment, since the resolution of the point cloud image is low and that of the grayscale image is high, the mean square error of the pixel values within the candidate region of the current candidate landing site is computed from the gray values of the grayscale image rather than those of the target image, which improves the accuracy of the safety coefficient. Computing the safety coefficient from the candidate region together with the gray-value variation in the grayscale image favours regions that are both relatively large (a wide safety margin) and radiometrically flat (low gray-value variation), improving the safety and reliability of the target safety area.
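Because the combining formula is not reproduced in this text, the sketch below only wires its named inputs together; the final expression is an explicit assumption, not the patented one:

```python
import numpy as np

def safety_factor(gray: np.ndarray, x0: int, y0: int, side: int, r_max: float) -> float:
    """Illustrative safety-coefficient sketch. Only the inputs come from the
    text: r (target side length), r_max (largest target side length over all
    candidates) and sigma (spread of gray values inside the candidate region,
    called the mean square error in the text). The combining expression below
    is an assumption, NOT the patented formula, which is not reproduced here."""
    region = gray[y0:y0 + side, x0:x0 + side].astype(np.float64)
    sigma = region.std()                           # gray-level flatness
    return (side / r_max) / (1.0 + sigma / 255.0)  # larger + flatter -> safer (assumed)
```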
Finally, the candidate regions corresponding to candidate landing sites whose safety coefficient is greater than 0.8 are selected as target safety areas of the lunar surface, for operators to choose from. The highest safety coefficient calculated by this scheme can exceed 0.95, indicating extremely high safety and reliability.
As shown in fig. 3 and 4, an embodiment of the present invention provides a search device for a lunar surface safety area. The device embodiments may be implemented by software, or by hardware or a combination of hardware and software. In hardware terms, fig. 3 shows a hardware architecture diagram of the computing device in which the lunar surface safety area searching device of this embodiment resides; in addition to the processor, memory, network interface and nonvolatile memory shown in fig. 3, the computing device may generally include other hardware, such as a forwarding chip responsible for processing packets. Taking a software implementation as an example, as shown in fig. 4, the device in the logical sense is formed by the CPU of its computing device reading the corresponding computer program from the nonvolatile memory into memory and running it. The searching device for a lunar surface safety area provided by this embodiment comprises:
the preprocessing unit 401 is configured to perform data fusion on a gray image and a point cloud image acquired in real time, and generate a target image;
a segmentation unit 402, configured to input a target image into a pre-trained terrain segmentation model, to obtain a segmented image marked with an obstacle region and a non-obstacle region; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network;
a determining unit 403, configured to determine a centroid of each obstacle region in the segmented image, so as to perform triangulation based on the centroids, and determine a plurality of candidate landing sites;
a search unit 404, configured to perform, for each candidate landing site: respectively taking the current candidate landing point as a center and a vertex, and performing square expansion search to obtain a candidate region of the current candidate landing point;
a calculation unit 405 for determining a safety factor of each candidate landing site based on the gray scale image and the candidate region to determine a target safety region of the lunar surface.
In one embodiment of the present invention, the preprocessing unit 401 is configured to perform:
for each gray level image and point cloud image acquired at each moment, performing:
acquiring a gray level image and a point cloud image acquired at the current moment;
converting the depth value in the point cloud image into a gray value to obtain a converted point cloud image;
and fusing the gray level image at the current moment with the converted point cloud image according to the set channel proportion to generate a target image at the current moment.
In one embodiment of the present invention, the segmentation unit 402 is configured to perform:
inputting a target image into the lightweight MobileNetV2 network in the encoder of a pre-trained terrain segmentation model to perform multi-scale feature extraction, obtaining a multi-scale first feature extraction graph; the lightweight MobileNetV2 network comprises 1 convolution layer, 7 bottleneck layers, 1 convolution layer and 1 average pooling layer which are connected in sequence;
respectively sending the first multi-scale feature extraction graph to a coding network in an encoder and a feature extraction layer in a decoder to respectively obtain a coding result graph and a second feature extraction graph;
the feature fusion module performs feature fusion on the multi-scale second feature extraction graph and the encoding result graph output by the feature extraction layer to obtain a feature fusion graph;
and a decoding module of the decoder decodes and partitions the feature fusion map to obtain partitioned images marked with the barrier region and the non-barrier region.
In one embodiment of the present invention, the centroid of each obstacle region in the determining unit 403 may be calculated as x_c = (1/N_i) Σ x_o, y_c = (1/N_i) Σ y_o, wherein P_obst(i)(x_c, y_c) is the centroid coordinate of each obstacle region, x_o and y_o are respectively the abscissa and the ordinate of each pixel point in the obstacle region, N_i is the number of pixels contained in the obstacle region, and i is the number of the obstacle region.
In one embodiment of the present invention, the determining unit 403 is configured to, when performing triangulation based on centroids, determine a number of candidate landing sites:
generating a plurality of triangular areas by taking the centroid of each three barrier areas as an endpoint;
determining two triangular regions sharing the same edge as a target group, and executing for each target group: determining the minimum of the six interior angles of the quadrilateral formed by the two triangular regions when the shared edge is taken as its unique diagonal; if taking the other diagonal as the unique diagonal yields a larger minimum interior angle than the shared edge does, screening out the two triangles of the target group;
generating a circumcircle of each triangle area, judging whether the circumcircle of each triangle area contains the centroid of any other obstacle area, and if so, screening out the triangle area;
the outer centers of each triangle area remaining are found to determine each outer center as a candidate landing site.
In one embodiment of the present invention, the search unit 404 is configured to, when performing a square expansion search with the current candidate landing site as a center and a vertex, respectively, to obtain a candidate region of the current candidate landing site:
generating a square with the side length as an initial set value by taking the current candidate landing point as the center; increasing the side length of the square, judging whether the point set in the square contains the pixel point of the barrier region, if not, continuing to increase the side length of the square until the point set in the square contains the pixel point of the barrier region, and taking the last side length value of the square as the target side length;
generating squares with side lengths as initial set values along a plurality of set directions by taking the current candidate landing points as vertexes, increasing the side length of each square, judging whether a point set in each square contains the pixel points of the barrier region, if not, continuing to increase the side length of the square until the point set in the square contains the pixel points of the barrier region, and taking the last side length value of the square as the target side length;
and comparing each target side length of the current candidate landing points, and determining the square with the maximum target side length as the candidate area of the current candidate landing points.
In one embodiment of the present invention, the computing unit 405 is configured to perform:
for each candidate landing site, performing: determining the safety coefficient of the current candidate landing point based on the target side length of the candidate area of the current candidate landing point, the maximum target side length of the candidate areas corresponding to all the candidate landing points and the mean square error of pixel values in the candidate areas of the current candidate landing point in the gray level image;
and determining a candidate area corresponding to the candidate landing points with the safety coefficient larger than the safety threshold as a target safety area of the lunar surface.
It will be appreciated that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on a search apparatus for a safe area of the lunar surface. In other embodiments of the invention, a lunar surface safety area searching apparatus may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The information interaction between the modules in the device and their execution processes are based on the same conception as the method embodiments of the present invention; for specific content, refer to the description of the method embodiments, which is not repeated here.
The embodiment of the invention also provides a computing device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the searching method of the moon surface safety area in any embodiment of the invention when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program, and the computer program when executed by a processor causes the processor to execute the method for searching the moon surface safety area in any embodiment of the invention.
Specifically, a storage medium storing software program code that implements the functions of any of the above embodiments may be provided to a system or apparatus, and the computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of the storage medium for providing the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer by a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out from the storage medium may be written into a memory provided in an expansion board inserted into the computer, or into a memory provided in an expansion module connected to the computer, and a CPU or the like mounted on the expansion board or expansion module may then perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: various media in which program code may be stored, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for searching a lunar surface safety area, comprising:
carrying out data fusion on the gray level image and the point cloud image acquired in real time to generate a target image;
inputting the target image into a pre-trained terrain segmentation model to obtain a segmentation image marked with an obstacle region and a non-obstacle region; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network;
determining a centroid of each obstacle region in the segmented image to triangulate based on the centroid and determine a plurality of candidate landing sites;
for each of the candidate landing sites, performing: respectively taking the current candidate landing point as a center and a vertex, and performing square expansion search to obtain a candidate region of the current candidate landing point;
and determining a safety coefficient of each candidate landing site based on the gray level image and the candidate region to determine a target safety region of the lunar surface.
2. The method of claim 1, wherein the data fusing the gray scale image and the point cloud image acquired in real time to generate the target image comprises:
for each gray level image and point cloud image acquired at each moment, performing:
acquiring a gray level image and a point cloud image acquired at the current moment;
converting the depth value in the point cloud image into a gray value to obtain a converted point cloud image;
and fusing the gray level image at the current moment with the converted point cloud image according to the set channel proportion to generate a target image at the current moment.
3. The method of claim 1, wherein said inputting the target image into a pre-trained terrain segmentation model to obtain segmented images labeled with obstacle and non-obstacle regions comprises:
inputting the target image into the lightweight MobileNetV2 network in the encoder of a pre-trained terrain segmentation model to perform multi-scale feature extraction, obtaining a multi-scale first feature extraction graph; the lightweight MobileNetV2 network comprises 1 convolution layer, 7 bottleneck layers, 1 convolution layer and 1 average pooling layer which are connected in sequence;
respectively sending the first multi-scale feature extraction graph to a coding network in an encoder and a feature extraction layer in a decoder to respectively obtain a coding result graph and a second feature extraction graph;
the feature fusion module performs feature fusion on the multi-scale second feature extraction graph output by the feature extraction layer and the coding result graph to obtain a feature fusion graph;
and a decoding module of the decoder decodes and partitions the feature fusion map to obtain partitioned images marked with barrier areas and non-barrier areas.
4. The method of claim 1, wherein the centroid of each obstacle region is calculated as x_c = (1/N_i) Σ x_o, y_c = (1/N_i) Σ y_o, wherein P_obst(i)(x_c, y_c) is the centroid coordinate of each obstacle region, x_o and y_o are respectively the abscissa and the ordinate of each pixel point in the obstacle region, N_i is the number of pixels contained in the obstacle region, and i is the number of the obstacle region.
5. The method of claim 1, wherein the determining a number of candidate landing sites based on the triangulation of the centroid comprises:
generating a plurality of triangular areas by taking the centroid of each three barrier areas as an endpoint;
determining two triangular regions sharing the same edge as a target group, and executing for each target group: determining the minimum of the six interior angles of the quadrilateral formed by the two triangular regions when the shared edge is taken as its unique diagonal; if taking the other diagonal as the unique diagonal yields a larger minimum interior angle than the shared edge does, screening out the two triangles of the target group;
generating a circumcircle of each triangle area, judging whether the circumcircle of each triangle area contains the centroid of any other obstacle area, and if so, screening out the triangle area;
the outer centers of each triangle area remaining are found to determine each outer center as a candidate landing site.
6. The method according to any one of claims 1-5, wherein performing a square expansion search with the current candidate landing site as a center and a vertex, respectively, to obtain a candidate region of the current candidate landing site comprises:
generating a square with the side length as an initial set value by taking the current candidate landing point as the center; increasing the side length of the square, judging whether the point set in the square contains the pixel point of the barrier region, if not, continuing to increase the side length of the square until the point set in the square contains the pixel point of the barrier region, and taking the last side length value of the square as the target side length;
generating squares with side lengths as initial set values along a plurality of set directions by taking the current candidate landing points as vertexes, increasing the side length of each square, judging whether a point set in each square contains the pixel points of the barrier region, if not, continuing to increase the side length of the square until the point set in the square contains the pixel points of the barrier region, and taking the last side length value of the square as the target side length;
and comparing each target side length of the current candidate landing points, and determining the square with the maximum target side length as the candidate area of the current candidate landing points.
7. The method of claim 6, wherein the determining a safety factor for each of the candidate landing sites based on the grayscale image and the candidate regions to determine a target safety region of a lunar surface comprises:
for each of the candidate landing sites, performing: determining the safety coefficient of the current candidate landing point based on the target side length of the candidate area of the current candidate landing point, the maximum target side length of the candidate areas corresponding to all the candidate landing points and the mean square error of pixel values in the candidate area of the current candidate landing point in the gray level image;
and determining the candidate area corresponding to the candidate landing point with the safety coefficient larger than the safety threshold as a target safety area of the lunar surface.
8. A lunar surface safety area searching apparatus, comprising:
the preprocessing unit is used for carrying out data fusion on the gray level image and the point cloud image acquired in real time to generate a target image;
the segmentation unit is used for inputting the target image into a pre-trained terrain segmentation model to obtain a segmented image marked with an obstacle region and a non-obstacle region; the terrain segmentation model is generated based on DeepLabV3+ segmentation network training, and the feature extraction network in the encoder of the DeepLabV3+ segmentation network is a lightweight MobileNetV2 network;
a determining unit, configured to determine a centroid of each obstacle region in the segmented image, so as to determine a plurality of candidate landing points based on triangulation of the centroid;
a search unit, configured to perform, for each of the candidate landing sites: respectively taking the current candidate landing point as a center and a vertex, and performing square expansion search to obtain a candidate region of the current candidate landing point;
and the computing unit is used for determining the safety coefficient of each candidate landing point based on the gray level image and the candidate area so as to determine the target safety area of the lunar surface.
9. A computing device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the method of any of claims 1-7 when the computer program is executed.
10. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
CN202310727200.6A 2023-06-19 2023-06-19 Method, device, equipment and medium for searching a lunar surface safety area Active CN116628251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310727200.6A CN116628251B (en) Method, device, equipment and medium for searching a lunar surface safety area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310727200.6A CN116628251B (en) Method, device, equipment and medium for searching a lunar surface safety area

Publications (2)

Publication Number Publication Date
CN116628251A (en) 2023-08-22
CN116628251B CN116628251B (en) 2023-11-03

Family

ID=87602593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310727200.6A Active Method, device, equipment and medium for searching a lunar surface safety area

Country Status (1)

Country Link
CN (1) CN116628251B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103662091A (en) * 2013-12-13 2014-03-26 北京控制工程研究所 High-precision safe landing guiding method based on relative navigation
CN104103070A (en) * 2014-05-26 2014-10-15 北京控制工程研究所 Landing point selecting method based on optical images
US20200159765A1 (en) * 2018-11-21 2020-05-21 Google Llc Performing image search using content labels

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Wangwang et al., "Design and verification of autonomous obstacle-avoidance technology for the Tianwen-1 probe Mars landing", Journal of Astronautics, vol. 43, no. 1, pp. 46-55 *
Xu Yunfei et al., "Design of a lightweight feature-fusion network for local feature recognition of non-cooperative targets", Infrared and Laser Engineering, vol. 49, no. 7, pp. 1-7 *
Zheng Zhihui; Wang Bo; Zhou Zhiqiang; Gao Zhifeng, "Research on scene matching methods for autonomous precise soft landing on the lunar surface", Transactions of Beijing Institute of Technology, no. 02, pp. 172-177 *

Also Published As

Publication number Publication date
CN116628251B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN109447994B (en) Remote sensing image segmentation method combining complete residual error and feature fusion
Zorzi et al. Polyworld: Polygonal building extraction with graph neural networks in satellite images
CN110276269B (en) Remote sensing image target detection method based on attention mechanism
AU2016201908B2 (en) Joint depth estimation and semantic labeling of a single image
Carozza et al. Markerless vision‐based augmented reality for urban planning
US7983474B2 (en) Geospatial modeling system and related method using multiple sources of geographic information
Wang et al. Modeling indoor spaces using decomposition and reconstruction of structural elements
US20190188856A1 (en) Systems and methods for block based edgel detection with false edge elimination
CN102804231A (en) Piecewise planar reconstruction of three-dimensional scenes
CN112347550A (en) Coupling type indoor three-dimensional semantic graph building and modeling method
KR20200027888A (en) Learning method, learning device for detecting lane using lane model and test method, test device using the same
Toriya et al. SAR2OPT: Image alignment between multi-modal images using generative adversarial networks
Axelsson et al. Roof type classification using deep convolutional neural networks on low resolution photogrammetric point clouds from aerial imagery
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN116883588A (en) Method and system for quickly reconstructing three-dimensional point cloud under large scene
CN111402429B (en) Scale reduction and three-dimensional reconstruction method, system, storage medium and equipment
CN116628251B (en) Method, device, equipment and medium for searching moon surface safety area
Feng et al. Multi-scale building maps from aerial imagery
KR101927861B1 (en) Method and apparatus for removing noise based on mathematical morphology from geometric data of 3d space
Sayed et al. Point clouds reduction model based on 3D feature extraction
CN114648757A (en) Three-dimensional target detection method and device
Jinghui et al. Building extraction in urban areas from satellite images using GIS data as prior information
Hensel et al. Building Roof Vectorization with PPGNET
Luo et al. A Deep Cross-Modal Fusion Network for Road Extraction With High-Resolution Imagery and LiDAR Data
Uysal et al. 3d modeling of historical doger caravansaries by digital photogrammetry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant