CN106898008A - Rock detection method and device - Google Patents
- Publication number
- CN106898008A CN106898008A CN201710118126.2A CN201710118126A CN106898008A CN 106898008 A CN106898008 A CN 106898008A CN 201710118126 A CN201710118126 A CN 201710118126A CN 106898008 A CN106898008 A CN 106898008A
- Authority
- CN
- China
- Prior art keywords
- saliency
- optical imagery
- depth image
- rock
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a rock detection method and device, applied in the field of image processing, for the detection of rocks on the Martian surface. The method includes: obtaining the optical image and the depth image of a Martian surface region to be measured; registering the optical image and the depth image; obtaining the saliency map of the registered optical image and the saliency map of the registered depth image respectively; fusing the optical-image saliency map and the depth-image saliency map into a fused saliency map; and determining the rock positions in the fused saliency map, thereby determining the rock positions in the region to be measured. The scheme computes saliency maps from the registered optical image and depth image of the Martian surface region to be measured, fuses them, and takes the positions of the salient parts of the fused saliency map as rock positions. It does not rely on rock shadow information, and the detection result is accurate.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a rock detection method and device.
Background art
During the landing of a Mars probe, obstacles such as impact craters and rocks must be evaded. Compared with impact craters, rocks are densely distributed and small in volume. Golombek's research shows that a certain number of rocks are scattered around the Mars Pathfinder landing site, their heights being about 1/2 of their base diameters. In the European Space Agency (ESA) 2018 Mars exploration mission (Exobiology on Mars, ExoMars) and the US National Aeronautics and Space Administration (NASA) Mars 2020 mission, the distribution of rocks is a significant consideration in determining a preferred landing point; the detection of rock regions on the Martian surface is therefore an important part of obstacle detection.
Existing rock detection methods mainly include the following. First, methods combining rock shadows with rock contour extraction; the premise of such methods is that the rocks cast obvious shadows. Second, multiresolution analysis of the landing zone, which uses multi-threshold gray-level segmentation, marks rocks with a clustering algorithm, and builds a topological map of the rock distribution; however, this method relies on the illumination characteristics of the landing zone, and whether it remains effective when illumination conditions change has not been studied further. Third, detection methods based on gray-level histograms, which segment rock obstacle regions out of the image but require manually set thresholds and adapt poorly. The above methods rely mainly on rock shadow information: when the shadows are not obvious, detection performance suffers, and the methods are strongly affected by illumination conditions.
Summary of the invention
In view of this, embodiments of the invention provide a rock detection method and device. Saliency maps are computed separately from the registered optical image and depth image of a region to be measured and then fused; the rock positions are determined in the fused saliency map and thus in the region to be measured, allowing a space probe to avoid rock regions when landing. This addresses the prior-art problems that detecting rock positions depends on rock shadow information and is strongly affected by illumination conditions.
To achieve the above objects, the technical solution adopted by the invention is as follows:
A rock detection method, applied to the detection of rocks on the Martian surface, the method including: obtaining the optical image and the depth image of a Martian surface region to be measured; registering the optical image and the depth image; obtaining the saliency map of the registered optical image and the saliency map of the registered depth image respectively; fusing the optical-image saliency map and the depth-image saliency map into a fused saliency map; and determining the rock positions in the fused saliency map, thereby determining the rock positions in the region to be measured.
A rock detection device, applied to the detection of rocks on the Martian surface, the device including: an image acquisition module for obtaining the optical image and the depth image of a Martian surface region to be measured; a registration module for registering the optical image and the depth image; a saliency map acquisition module for obtaining the saliency map of the registered optical image and the saliency map of the registered depth image respectively; a fusion module for fusing the optical-image saliency map and the depth-image saliency map into a fused saliency map; and a rock detection module for determining the rock positions in the fused saliency map, thereby determining the rock positions in the region to be measured.
In the rock detection method and device provided by embodiments of the invention, after the optical image and depth image of the Martian surface region to be measured are obtained and registered, the rocks in the region exhibit salient features in the acquired images. The saliency map of the registered optical image and the saliency map of the registered depth image are obtained separately and fused into a fused saliency map, from which the rock positions can be determined. Rock detection is thus realized without relying on rock shadow information, and the influence of illumination conditions on detection is reduced.
To make the above objects, features and advantages of the invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To make the purpose, technical solutions and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative work fall within the scope of protection of the invention.
Fig. 1 shows a block diagram of the computer provided by a preferred embodiment of the invention;
Fig. 2 shows a flowchart of the rock detection method provided by the first embodiment of the invention;
Fig. 3 shows a flowchart of part of the steps of the rock detection method provided by the first embodiment of the invention;
Fig. 4 shows a schematic diagram of an optical image provided by the first embodiment of the invention;
Fig. 5 shows a schematic diagram of the depth image, provided by the first embodiment of the invention, corresponding to the optical image of Fig. 4;
Fig. 6 shows the range of the registered region after the depth image of Fig. 5 is registered to the optical image of Fig. 4;
Fig. 7 shows a schematic diagram of another optical image provided by the first embodiment of the invention;
Fig. 8 shows a schematic diagram of the depth image, provided by the first embodiment of the invention, corresponding to the optical image of Fig. 7;
Fig. 9 shows the saliency map of the optical image shown in Fig. 7;
Fig. 10 shows the saliency map of the depth image shown in Fig. 8;
Fig. 11 shows the fused saliency map obtained by fusing Fig. 9 with Fig. 10;
Fig. 12 shows a functional block diagram of the rock detection device provided by the second embodiment of the invention;
Fig. 13 shows a functional block diagram of part of the modules of the rock detection device provided by the second embodiment of the invention.
Specific embodiment
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. The components of the embodiments of the invention, as generally described and illustrated in the accompanying drawings herein, can be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments of the invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative work fall within the scope of protection of the invention.
It should be noted that similar labels and letters represent similar items in the following figures; therefore, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the invention, the terms "first", "second", etc. are used only to distinguish one description from another and are not to be understood as indicating or implying relative importance.
Fig. 1 is a block diagram of the computer 100 provided by a preferred embodiment of the invention; the computer can be installed on a Mars probe for Mars landing. The computer 100 includes a rock detection device 200, a memory 101, a storage controller 102, a processor 103, a peripheral interface 104, an input-output unit 105, a display unit 106 and others.
The memory 101, storage controller 102, processor 103, peripheral interface 104, input-output unit 105 and display unit 106 are directly or indirectly electrically connected with each other to realize the transmission or interaction of data. For example, these elements can be electrically connected with each other through one or more communication buses or signal lines. The rock detection device 200 includes at least one software function module that can be stored in the memory 101 in the form of software or firmware. The processor 103 is used to execute the executable modules stored in the memory 101, such as the software function modules or computer programs included in the rock detection device 200.
The memory 101 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc. The memory 101 is used to store programs, and the processor 103 executes the programs after receiving execution instructions. The methods performed by the server/computer defined by the flows disclosed in any embodiment of the invention can be applied in, or realized by, the processor 103.
The processor 103 may be an integrated circuit chip with signal processing capability. The processor 103 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and it can realize or execute the methods, steps and logic diagrams disclosed in the embodiments of the invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The peripheral interface 104 couples various input/output devices to the processor 103 and the memory 101. For example, image acquisition equipment such as a camera is coupled to the processor 103 and the memory 101 so that the images captured by the image acquisition equipment can be obtained. In some embodiments, the peripheral interface 104, the processor 103 and the storage controller 102 can be realized in a single chip; in some other examples, they can each be realized by an independent chip.
The input-output unit 105 provides the user with input data to realize interaction between the user and the computer. The input-output unit may be, but is not limited to, a mouse, a keyboard, etc.
The display unit 106 provides an interactive interface (such as a user interface) between the computer and the user, or is used to display image data to the user. In this embodiment, the display unit can be a liquid crystal display or a touch display. A touch display can be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, which means the touch display can sense touch operations produced simultaneously at one or more positions on it and hand the sensed touch operations over to the processor for calculation and processing.
It should be understood that the structure shown in Fig. 1 is only illustrative; the computer 100 may also include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 can be realized by hardware, software or a combination thereof.
First embodiment
Fig. 2 shows a flowchart of the rock detection method provided by an embodiment of the invention, applied to the detection of rocks on the Martian surface, so that before landing a Mars probe can select the best landing point according to whether rocks are present on the surface and where they are. Referring to Fig. 2, the method includes:
Step S110: obtain the optical image and the depth image of the Martian surface region to be measured.
The processor obtains the optical image and depth image of the region to be measured; these can be obtained by corresponding image acquisition equipment. For example, the optical image of the region is captured by an optical camera, and the depth image is obtained by a range-measuring device such as a lidar. Of course, the optical image and depth image can also be obtained by other image acquisition equipment — for example, the depth image by an RGB-D sensor, or both the optical image and the depth image by a Kinect sensor — which is not limiting in this embodiment.
Step S120: register the optical image and the depth image.
Through registration, the two images are unified in the same coordinate system, so that the coordinates of the same geographical position in the different images can be matched with each other.
In this embodiment, the registration mode can be registering the depth image to the optical image, registering the optical image to the depth image, or registering both the depth image and the optical image into a certain fixed coordinate system, without limitation.
Preferably, because the field of view of the depth image is usually smaller than that of the optical image, in this embodiment the depth image is registered to the optical image — that is, the depth image coordinates are transformed into the coordinate region of the optical image — and this case is described in detail as an example.
In the registration process, the transformation relation between the depth image and the optical image is first determined, and then the optical image and the depth image are transformed into the same coordinate system according to the transformation relation. Specifically, referring to Fig. 3, the registration process includes:
Step S121: determine the first coordinates of multiple mark points in the optical image and their second coordinates in the depth image.
The multiple mark points can be multiple different geographical position points in the region to be measured, and the first coordinates and second coordinates are the image coordinates in the corresponding images. For example, designate some object in the region to be measured as a marker and take multiple corner points of the marker as mark points; the coordinates of these corner points in the optical image are determined as the first coordinates of the corresponding corner points, and their coordinates in the depth image as the second coordinates of the corresponding corner points.
Step S122: calculate the conversion parameters between the first coordinates and the second coordinates.
According to the transformation model between the first coordinates and the second coordinates, the conversion parameters between them are calculated, thereby determining the coordinate transformation parameters between the optical image and the depth image.
Specifically, considering factors such as image scaling, rotation and translation, in this embodiment the transformation model between the first coordinates and the second coordinates is established as:

[x11; y11] = s·[cosθ −sinθ; sinθ cosθ]·[x10; y10] + [tx; ty]        (1)

(semicolons separate matrix rows), where (x10, y10) are the coordinates of the point P0 on the depth image corresponding to an arbitrary point P of the region to be measured, and (x11, y11) are the coordinates of the corresponding point P1 on the optical image; that is, P0 and P1 are mutually corresponding coordinates. s is the scale factor, θ is the rotation angle, and tx and ty are the translations in the x direction and the y direction respectively. It can be understood that the x direction is the direction of coordinate x10 and the y direction is the direction of coordinate y10.
Expanding formula (1) gives:

x11 = x10·s·cosθ − y10·s·sinθ + tx
y11 = x10·s·sinθ + y10·s·cosθ + ty        (2)

Rearranging formula (2) into matrix form gives:

[x11; y11] = [x10 −y10 1 0; y10 x10 0 1]·[s·cosθ; s·sinθ; tx; ty]        (3)

For convenience of calculation, set s·cosθ = a1, s·sinθ = a2, tx = a3, ty = a4; formula (3) then becomes:

[x11; y11] = [x10 −y10 1 0; y10 x10 0 1]·[a1; a2; a3; a4]        (4)

Let the coordinates of the point Q0 on the depth image corresponding to another point Q of the region to be measured be (x20, y20), and the coordinates of the corresponding point Q1 on the optical image be (x21, y21); then, according to formula (4):

[x21; y21] = [x20 −y20 1 0; y20 x20 0 1]·[a1; a2; a3; a4]        (5)

Combining formulas (4) and (5) gives:

[x11; y11; x21; y21] = [x10 −y10 1 0; y10 x10 0 1; x20 −y20 1 0; y20 x20 0 1]·[a1; a2; a3; a4]        (6)

a1, a2, a3 and a4 are the conversion parameters between coordinates in the depth image and coordinates in the optical image, and can be calculated by inverting the 4×4 system (6):

[a1; a2; a3; a4] = [x10 −y10 1 0; y10 x10 0 1; x20 −y20 1 0; y20 x20 0 1]⁻¹·[x11; y11; x21; y21]        (7)

Taking two of the determined mark points as the points P and Q and substituting their coordinates in the depth image and in the optical image into formula (7) yields the values of the conversion parameters a1, a2, a3, a4.
To improve the accuracy of the conversion parameters, in this embodiment multiple mark points are selected to calculate them. If the coordinates of the n-th mark point in the depth image are (xn0, yn0) and its coordinates in the optical image are (xn1, yn1), then substituting the coordinates of all n mark points into formula (4) and stacking the resulting equations gives:

[x11; y11; …; xn1; yn1] = [x10 −y10 1 0; y10 x10 0 1; …; xn0 −yn0 1 0; yn0 xn0 0 1]·[a1; a2; a3; a4]        (8)

Solving formula (8) for the parameters a1, a2, a3, a4, for example by least squares, yields more accurate conversion parameters between the first coordinates and the second coordinates.
Of course, in this embodiment the transformation model between the first coordinates and the second coordinates is not limiting; other models — such as an affine, rigid, projective or nonlinear transformation — can be selected according to actual conditions.
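The least-squares solution of the stacked system (8) can be sketched as follows in Python with NumPy. This is an illustration only: the function name, the synthetic mark points and the use of `numpy.linalg.lstsq` are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def solve_similarity(depth_pts, optical_pts):
    """Least-squares estimate of a1..a4 (a1 = s*cos(theta), a2 = s*sin(theta),
    a3 = tx, a4 = ty) from matched mark points, following the stacked
    system (8): each point pair contributes one x-row and one y-row."""
    A, b = [], []
    for (x0, y0), (x1, y1) in zip(depth_pts, optical_pts):
        A.append([x0, -y0, 1, 0])  # row for the x-equation of formula (4)
        A.append([y0,  x0, 0, 1])  # row for the y-equation of formula (4)
        b.extend([x1, y1])
    params, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                                 rcond=None)
    return params  # a1, a2, a3, a4

# Synthetic example: points generated by s=2, theta=0, tx=10, ty=20.
depth = [(0, 0), (5, 0), (0, 5), (5, 5)]
optical = [(10, 20), (20, 20), (10, 30), (20, 30)]
a1, a2, a3, a4 = solve_similarity(depth, optical)
print(a1, a2, a3, a4)  # recovers s=2, theta=0, tx=10, ty=20
```

With more than two mark points the system is overdetermined and the least-squares solution averages out measurement noise, which is the accuracy gain the paragraph above describes.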
Step S123: transform the optical image and the depth image into the same coordinate system according to the conversion parameters.
It can be understood that, through the conversion parameters between the first coordinates and the second coordinates, any coordinate in the depth image can be transformed into the optical image. For the above conversion parameters a1, a2, a3, a4, the conversion formula is:

x1 = a1·x0 − a2·y0 + a3
y1 = a2·x0 + a1·y0 + a4        (9)

where (x0, y0) are the coordinates of any point in the depth image and (x1, y1) are the corresponding coordinates of that point in the optical image.
To transform the depth image into the coordinate region of the optical image, the coordinates of the four corner points of the depth image in the optical image can be calculated, thereby obtaining the coordinate range of the depth image in the coordinate region of the optical image.
For example, Fig. 4 shows the optical image of the acquired region to be measured, and Fig. 5 shows the corresponding depth image. As shown in Fig. 5, if the resolution of the depth image is m × n, then in the coordinate system of the depth image its four corner coordinates are: upper left (0, 0), upper right (n, 0), lower left (0, m), lower right (n, m). By formula (9), the coordinates of the four corners in the optical image are (a3, a4), (a1n+a3, a2n+a4), (−a2m+a3, a1m+a4) and (a1n−a2m+a3, a2n+a1m+a4) respectively, which determines the region in the optical image to which the depth image transforms — the region shown by box 001 in Fig. 6. It should be understood that the regions corresponding to Fig. 4 and Fig. 5 are not necessarily actual areas of the Martian surface; Figs. 4 and 5 merely illustrate image registration.
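The corner mapping above can be sketched directly from formula (9); the function names and the example parameter values (a pure translation) are illustrative assumptions:

```python
import numpy as np

def to_optical(x0, y0, a1, a2, a3, a4):
    """Formula (9): map a depth-image coordinate into the optical image."""
    return a1 * x0 - a2 * y0 + a3, a2 * x0 + a1 * y0 + a4

def depth_region_in_optical(m, n, a1, a2, a3, a4):
    """Optical-image coordinates of the four corners of an m-by-n depth
    image: upper-left, upper-right, lower-left, lower-right."""
    corners = [(0, 0), (n, 0), (0, m), (n, m)]
    return [to_optical(x, y, a1, a2, a3, a4) for x, y in corners]

# Pure translation a1=1, a2=0, a3=100, a4=50: a 240x320 depth image lands
# in the box [100, 420] x [50, 290] of the optical image.
print(depth_region_in_optical(240, 320, 1, 0, 100, 50))
# [(100, 50), (420, 50), (100, 290), (420, 290)]
```

The four results match the symbolic corners (a3, a4), (a1n+a3, a2n+a4), (−a2m+a3, a1m+a4) and (a1n−a2m+a3, a2n+a1m+a4) given above.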
Step S130: obtain the saliency map of the registered optical image and the saliency map of the registered depth image respectively.
Saliency maps are computed from the registered optical image and depth image; the optical-image saliency map is the saliency map of the optical image, and the depth-image saliency map is the saliency map of the depth image. In this embodiment the algorithm for obtaining the saliency maps is not limited. Preferably, the saliency map of the registered optical image is obtained by the graph-based visual saliency algorithm (Graph-Based Visual Saliency, GBVS), and the saliency map of the registered depth image is obtained by the shape-prior-based saliency detection method (context-based saliency and object-level shape prior, CBS).
Specifically, obtaining the optical-image saliency map by the graph-based visual saliency algorithm includes: first, extracting the feature vectors of the optical image, namely features of three aspects — color, brightness and orientation; then, generating the optical-image saliency map. The generation process can be: use the extracted feature vectors to generate activation maps; on this basis, use the properties of Markov random fields to build a Markov chain over the two-dimensional image, and obtain the saliency map by finding its equilibrium distribution. Fig. 7 and Fig. 8 respectively show the registered optical image and depth image of the same region to be measured, Fig. 7 being the optical image and Fig. 8 the depth image. The optical-image saliency map of Fig. 7 obtained by the graph-based visual saliency algorithm is shown in Fig. 9.
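The Markov-chain step can be illustrated with a much-reduced sketch on a single feature map: edge weights combine feature dissimilarity with a Gaussian distance falloff, the weights are row-normalized into a transition matrix, and the equilibrium distribution found by damped power iteration is read out as saliency. This is a simplified stand-in for the full GBVS algorithm (which uses multi-scale color, brightness and orientation features); all names and parameter values here are chosen for illustration only.

```python
import numpy as np

def markov_saliency(feature, sigma=2.0, damping=0.9, iters=200):
    """Toy graph-based saliency on one feature map: the stationary
    distribution of a Markov chain whose transitions favor moves toward
    dissimilar (hence salient) cells."""
    h, w = feature.shape
    n = h * w
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    f = feature.ravel().astype(float)
    diff = np.abs(f[:, None] - f[None, :])                   # dissimilarity
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)  # squared distance
    weights = diff * np.exp(-d2 / (2.0 * sigma ** 2)) + 1e-12
    P = weights / weights.sum(axis=1, keepdims=True)         # transition matrix
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Damping keeps the chain aperiodic so the iteration converges.
        pi = damping * (pi @ P) + (1.0 - damping) / n
    return (pi / pi.max()).reshape(h, w)

# A small bright "rock" on a dark background: cells dissimilar from their
# surroundings attract the chain's mass and come out salient.
img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0
sal = markov_saliency(img)
print(sal[3, 3] > sal[0, 0])  # the rock cell is more salient than background
```

The equilibrium distribution concentrates mass on nodes with strong incoming edges, i.e., cells that differ from many of their neighbors, which is the intuition behind the GBVS activation step.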
Obtaining the depth-image saliency map by the shape-prior-based saliency detection method includes: a. computing saliency from the color and position information of the depth image and generating a saliency map; b. extracting the shape features of the salient parts of the saliency map; c. segmenting the image — specifically, this can be done by building an energy function and performing the segmentation that minimizes it; d. regenerating the saliency map based on the segmentation result and returning to step b, until the energy function converges and the iteration stops. Fig. 10 shows the depth-image saliency map of Fig. 8 obtained by the shape-prior-based saliency detection method.
Step S140: fuse the optical-image saliency map and the depth-image saliency map into a fused saliency map.
Step S150: determine the rock positions in the fused saliency map, thereby determining the rock positions in the region to be measured.
As shown in Figs. 9 and 10, a saliency map is a gray-scale image. The lighter regions 002 and 003 in Fig. 9 and the lighter regions 004 and 005 in Fig. 10 all have salient features and are the regions corresponding to rocks; their pixel values differ significantly from those of the other, background regions.
In this embodiment, the optical-image saliency map and the depth-image saliency map can be fused by pixel-wise multiplication. Specifically, the pixel values at corresponding positions are multiplied: for example, the pixel value at coordinates (x1, y1) in the optical-image saliency map is multiplied by the pixel value at (x1, y1) in the depth-image saliency map, and the product is the pixel value at coordinates (x1, y1) in the fused image. The fused image is then binarized to obtain the fused saliency map. Since rock positions have salient features in the saliency maps, the rock positions in the fused saliency map can be determined.
As shown in Figs. 9 and 10, the salient parts of a saliency map are lighter in color and have larger pixel values, while the background parts are darker and have smaller pixel values. At positions where the pixel values are large in both the optical-image saliency map and the depth-image saliency map, the product of the corresponding pixel values is still relatively large; at positions where the pixel values are small in both maps, or small in one saliency map and large in the other, the product is relatively small. Therefore, after a binarization standard value is set according to actual conditions, the relatively small pixel values in the fused map — those below the binarization standard value — can be set to a first pixel value, and the pixel values above the binarization standard value to a second pixel value. The non-salient parts of the fused saliency map then take the first pixel value and are judged to be non-rock positions, while the salient parts take the second pixel value and are judged to be rock positions.
Fig. 11 shows the fused saliency map obtained by fusing and binarizing Figs. 9 and 10: the white regions are the salient parts and are judged to be rock positions, while the black regions are the background parts and are judged to be non-rock regions.
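The multiply-then-binarize fusion can be sketched as follows; the threshold value 0.25 and the toy saliency maps are arbitrary illustrations of a "binarization standard value set according to actual conditions":

```python
import numpy as np

def fuse_and_binarize(optical_sal, depth_sal, threshold=0.25):
    """Fuse two saliency maps by pixel-wise multiplication, then binarize:
    products below the threshold -> 0 (first pixel value, non-rock),
    products at or above the threshold -> 1 (second pixel value, rock)."""
    fused = optical_sal * depth_sal  # corresponding pixels multiplied
    return (fused >= threshold).astype(np.uint8)

# A region salient in both maps survives the product; a region salient in
# only one map is suppressed.
opt = np.array([[0.9, 0.8, 0.1],
                [0.9, 0.1, 0.1],
                [0.1, 0.1, 0.1]])
dep = np.array([[0.8, 0.9, 0.1],
                [0.1, 0.1, 0.1],
                [0.1, 0.1, 0.9]])
mask = fuse_and_binarize(opt, dep)
print(mask)
# [[1 1 0]
#  [0 0 0]
#  [0 0 0]]
```

Only the top-left pixels, salient in both inputs, exceed the threshold after multiplication; the pixel salient only in `opt` (row 2) and the one salient only in `dep` (bottom-right) are rejected, matching the behavior described above.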
In addition, in the fusion process, the optical-image saliency map and the depth-image saliency map can first be binarized: in the optical-image saliency map, the salient parts take the pixel value 1 after binarization and the other parts take the pixel value 0; likewise, in the depth-image saliency map, the salient parts take 1 and the other parts take 0. The binarized optical-image and depth-image saliency maps are then fused, so that a part that appears salient in one of the two saliency maps but non-salient in the other takes the pixel value 0 in the fused image and appears non-salient, while a part that appears salient in both the optical-image saliency map and the depth-image saliency map appears salient in the fused saliency map. This makes the salient parts of the extracted saliency map, and hence the rock positions determined from the fused saliency map, more accurate.
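The binarize-first fusion described above amounts to a logical AND of the two binary maps; a minimal sketch, with an assumed threshold of 0.5 and toy data:

```python
import numpy as np

def binarize_then_fuse(optical_sal, depth_sal, threshold=0.5):
    """Binarize each saliency map (salient -> 1, other -> 0), then keep
    only pixels salient in BOTH maps: a part salient in one map but
    non-salient in the other becomes 0 in the fused image."""
    opt_bin = (optical_sal >= threshold).astype(np.uint8)
    dep_bin = (depth_sal >= threshold).astype(np.uint8)
    return opt_bin & dep_bin  # element-wise logical AND

opt = np.array([[0.9, 0.8], [0.9, 0.1]])
dep = np.array([[0.8, 0.9], [0.1, 0.1]])
print(binarize_then_fuse(opt, dep))
# [[1 1]
#  [0 0]]
```

The pixel at (1, 0), salient only in the optical map, is zeroed by the AND, which is exactly the suppression of single-map detections that the paragraph above motivates.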
The rock positions in the fused saliency map are then mapped back to the region to be measured on the Martian surface, thereby determining the rock positions within the region to be measured.
In summary, in the rock detection method provided by this embodiment, saliency maps are computed after registering the optical image and depth image of the Martian surface region to be measured; the obtained saliency maps are then fused, and the regions corresponding to the salient parts of the resulting fused saliency map are taken as rock positions, thereby determining the rock positions in the region to be measured.
Of course, images containing other information about the Martian surface region to be measured can also be obtained — for example, an infrared image of the region — and the rock positions in the region can be determined from the saliency maps of at least two acquired images containing different information about the region.
Second embodiment
A kind of rock detection means 200 is present embodiments provided, the detection of Mars surface rock is applied to, Figure 12 is referred to,
Described device 200 includes:Image collection module 210, optical imagery and depth map for obtaining Mars earth's surface region to be measured
Picture;Registration module 220, for the optical imagery and the depth image to be carried out into registration;Notable figure acquisition module 230,
For the depth of the depth image after the optical imagery notable figure and registration that obtain the optical imagery after registration respectively
Degree image saliency map;Fusion Module 240, for the optical imagery notable figure and depth image notable figure fusion to be obtained
Notable figure must be merged;Rock detection module 250, for determine it is described fusion notable figure in rock position, so that it is determined that treating
Survey rock position in region.
Further, as shown in FIG. 13, the registration module 220 includes: a mark-point determination submodule 221, configured to determine first coordinates of a plurality of mark points in the optical image and their second coordinates in the depth image; a conversion-relation acquisition submodule 222, configured to calculate conversion parameters between the first coordinates and the second coordinates; and a transformation submodule 223, configured to transform the optical image and the depth image into the same coordinate system according to the conversion parameters.
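The mark-point registration described above can be sketched in a few lines. The snippet below is a minimal illustration, not the patent's implementation: it assumes the "conversion parameters" between the first and second coordinates form a 2D affine transform, estimated by least squares from the paired mark points.

```python
import numpy as np

def estimate_affine(first_coords, second_coords):
    """Estimate a 2x3 affine matrix M mapping depth-image (second)
    coordinates onto optical-image (first) coordinates by least squares.
    Needs at least three non-collinear mark-point pairs."""
    first = np.asarray(first_coords, dtype=float)
    second = np.asarray(second_coords, dtype=float)
    # Design matrix [x, y, 1] built from the depth-image coordinates.
    A = np.hstack([second, np.ones((len(second), 1))])
    # Solve A @ X ~= first; X is (3, 2), so M = X.T is the (2, 3) affine.
    X, *_ = np.linalg.lstsq(A, first, rcond=None)
    return X.T

def apply_affine(M, pts):
    """Map points through the affine transform M, i.e. bring depth-image
    coordinates into the optical image's coordinate system."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

With the transform in hand, the depth image would be resampled into the optical image's coordinate frame (for example by inverse warping), after which the two saliency maps are pixel-aligned.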
Specifically, in this embodiment, the saliency map acquisition module 230 may be configured to obtain the optical-image saliency map of the registered optical image by a graph-based visual saliency algorithm, and to obtain the depth-image saliency map of the registered depth image by a shape-prior-based saliency detection method.
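The two algorithms named here (graph-based visual saliency for the optical image, shape-prior saliency detection for the depth image) are fairly involved. Purely to show what a saliency map computation looks like, the toy stand-in below scores each pixel by its absolute deviation from the image mean and normalises the result to [0, 1]; it is not either of the named methods.

```python
import numpy as np

def contrast_saliency(img):
    """Toy global-contrast saliency map: pixels that deviate strongly
    from the image mean (e.g. bright rocks on dark regolith, or raised
    rocks in a depth map) score high. Returns values in [0, 1]."""
    img = np.asarray(img, dtype=float)
    sal = np.abs(img - img.mean())
    rng = sal.max() - sal.min()
    if rng == 0:          # perfectly uniform image: nothing is salient
        return np.zeros_like(sal)
    return (sal - sal.min()) / rng
```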
In addition, the rock detection module 250 may include: a pixel determination submodule, configured to determine the pixel values, in the fused saliency map, of the positions corresponding to rocks; and a position determination submodule, configured to determine that the regions of the fused saliency map having said pixel values are the positions of the rocks.
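The fusion rule and the decision rule are left open in the description; one simple, hypothetical realisation is a pixel-wise convex combination of the two saliency maps followed by a fixed threshold. The weight and threshold below are illustrative choices, not values specified by the patent.

```python
import numpy as np

def fuse_and_detect(sal_optical, sal_depth, w=0.5, thresh=0.5):
    """Fuse two registered saliency maps by a weighted average and
    flag pixels whose fused saliency exceeds `thresh` as rock."""
    fused = (w * np.asarray(sal_optical, dtype=float)
             + (1.0 - w) * np.asarray(sal_depth, dtype=float))
    rock_mask = fused > thresh
    return fused, rock_mask
```

The connected regions of `rock_mask` then give the rock positions in the fused saliency map and, because the images were registered, in the surface region under test.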
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of apparatuses, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention. It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily occur to a person familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (10)
1. A rock detection method, characterized in that it is applied to the detection of rocks on the Martian surface, the method comprising:
acquiring an optical image and a depth image of a Martian surface region under test;
registering the optical image and the depth image;
obtaining an optical-image saliency map of the registered optical image and a depth-image saliency map of the registered depth image, respectively;
fusing the optical-image saliency map and the depth-image saliency map to obtain a fused saliency map; and
determining rock positions in the fused saliency map, thereby determining rock positions in the region under test.
2. The method according to claim 1, characterized in that registering the optical image and the depth image comprises:
determining first coordinates of a plurality of mark points in the optical image and their second coordinates in the depth image;
calculating conversion parameters between the first coordinates and the second coordinates; and
transforming the optical image and the depth image into the same coordinate system according to the conversion parameters.
3. The method according to claim 1, characterized in that the optical-image saliency map of the registered optical image is obtained by a graph-based visual saliency algorithm.
4. The method according to claim 1, characterized in that the depth-image saliency map of the registered depth image is obtained by a shape-prior-based saliency detection method.
5. The method according to claim 1, characterized in that determining rock positions in the fused saliency map comprises:
determining the pixel values, in the fused saliency map, of the positions corresponding to rocks; and
determining that the regions of the fused saliency map having said pixel values are the positions of the rocks.
6. A rock detection apparatus, characterized in that it is applied to the detection of rocks on the Martian surface, the apparatus comprising:
an image acquisition module, configured to acquire an optical image and a depth image of a Martian surface region under test;
a registration module, configured to register the optical image and the depth image;
a saliency map acquisition module, configured to obtain an optical-image saliency map of the registered optical image and a depth-image saliency map of the registered depth image, respectively;
a fusion module, configured to fuse the optical-image saliency map and the depth-image saliency map to obtain a fused saliency map; and
a rock detection module, configured to determine rock positions in the fused saliency map, thereby determining rock positions in the region under test.
7. The apparatus according to claim 6, characterized in that the registration module comprises:
a mark-point determination submodule, configured to determine first coordinates of a plurality of mark points in the optical image and their second coordinates in the depth image;
a conversion-relation acquisition submodule, configured to calculate conversion parameters between the first coordinates and the second coordinates; and
a transformation submodule, configured to transform the optical image and the depth image into the same coordinate system according to the conversion parameters.
8. The apparatus according to claim 6, characterized in that the saliency map acquisition module is configured to obtain the optical-image saliency map of the registered optical image by a graph-based visual saliency algorithm.
9. The apparatus according to claim 6, characterized in that the saliency map acquisition module is configured to obtain the depth-image saliency map of the registered depth image by a shape-prior-based saliency detection method.
10. The apparatus according to claim 6, characterized in that the rock detection module comprises:
a pixel determination submodule, configured to determine the pixel values, in the fused saliency map, of the positions corresponding to rocks; and
a position determination submodule, configured to determine that the regions of the fused saliency map having said pixel values are the positions of the rocks.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710118126.2A CN106898008A (en) | 2017-03-01 | 2017-03-01 | Rock detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106898008A true CN106898008A (en) | 2017-06-27 |
Family
ID=59185304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710118126.2A Pending CN106898008A (en) | 2017-03-01 | 2017-03-01 | Rock detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106898008A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178429A (en) * | 2019-11-25 | 2020-05-19 | 上海联影智能医疗科技有限公司 | System and method for providing medical guidance using patient depth images |
CN112924911A (en) * | 2021-01-25 | 2021-06-08 | 上海东软医疗科技有限公司 | Method and device for acquiring coil information in magnetic resonance system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999892A (en) * | 2012-12-03 | 2013-03-27 | 东华大学 | Intelligent fusion method for depth images based on area shades and red green blue (RGB) images |
CN103871062A (en) * | 2014-03-18 | 2014-06-18 | 北京控制工程研究所 | Lunar surface rock detection method based on super-pixel description |
CN103927742A (en) * | 2014-03-21 | 2014-07-16 | 北京师范大学 | Global automatic registering and modeling method based on depth images |
CN104680545A (en) * | 2015-03-15 | 2015-06-03 | 西安电子科技大学 | Method for detecting existence of salient objects in optical images |
CN104700381A (en) * | 2015-03-13 | 2015-06-10 | 中国电子科技集团公司第二十八研究所 | Infrared and visible light image fusion method based on salient objects |
CN105869146A (en) * | 2016-03-22 | 2016-08-17 | 西安电子科技大学 | Saliency fusion-based SAR image change detection method |
CN105957054A (en) * | 2016-04-20 | 2016-09-21 | 北京航空航天大学 | Image change detecting method |
CN106373162A (en) * | 2015-07-22 | 2017-02-01 | 南京大学 | Salient object detection method based on saliency fusion and propagation |
Non-Patent Citations (6)
Title |
---|
CONRAD SPITERI et al.: "Structure augmented monocular saliency for planetary rovers", 《ROBOTICS AND AUTONOMOUS SYSTEMS》 *
JINGFAN GUO et al.: "Salient object detection in RGB-D image based on saliency fusion and propagation", 《ICIMCS '15》 *
DING MENG et al.: "Rock detection during probe landing based on passive images", 《OPTO-ELECTRONIC ENGINEERING》 *
XIA ZHENPING et al.: "Depth adjustment of stereoscopic display images based on visual saliency", 《ACTA OPTICA SINICA》 *
ZHANG WENGUO et al.: "Color fusion of SAR/infrared images based on sparse representation", 《SHIP ELECTRONIC ENGINEERING》 *
ZHANG ZHEN et al.: "An image registration algorithm based on generalized mutual information", 《TRANSDUCER AND MICROSYSTEM TECHNOLOGIES》 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170627 |