CN117333518A - Laser scanning image matching method, system and computer equipment

Laser scanning image matching method, system and computer equipment

Info

Publication number: CN117333518A
Authority: CN (China)
Application number: CN202311218484.2A
Other languages: Chinese (zh)
Inventor: 黄埔军
Applicant and current assignee: Yichun Yilian Printing Equipment Co ltd
Prior art keywords: matching, image, sub, scanned image, points
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/12 Edge-based segmentation


Abstract

The invention provides a laser scanning matching method, a laser scanning matching system and computer equipment. The method comprises the following steps: acquiring a corresponding scanned image through laser scanning; dividing the scanned image to obtain a plurality of sub-layer images; enhancing the gray scale of each sub-layer image through an enhancement model and, once the gray-scale enhancement is complete, re-stitching the sub-layer images to obtain the latest scanned image; extracting the features of the latest scanned image and registering those features with a registration algorithm to complete real-time matching of the scanned image. With this method and device, the matching algorithm achieves high precision and a high correct-matching rate while remaining efficient.

Description

Laser scanning image matching method, system and computer equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a laser scanning image matching method, system, and computer device.
Background
With the development of science and technology, image matching has become a very important technique in the field of image information processing. Image matching is the process of spatially aligning two images of the same scene recorded by two different (or identical) sensors in order to determine the relative translation between them; it is now required in roughly 40% of machine vision applications.
At present, owing to the influence of shooting time, shooting angle, changes in the natural environment, the use of multiple sensors, sensor defects, noise and the like, a captured image suffers gray-scale distortion and geometric distortion, and the image preprocessing process introduces further errors, so a certain degree of difference usually exists between the template image and the target image to be matched. Under these conditions, making the matching algorithm accurate, achieving a high correct-matching rate and keeping it fast become the central concerns.
Disclosure of Invention
Based on the above, the invention aims to provide a laser scanning matching method, a laser scanning matching system and computer equipment, so as to remedy the defects in the prior art.
In order to achieve the above object, the present invention provides a laser scanning matching method, the method comprising:
acquiring a corresponding scanning image through laser scanning;
dividing the scanning image to obtain a plurality of sub-layer images, and obtaining gray mapping intervals of the sub-layer images;
dynamically balancing and enhancing the gray value of the corresponding sub-layer image through a transformation function based on the gray mapping interval, and re-splicing each sub-layer image after gray enhancement is completed so as to obtain the latest scanning image;
extracting the features of the latest scanned image, and registering the features of the latest scanned image based on a registration algorithm to complete real-time matching of the scanned image.
Preferably, the expression of the transformation function is as follows:
X_j(h) = range(r_j) · D_j(h) · T,  j = 1, 2, …
wherein X_j(h) is the mapped gray value for gray value h in the sub-layer image, D_j(h) is the cumulative probability function of each gray level in the sub-layer image, range(r_j) is the gray-value range of the j-th sub-layer image, and T is a control parameter.
Preferably, the step of extracting the features of the latest scanned image includes:
and extracting the characteristics of the latest scanned image based on a SIFT algorithm.
Preferably, the step of registering the features of the latest scanned image based on a registration algorithm includes:
retrieving a salient region in the latest scanned image by a salient region detection algorithm, wherein the latest scanned image comprises a plurality of characteristic points;
setting a detection threshold value of each corresponding characteristic point through each salient region;
calculating hamming distances between the corresponding feature points and the nearest matching points and the next nearest matching points respectively based on the detection threshold;
and acquiring matching characteristic point pairs of the characteristic points according to the Hamming distance, and judging corresponding characteristic points based on the matching characteristic point pairs to register.
Preferably, the step of retrieving the salient region in the latest scanned image by a salient region detection algorithm includes:
removing texture areas of the latest scanned image by a Gaussian low-pass filtering and local entropy texture segmentation method to obtain three filtering gray maps divided by red, green and blue components;
dividing each filtering gray level map into a brightest region, a darkest region and a residual region by a clustering segmentation method, and comparing the brightness of the brightest region and the darkest region with the brightness of the residual region respectively;
selecting a target area based on a comparison result, wherein the target area is one of the brightest area and the darkest area;
and detecting corner points and edge points of the target area, taking part of the corner points and part of the edge points as salient points based on the detection results, and expanding the salient points into a salient area through mathematical morphology.
To achieve the above object, the present invention also provides a laser scanning matching system, the system comprising:
the first acquisition module is used for acquiring corresponding scanning images through laser scanning;
the second acquisition module is used for dividing the scanning image to obtain a plurality of sub-layer images and acquiring gray mapping intervals of the sub-layer images;
the obtaining module is used for carrying out dynamic balanced enhancement on the gray value of the corresponding sub-layer image based on the gray mapping interval and through a transformation function, and after the gray enhancement is completed, splicing the sub-layer images again to obtain the latest scanning image;
and the matching module is used for extracting the characteristics of the latest scanned image, and registering the characteristics of the latest scanned image based on a registration algorithm so as to complete real-time matching of the scanned image.
Preferably, the expression of the transformation function is as follows:
X_j(h) = range(r_j) · D_j(h) · T,  j = 1, 2, …
wherein X_j(h) is the mapped gray value for gray value h in the sub-layer image, D_j(h) is the cumulative probability function of each gray level in the sub-layer image, range(r_j) is the gray-value range of the j-th sub-layer image, and T is a control parameter.
Preferably, the matching module includes:
and the extraction unit is used for extracting the characteristics of the latest scanned image based on a SIFT algorithm.
Preferably, the matching module includes:
a retrieval unit configured to retrieve a salient region in the latest scanned image by a salient region detection algorithm, the latest scanned image including a plurality of feature points;
a setting unit, configured to set a detection threshold value of each corresponding feature point through each salient region;
a calculating unit, configured to calculate hamming distances between the corresponding feature points and the nearest matching point and the next nearest matching point, respectively, based on the detection threshold;
and the registration unit is used for acquiring matching characteristic point pairs of the characteristic points according to the Hamming distance, and judging corresponding characteristic points to register based on the matching characteristic point pairs.
In order to achieve the above object, the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the laser scanning matching method described above when executing the computer program.
The beneficial effects of the invention are as follows. The scanned image is divided into a plurality of sub-layer images, the gray scale of each sub-layer image is enhanced, and the sub-layers are then re-stitched to obtain the latest scanned image; this keeps the gray scale of the high-frequency region and the gray scale of the low-frequency region essentially consistent during enhancement, which improves the subsequent matching. The features of the latest scanned image are then extracted and registered with a registration algorithm to complete real-time matching of the scanned image. Through these steps, the matching algorithm attains high precision and a high correct-matching rate while remaining fast.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flowchart of a laser scanned image matching method according to a first embodiment of the present invention;
FIG. 2 is a block diagram of a laser scanning image matching system according to a second embodiment of the present invention;
fig. 3 is a schematic hardware structure of a computer device according to a third embodiment of the present invention.
The invention will be further described in the following detailed description in conjunction with the above-described figures.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Several embodiments of the invention are presented in the figures. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, a laser scanning image matching method according to a first embodiment of the present invention is shown, the method includes the following steps:
step S101, acquiring corresponding scanning images through laser scanning;
The laser scanning is performed by a three-dimensional laser scanning device to acquire the corresponding scanned image.
Step S102, dividing the scanned image to obtain a plurality of sub-layer images;
In order to ensure that the gray level of the high-frequency region stays consistent with the gray level of the low-frequency region while the scanned image obtained by three-dimensional laser scanning is enhanced, the scanned image is divided into a plurality of sub-layer images. It can be understood that, within a sub-layer image, the proportion of gray pixel points belonging to the low-frequency region increases markedly.
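By way of illustration, here is a minimal Python sketch of this division step, under the assumption (suggested by the local minima n_j appearing in the formulas below) that sub-layer boundaries are taken at valleys of the gray histogram; the function name, layer count and smoothing width are hypothetical:

```python
import numpy as np

def split_into_sublayers(gray, num_layers=3, smooth=5):
    """Split a grayscale scan into sub-layer images at histogram valleys."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # Smooth the histogram so tiny fluctuations are not mistaken for valleys.
    kernel = np.ones(smooth) / smooth
    hist = np.convolve(hist, kernel, mode="same")
    # Local minima: bins strictly lower than both neighbors.
    valleys = [i for i in range(1, 255)
               if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]
    # Keep the deepest valleys as the boundaries between sub-layers.
    valleys = sorted(sorted(valleys, key=lambda i: hist[i])[:num_layers - 1])
    bounds = [0] + valleys + [256]
    sublayers = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (gray >= lo) & (gray < hi)
        # Pixels outside the sub-layer's gray interval are zeroed out.
        sublayers.append((np.where(mask, gray, 0).astype(np.uint8), mask))
    return sublayers
```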
Step S103, enhancing the gray scale of each sub-layer image through an enhancement model, and re-splicing each sub-layer image after the gray scale enhancement is completed so as to obtain the latest scanning image;
the step of enhancing the gray scale of each sub-layer image through the enhancement model comprises the following steps:
acquiring a gray mapping interval of each sub-layer image;
it should be noted that, the formula for acquiring the gray mapping interval of each sub-layer image is as follows:
(r_j^min, r_j^max) = (0, T·n_j − T·n_{j−1})
wherein r_j^min and r_j^max are the minimum and maximum gray values of the gray mapping interval corresponding to the j-th sub-layer image, T is a control parameter set between 0 and 5, and n_j and n_{j−1} are the local minima in the j-th and (j−1)-th sub-layer images, respectively.
Dynamically balancing and enhancing the gray value of the corresponding sub-layer image through a transformation function based on the gray mapping interval;
wherein the expression of the transformation function is as follows:
X_j(h) = range(r_j) · D_j(h) · T,  j = 1, 2, …
wherein X_j(h) is the mapped gray value for gray value h in the sub-layer image, D_j(h) is the cumulative probability function of each gray level in the sub-layer image, range(r_j) is the gray-value range of the j-th sub-layer image within the gray mapping interval, and T is the control parameter.
It should be noted that the cumulative probability function of each gray level in a sub-layer image takes the standard cumulative form over the gray mapping interval:
D_j(h) = (1/m_j) · Σ_{k=0}^{h} m_j(k)
wherein m_j is the number of pixel points of the j-th sub-layer image lying in the gray mapping interval, m_j(h) is the number of those pixel points with gray value h, and the remaining symbols are the same as in the previous description and are not repeated here.
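A minimal sketch of this enhancement step, assuming the reconstructed mapping X_j(h) = range(r_j) · D_j(h) · T and taking each sub-layer's gray range directly from its pixels; the function names and the default value of T are hypothetical:

```python
import numpy as np

def equalize_sublayer(sub, mask, T=1.0):
    """Map each gray value h to range(r_j) * D_j(h) * T within one sub-layer."""
    vals = sub[mask]
    if vals.size == 0:
        return sub
    # D_j(h): cumulative probability of each gray level in the sub-layer.
    hist, _ = np.histogram(vals, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / vals.size
    # range(r_j): the gray-value range of this sub-layer.
    rng = float(vals.max()) - float(vals.min())
    lut = np.clip(rng * cdf * T, 0, 255).astype(np.uint8)
    out = sub.copy()
    out[mask] = lut[vals]
    return out

def restitch(sublayers, T=1.0):
    """Re-stitch the equalized sub-layers into the latest scanned image."""
    canvas = np.zeros_like(sublayers[0][0])
    for sub, mask in sublayers:
        canvas[mask] = equalize_sublayer(sub, mask, T)[mask]
    return canvas
```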
Step S104, extracting the characteristics of the latest scanned image, and registering the characteristics of the latest scanned image based on a registration algorithm to complete real-time matching of the scanned image.
The features of the latest scanned image are extracted with a SIFT algorithm; a salient region detection algorithm then searches for salient regions in the latest scanned image, and wrongly matched feature points are removed with a binary feature matching scheme based on fast nearest-neighbor detection together with a random sampling consistency model, so as to finish registration and realize matching.
Through the above steps, the scanned image is divided into a plurality of sub-layer images, the gray scale of each sub-layer image is enhanced, and the sub-layers are re-stitched into the latest scanned image, keeping the gray scale of the high-frequency region and the low-frequency region essentially consistent during enhancement and thereby improving the subsequent matching; the features of the latest scanned image are then extracted and registered with a registration algorithm to complete real-time matching of the scanned image.
In some of these embodiments, the step of extracting features of the most recently scanned image comprises:
and extracting the characteristics of the latest scanned image based on a SIFT algorithm.
The step of extracting the features of the latest scanned image based on the SIFT algorithm comprises the following steps:
identifying extreme points of the scale space of the latest scanned image, locking the positions of core feature points in the latest scanned image, and judging the directions of the core feature points of the latest scanned image;
in the SIFT algorithm, a Gaussian difference function is used for convolution with the center point of the latest scanned image so as to obtain a scale space extreme point of the latest scanned image; locking core feature points of the latest scanned image by using a sum function, and removing feature points with poor stability in the latest scanned image by using a Hessian matrix; and carrying out equalization processing on the gradient directions of the pixels in the areas of the core feature points by using the gradient histogram, wherein it can be understood that the peak value in the processed gradient histogram is the direction corresponding to the matching of the core feature points, namely the direction of the core feature points.
And establishing a core feature vector based on the scale space extreme point, the core feature point position and the core feature point direction.
Given the position, scale and direction information of the core feature points, and in order to give the core feature points scale and rotation invariance, the neighborhood of each core feature point is divided into blocks, the gradient histogram inside each block is computed, feature vectors are thereby established and the corresponding feature clusters obtained, completing the feature extraction step.
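For illustration, OpenCV's SIFT implementation can stand in for the extreme-point detection, feature-point locking and direction assignment described above, since it performs those steps internally; the threshold values shown are OpenCV defaults, not values from this disclosure:

```python
import cv2

def extract_sift_features(image):
    """Extract SIFT keypoints and descriptors from the latest scanned image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    # contrastThreshold rejects unstable low-contrast points; edgeThreshold
    # plays the role of the Hessian-based edge-response rejection.
    sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```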
In some of these embodiments, the step of registering features of the most recently scanned image based on a registration algorithm comprises:
retrieving a salient region in the latest scanned image by a salient region detection algorithm, wherein the latest scanned image comprises a plurality of characteristic points;
setting a detection threshold value of each corresponding characteristic point through each salient region;
The detection threshold Ω of each feature point is calculated from its salient region, where n is the number of pixel points in the salient region and f(y) is the distribution interval of the salient region.
Calculating hamming distances between the corresponding feature points and the nearest matching points and the next nearest matching points respectively based on the detection threshold;
in order to improve the operation efficiency and reduce the matching error, a binary feature matching method based on rapid nearest neighbor search is used, and a random sampling consistency model is utilized to remove feature points of the matching error, so that the matching precision is enhanced.
And acquiring matching characteristic point pairs of the characteristic points according to the Hamming distance, and judging corresponding characteristic points based on the matching characteristic point pairs to register.
The matching feature point pair value is the quotient of the Hamming distance between the feature point corresponding to the detection threshold and its nearest matching point, and the Hamming distance between that feature point and its next-nearest matching point.
When the matching feature point pair value is not more than 0.55, the feature point is judged to be successfully matched and is retained; when it exceeds 0.55, the feature point is removed. Matching of the laser-scanned image then proceeds.
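A sketch of this matching scheme. Binary ORB descriptors are used here so that Hamming distance applies (an assumption: the disclosure pairs Hamming distance with a fast nearest-neighbor binary scheme but does not name the descriptor); the 0.55 ratio follows the judgment rule above, and the RANSAC reprojection threshold is hypothetical:

```python
import cv2
import numpy as np

def match_and_register(img1, img2, ratio=0.55):
    """Hamming-distance ratio-test matching plus RANSAC outlier removal."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Two nearest neighbors per feature: the nearest and next-nearest match.
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Keep a match when d(nearest) / d(next-nearest) <= 0.55.
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if n.distance > 0 and m.distance / n.distance <= ratio]
    if len(good) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Random-sample-consensus model removes wrongly matched feature points.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers
```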
In some of these embodiments, the step of retrieving the salient region in the latest scanned image by a salient region detection algorithm comprises:
removing texture areas of the latest scanned image by a Gaussian low-pass filtering and local entropy texture segmentation method to obtain three filtering gray maps divided by red, green and blue components;
The features are more distinct on the gray maps of the red, green and blue channels, so the background information of the high-frequency texture areas is first segmented away from the color image with Gaussian low-pass filtering and local entropy texture segmentation, and the red, green and blue color components are then extracted to obtain the three filtering gray maps.
Dividing each filtering gray level map into a brightest region, a darkest region and a residual region by a clustering segmentation method, and comparing the brightness of the brightest region and the darkest region with the brightness of the residual region respectively;
selecting a target area based on a comparison result, wherein the target area is one of the brightest area and the darkest area;
The brightest region is selected as the target area when the absolute difference between its brightness and the brightness of the residual region is larger than the absolute difference between the darkest region's brightness and the brightness of the residual region; when it is smaller, the darkest region is selected as the target area.
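A sketch of the cluster segmentation and target-region choice, using a simple one-dimensional k-means (k = 3) over pixel intensities as the clustering method; the disclosure does not specify the clustering algorithm, so this choice and the initialization are assumptions:

```python
import numpy as np

def pick_target_region(gray_map):
    """Split a filtered gray map into darkest/remaining/brightest clusters
    and return a mask of whichever extreme cluster contrasts more strongly
    with the remaining region."""
    vals = gray_map[gray_map > 0].astype(np.float32)
    if vals.size == 0:
        return np.zeros(gray_map.shape, dtype=bool)
    # One-dimensional k-means (k = 3) over intensities, percentile-initialized.
    centers = np.percentile(vals, [10, 50, 90]).astype(np.float32)
    for _ in range(10):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        centers = np.array([vals[labels == k].mean() if np.any(labels == k)
                            else centers[k] for k in range(3)], dtype=np.float32)
    dark_c, rest_c, bright_c = np.sort(centers)
    # Compare each extreme cluster's brightness with the remaining region.
    if abs(bright_c - rest_c) > abs(dark_c - rest_c):
        return gray_map > (bright_c + rest_c) / 2               # brightest region
    return (gray_map > 0) & (gray_map < (dark_c + rest_c) / 2)  # darkest region
```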
And detecting corner points and edge points of the target area, taking part of the corner points and part of the edge points as salient points based on a detection structure, and expanding the salient points to a salient area through mathematical morphology.
Since the initial objects of study were gray images, most salient-feature detectors are luminance-based: a region of the image in which the brightness changes sharply is defined as a salient region, and low-order visual features of the image such as edges, contours and corner points are taken as its salient points.
In this embodiment, because the corner points are numerous, many false corner points are extracted and must be screened out. Specifically, non-maximum suppression from the Harris algorithm performs a preliminary screening, and an adaptive distance threshold then thins the corner points so that the preliminarily screened corner points do not crowd too closely together. The Canny edge detection algorithm detects the edge points; after detection finishes, the remaining corner points and edge points serve as the salient points.
After the salient points are determined, the binary maps of the salient points in each of the three filtering gray maps are acquired, merged and recorded as a total binary map; dilation and filling based on the total binary map close each filtering gray map, the connectable regions across the three filtering gray maps are marked as connected regions, and the connected region with the largest area is extracted as the salient region.
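A sketch of the salient-point detection and morphological expansion, combining Harris corners (with non-maximum suppression), Canny edge points, dilation and closing, and largest-connected-region extraction; all thresholds and kernel sizes here are hypothetical:

```python
import cv2
import numpy as np

def salient_region(gray_map):
    """Harris corners + Canny edges as salient points, dilated into a region."""
    g = gray_map.astype(np.float32)
    harris = cv2.cornerHarris(g, blockSize=2, ksize=3, k=0.04)
    # Non-maximum suppression: keep only local maxima above a threshold.
    dil = cv2.dilate(harris, None)
    corners = (harris == dil) & (harris > 0.01 * harris.max())
    edges = cv2.Canny(gray_map, 50, 150) > 0
    points = (corners | edges).astype(np.uint8) * 255
    # Mathematical morphology: expand salient points and close the region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    blob = cv2.morphologyEx(cv2.dilate(points, kernel), cv2.MORPH_CLOSE, kernel)
    # Keep the connected region with the largest area as the salient region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(blob)
    if n <= 1:
        return np.zeros_like(gray_map, dtype=bool)
    biggest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return labels == biggest
```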
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The second embodiment of the present application further provides a laser scanning image matching system, which is used for implementing the first embodiment and the preferred embodiment, and is not described in detail. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 2 is a diagram of a laser scanned image matching system according to a second embodiment of the present invention, as shown in fig. 2, the system includes:
a first acquisition module 10 for acquiring a corresponding scanned image by laser scanning;
the second obtaining module 20 is configured to divide the scanned image to obtain a plurality of sub-layer images, and obtain a gray mapping interval of each sub-layer image;
the obtaining module 30 is configured to dynamically and uniformly enhance the gray value of the corresponding sub-layer image based on the gray mapping interval and through a transformation function, and after the gray enhancement is completed, re-stitch each sub-layer image to obtain the latest scanned image;
and the matching module 40 is used for extracting the characteristics of the latest scanned image, and registering the characteristics of the latest scanned image based on a registration algorithm so as to complete real-time matching of the scanned image.
In specific implementation, the scanning image is divided into a plurality of sub-layer images, the gray scale of each sub-layer image is enhanced, then the latest scanning image is spliced again to ensure that the gray scale of a high-frequency area and the gray scale of a low-frequency area are basically consistent in the enhancement process of the scanning image so as to improve the subsequent matching effect, then the characteristics of the latest scanning image are extracted, and the images of the latest scanning image are matched based on a registration algorithm to complete real-time matching of the scanning image.
In some of these embodiments, the expression of the transformation function is as follows:
X_j(h) = range(r_j) · D_j(h) · T,  j = 1, 2, …
wherein X_j(h) is the mapped gray value for gray value h in the sub-layer image, D_j(h) is the cumulative probability function of each gray level in the sub-layer image, range(r_j) is the gray-value range of the j-th sub-layer image, and T is a control parameter.
In some of these embodiments, the matching module 40 includes:
and the extraction unit is used for extracting the characteristics of the latest scanned image based on a SIFT algorithm.
In some of these embodiments, the matching module 40 includes:
a retrieval unit configured to retrieve a salient region in the latest scanned image by a salient region detection algorithm, the latest scanned image including a plurality of feature points;
a setting unit, configured to set a detection threshold value of each corresponding feature point through each salient region;
a calculating unit, configured to calculate hamming distances between the corresponding feature points and the nearest matching point and the next nearest matching point, respectively, based on the detection threshold;
and the registration unit is used for acquiring matching characteristic point pairs of the characteristic points according to the Hamming distance, and judging corresponding characteristic points to register based on the matching characteristic point pairs.
In some of these embodiments, the retrieval unit comprises:
the segmentation subunit is used for removing texture areas of the latest scanned image through a Gaussian low-pass filtering and local entropy texture segmentation method to obtain three filtering gray maps divided by red, green and blue components;
a dividing subunit, configured to divide each of the filtered gray maps into a brightest region, a darkest region and a remaining region by using a cluster segmentation method, and compare the brightest region and the darkest region with the luminance of the remaining region, respectively;
a selecting subunit, configured to select a target area based on a comparison result, where the target area is one of the brightest area and the darkest area;
and the detection subunit is used for detecting the corner points and the edge points of the target area, taking part of the corner points and part of the edge points as salient points based on the detection results, and expanding the salient points into a salient area through mathematical morphology.
The laser scanning image matching system provided by the embodiment of the invention has the same implementation principle and technical effects as those of the embodiment of the method, and for the sake of brevity, reference is made to the corresponding content in the embodiment of the method for the part of the embodiment of the device which is not mentioned.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
In addition, the laser scan image matching method according to the third embodiment of the present application described in connection with fig. 1 may be implemented by a computer device. Fig. 3 is a schematic hardware structure of a computer device according to an embodiment of the present application.
The computer device may include a processor 32 and a memory 33 storing computer program instructions.
In particular, the processor 32 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
The memory 33 may include, among other things, mass storage for data or instructions. By way of example and not limitation, the memory 33 may comprise a hard disk drive (HDD), a floppy disk drive, a solid-state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. The memory 33 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the data processing device, where appropriate. In a particular embodiment, the memory 33 is non-volatile memory. In particular embodiments, the memory 33 includes read-only memory (ROM) and random-access memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically rewritable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these. The RAM may be static RAM (SRAM) or dynamic RAM (DRAM), where the DRAM may be fast page mode DRAM (FPMDRAM), extended data out DRAM (EDODRAM), synchronous DRAM (SDRAM), or the like, as appropriate.
Memory 33 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 32.
The processor 32 implements the laser scanned image matching method of any of the above embodiments by reading and executing computer program instructions stored in the memory 33.
In some of these embodiments, the computer device may also include a communication interface 34 and a bus 31. As shown in fig. 3, the processor 32, the memory 33, and the communication interface 34 are connected to each other through the bus 31 and perform communication with each other.
The communication interface 34 is used to enable communication between various modules, devices, units and/or units in embodiments of the application. The communication interface 34 may also enable communication with other components such as: and the external equipment, the image/data acquisition equipment, the database, the external storage, the image/data processing workstation and the like are used for data communication.
Bus 31 includes hardware, software, or both, coupling the components of the computer device to each other. Bus 31 includes, but is not limited to, at least one of: a data bus, an address bus, a control bus, an expansion bus, a local bus. By way of example and not limitation, bus 31 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 31 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
The computer device may execute the laser scanning image matching method in the embodiment of the present application based on the acquired computer program, thereby implementing the laser scanning image matching method described in connection with fig. 1.
In addition, in combination with the laser scanning image matching method in the above embodiments, an embodiment of the application may be implemented by providing a readable storage medium. The readable storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement the laser scanning image matching method of any of the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of those technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present application; their description is relatively specific and detailed, but it is not to be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art could make various modifications and improvements without departing from the concept of the present application, and these fall within its scope of protection. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A laser scan matching method, the method comprising:
acquiring a corresponding scanning image through laser scanning;
dividing the scanning image to obtain a plurality of sub-layer images, and obtaining gray mapping intervals of the sub-layer images;
dynamically balancing and enhancing the gray value of the corresponding sub-layer image through a transformation function based on the gray mapping interval, and re-splicing each sub-layer image after gray enhancement is completed so as to obtain the latest scanning image;
extracting the features of the latest scanned image, and registering the features of the latest scanned image based on a registration algorithm to complete real-time matching of the scanned image.
2. The laser scan matching method according to claim 1, wherein the expression of the transformation function is as follows:
X_j(h) = range(r_j) · D_j(h) · T,  j = 1, 2, …
wherein X_j(h) is the mapped gray value for gray value h in the sub-layer image, D_j(h) is the cumulative probability function of each gray level in the sub-layer image, range(r_j) is the gray-value range of the j-th sub-layer image, and T is a control parameter.
3. The laser scan matching method according to claim 1, wherein said step of extracting features of said latest scanned image comprises:
and extracting the characteristics of the latest scanned image based on a SIFT algorithm.
4. The laser scan matching method according to claim 1, wherein said registering features of said latest scanned image based on a registration algorithm comprises:
retrieving a salient region in the latest scanned image by a salient region detection algorithm, wherein the latest scanned image comprises a plurality of characteristic points;
setting a detection threshold value of each corresponding characteristic point through each salient region;
calculating hamming distances between the corresponding feature points and the nearest matching points and the next nearest matching points respectively based on the detection threshold;
and acquiring matching characteristic point pairs of the characteristic points according to the Hamming distance, and judging corresponding characteristic points based on the matching characteristic point pairs to register.
5. The laser scan matching method according to claim 4, wherein said step of retrieving salient regions in said latest scanned image by salient region detection algorithm comprises:
removing texture areas of the latest scanned image by a Gaussian low-pass filtering and local entropy texture segmentation method to obtain three filtering gray maps divided by red, green and blue components;
dividing each filtering gray level map into a brightest region, a darkest region and a residual region by a clustering segmentation method, and comparing the brightness of the brightest region and the darkest region with the brightness of the residual region respectively;
selecting a target area based on a comparison result, wherein the target area is one of the brightest area and the darkest area;
and detecting corner points and edge points of the target area, taking part of the corner points and part of the edge points as salient points based on the detection results, and expanding the salient points into a salient area through mathematical morphology.
6. A laser scanning matching system, the system comprising:
the first acquisition module is used for acquiring corresponding scanning images through laser scanning;
the second acquisition module is used for dividing the scanning image to obtain a plurality of sub-layer images and acquiring gray mapping intervals of the sub-layer images;
the obtaining module is used for carrying out dynamic balanced enhancement on the gray value of the corresponding sub-layer image based on the gray mapping interval and through a transformation function, and after the gray enhancement is completed, splicing the sub-layer images again to obtain the latest scanning image;
and the matching module is used for extracting the characteristics of the latest scanned image, and registering the characteristics of the latest scanned image based on a registration algorithm so as to complete real-time matching of the scanned image.
7. The laser scanning matching system of claim 6, wherein the transformation function is expressed as follows:
X_j(h) = range(r_j) · D_j(h) · T,  j = 1, 2, …
wherein X_j(h) is the mapped gray value for gray value h in the sub-layer image, D_j(h) is the cumulative probability function of each gray level in the sub-layer image, range(r_j) is the gray-value range of the j-th sub-layer image, and T is a control parameter.
8. The laser scanning matching system of claim 6, wherein the matching module comprises:
and the extraction unit is used for extracting the characteristics of the latest scanned image based on a SIFT algorithm.
9. The laser scanning matching system of claim 6, wherein the matching module comprises:
a retrieval unit configured to retrieve a salient region in the latest scanned image by a salient region detection algorithm, the latest scanned image including a plurality of feature points;
a setting unit, configured to set a detection threshold value of each corresponding feature point through each salient region;
a calculating unit, configured to calculate hamming distances between the corresponding feature points and the nearest matching point and the next nearest matching point, respectively, based on the detection threshold;
and the registration unit is used for acquiring matching characteristic point pairs of the characteristic points according to the Hamming distance, and judging corresponding characteristic points to register based on the matching characteristic point pairs.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the laser scanning matching method according to any of claims 1 to 5 when executing the computer program.
CN202311218484.2A 2023-09-20 2023-09-20 Laser scanning image matching method, system and computer equipment Pending CN117333518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311218484.2A CN117333518A (en) 2023-09-20 2023-09-20 Laser scanning image matching method, system and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311218484.2A CN117333518A (en) 2023-09-20 2023-09-20 Laser scanning image matching method, system and computer equipment

Publications (1)

Publication Number Publication Date
CN117333518A true CN117333518A (en) 2024-01-02

Family

ID=89294387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311218484.2A Pending CN117333518A (en) 2023-09-20 2023-09-20 Laser scanning image matching method, system and computer equipment

Country Status (1)

Country Link
CN (1) CN117333518A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118015004A (en) * 2024-04-10 2024-05-10 宝鸡康盛精工精密制造有限公司 Laser cutting scanning system and method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination