CN118429973A - Panoramic image stitching method, device and equipment based on scanning pen and storage medium - Google Patents
- Publication number: CN118429973A
- Application number: CN202410607736.9A
- Authority: CN (China)
- Prior art keywords: image, spliced, initial, panoramic image, edge gradient
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The application relates to the technical field of image processing and discloses a panoramic image stitching method, device, equipment and storage medium based on a scanning pen. The method comprises: extracting edge gradient information and determining an image matching threshold; performing image fusion processing and adaptive adjustment processing on each image to be stitched to generate an initial panoramic image; and correcting position information through connected-domain analysis to generate a target stitched image. By extracting the edge gradient information of the images, key feature points can be identified more accurately, and the matching threshold derived from that information helps screen out the best-matching image regions, improving stitching accuracy and efficiency. Connected-domain analysis of the initial panoramic image identifies the different regions and objects in the image, and correcting the position information helps repair errors introduced during stitching, improving the efficiency with which the scanning pen stitches images while scanning.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a panoramic image stitching method, device, apparatus and storage medium based on a scanning pen.
Background
With the development of information technology, the demands for digital reading and data arrangement are increasing. As a portable digitization tool, the scanning pen can quickly convert paper documents into electronic documents, greatly improving the efficiency of information entry and processing. It also streamlines the user's learning and working processes and lays a solid technical foundation for the development and application of derivative products such as dictionary pens, translation pens and translation machines.
Although panoramic image stitching techniques have been widely applied in many fields, several technical challenges remain in scanning-pen application scenarios. Because users hold the pen in different ways, at different angles and in different directions, the images captured by the pen's camera are deformed and tilted in varying directions, which complicates image registration and fusion. Existing registration algorithms may fail to handle such non-linear deformation and tilt effectively, producing inaccurate or unstable stitching results. The content collected by a scanning pen is also highly varied, covering characters of different languages, sizes, fonts, contrasts and colors as well as background information; this diversity places higher demands on panoramic image stitching technology. Existing stitching algorithms may not suit these complex scenes and can produce poor results when, for example, the overlap area is too small, the contrast is too low, or the image contains too little text. In practice, a user may need a high-quality stitched image quickly, yet existing panoramic stitching algorithms may take too long to process the image sequence to meet real-time requirements, and their high complexity can consume substantial computing resources and reduce stitching efficiency. How to improve the efficiency with which a scanning pen stitches images during scanning has therefore become a technical problem to be solved.
Disclosure of Invention
The application provides a panoramic image stitching method, device and equipment based on a scanning pen and a storage medium, so as to improve the efficiency of stitching images in the scanning process of the scanning pen.
In a first aspect, the present application provides a panoramic image stitching method based on a scanning pen, the method comprising:
extracting edge gradient information of each image to be spliced, and determining an image matching threshold value based on each edge gradient information;
based on the image matching threshold, performing image fusion processing and self-adaptive adjustment processing on each image to be spliced to generate an initial panoramic image;
And determining the position information of the content in the initial panoramic image by carrying out connected domain analysis on the initial panoramic image, correcting the position information, and generating a target spliced image.
Further, determining position information of content in the initial panoramic image by performing connected domain analysis on the initial panoramic image, correcting the position information, and generating a target spliced image, including:
Taking a preset pixel point of the initial panoramic image as the starting point of a first connected domain, scanning the initial panoramic image pixel by pixel, and marking the scanned pixel points;
Taking unlabeled pixel points in the initial panoramic image as a second connected domain starting point, and adding a label into the second connected domain;
And carrying out merging processing, segmentation processing and/or feature extraction processing on each connected domain based on the attribute of each connected domain to generate a target spliced image.
Further, extracting edge gradient information of each image to be spliced, and determining an image matching threshold based on each edge gradient information, including:
determining a target matching area in each image to be spliced;
extracting the edge gradient information in the target matching region, and determining the number of image features according to the edge gradient information;
The image matching threshold is determined based on the number of image features.
Further, based on the image matching threshold, performing image fusion processing and adaptive adjustment processing on each image to be spliced, and generating an initial panoramic image, including:
determining pixel points, in which the image characteristic values in the images to be spliced are larger than the image matching threshold value, as target pixel points;
and carrying out image fusion on the target pixel points based on a local adaptive fusion algorithm, and carrying out contrast adaptive adjustment and brightness adaptive adjustment on the fused image to generate the initial panoramic image.
Further, extracting edge gradient information of each image to be spliced, and before determining an image matching threshold based on each edge gradient information, the method comprises the following steps:
and preprocessing at least two initial images to generate the corresponding images to be spliced.
Further, preprocessing at least two initial images to generate the corresponding images to be spliced, including:
Converting each initial image into a gray image based on a preset graying formula;
Performing pixel sampling processing on each gray level image to generate a sampled image after sampling processing;
And filtering the sampled images based on a Gaussian blur filtering algorithm to generate the images to be spliced.
Further, the preset graying formula is gray= (305×r+601×g+117×b), where Gray is a Gray value, R is a red channel intensity value, G is a green channel intensity value, and B is a blue channel intensity value.
In a second aspect, the present application also provides a panoramic image stitching apparatus based on a scanning pen, the apparatus comprising:
The image matching threshold determining module is used for extracting edge gradient information of each image to be spliced and determining an image matching threshold based on each edge gradient information;
The initial panoramic image generation module is used for carrying out image fusion processing and self-adaptive adjustment processing on the images to be spliced based on the image matching threshold value to generate an initial panoramic image;
The target spliced image generation module is used for determining the position information of the content in the initial panoramic image through connected domain analysis on the initial panoramic image, correcting the position information and generating a target spliced image.
In a third aspect, the present application also provides a computer device comprising a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and implement the panoramic image stitching method based on the scanning pen when the computer program is executed.
In a fourth aspect, the present application also provides a computer readable storage medium storing a computer program, which when executed by a processor causes the processor to implement a panoramic image stitching method based on a scanning pen as described above.
The application discloses a panoramic image stitching method, device, equipment and storage medium based on a scanning pen. The method comprises: extracting edge gradient information of the images to be stitched and determining an image matching threshold based on that information; performing image fusion processing and adaptive adjustment processing on each image to be stitched based on the threshold to generate an initial panoramic image; and determining the position information of the content in the initial panoramic image through connected-domain analysis, correcting that position information, and generating a target stitched image. By extracting the edge gradient information of the images, key feature points can be identified more accurately, and the matching threshold derived from that information helps screen out the best-matching image regions, improving stitching accuracy and efficiency. Connected-domain analysis of the initial panoramic image identifies the different regions and objects in the image, and correcting the position information helps repair errors introduced during stitching, improving the efficiency with which the scanning pen stitches images while scanning.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a panoramic image stitching method based on a scanning pen according to an embodiment of the present application;
Fig. 2 is a schematic block diagram of a panoramic image stitching apparatus based on a scanning pen according to an embodiment of the present application;
Fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may also be split, combined or partially combined, the order of actual execution may vary depending on the actual situation.
It is to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The embodiment of the application provides a panoramic image stitching method, device, equipment and storage medium based on a scanning pen. The method can be applied to a server. By extracting the edge gradient information of the images, key feature points can be identified more accurately; the matching threshold determined from that information helps screen out the best-matching image regions, improving stitching accuracy and efficiency; connected-domain analysis of the initial panoramic image identifies different regions and objects in the image; and correcting the position information helps repair errors introduced during stitching, improving the image stitching efficiency of the scanning pen during scanning. The server may be an independent server or a server cluster.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flow chart of a panoramic image stitching method based on a scanning pen according to an embodiment of the application. The method can be applied to a server. By extracting the edge gradient information of the images, key feature points can be identified more accurately; the matching threshold determined from that information helps screen out the best-matching image regions, improving stitching accuracy and efficiency; connected-domain analysis of the initial panoramic image identifies different regions and objects in the image; and correcting the position information helps repair errors introduced during stitching, improving the image stitching efficiency of the scanning pen during scanning.
As shown in fig. 1, the panoramic image stitching method based on the scanning pen specifically includes steps S10 to S30.
S10, extracting edge gradient information of each image to be spliced, and determining an image matching threshold value based on each edge gradient information;
In one embodiment, edge detection is a basic operation in image processing, the main purpose of which is to detect edge information of an object, i.e. the contour of the object, in an image. An edge is a place in an image where a sharp change in pixel values occurs, typically a boundary between an object and a background in the image or between different areas inside the object.
1. Feature-quantity adjustment: the matching threshold is adjusted according to the number of features in the image content, improving the stability and accuracy of stitching.
The number and distinctness of features are determined by extracting the edge gradient information of the image, i.e. the magnitude and quantity of that gradient information, and the matching threshold is adjusted dynamically on that basis: the richer the gradient information, the higher the matching threshold is set.
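The dynamic-threshold idea above can be sketched as follows. The gradient operator, the feature criterion and the base/step constants are all illustrative assumptions; the text fixes none of them.

```python
import numpy as np

def dynamic_match_threshold(gray, base=0.5, step=0.05, cap=0.9):
    """Sketch: richer edge-gradient information leads to a stricter
    (higher) matching threshold. Constants are illustrative only."""
    # Central-difference gradients, a lightweight stand-in for Sobel.
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    # Count "feature" pixels whose gradient exceeds the mean magnitude.
    features = int((magnitude > magnitude.mean()).sum())
    # Scale the threshold with feature density, clamped to a sane range.
    density = features / magnitude.size
    return min(cap, base + step * (density * 10))
```

A flat image yields the base threshold, while a strongly edged image raises it toward the cap.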
2. matchTemplate operator: feature matching is performed using OpenCV's matchTemplate function, and the best match location is determined with minMaxLoc.
matchTemplate is a function in the OpenCV library used for image template matching. Template matching is an image recognition technique that compares one image (called a template) with another image (called a scene) to determine the location of the template in the scene. The matchTemplate function returns a match result matrix, where the value of each element represents how well the template matches the scene at the corresponding position.
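For self-containedness, the following NumPy sketch approximates what cv2.matchTemplate with the TM_SQDIFF mode and cv2.minMaxLoc compute; in practice the OpenCV calls themselves would be used, and the brute-force loops here are for illustration only.

```python
import numpy as np

def match_template_sqdiff(scene, template):
    """NumPy sketch of cv2.matchTemplate(..., cv2.TM_SQDIFF): slide the
    template over the scene and record the sum of squared differences
    at every position; smaller values mean a better match."""
    sh, sw = scene.shape
    th, tw = template.shape
    result = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            diff = scene[y:y + th, x:x + tw].astype(float) - template
            result[y, x] = (diff ** 2).sum()
    return result

def min_loc(result):
    """Counterpart of cv2.minMaxLoc for the TM_SQDIFF case: the (x, y)
    position of the minimum is the best match."""
    y, x = np.unravel_index(np.argmin(result), result.shape)
    return (x, y)
```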
Step S20, performing image fusion processing and self-adaptive adjustment processing on the images to be spliced based on the image matching threshold value to generate an initial panoramic image;
In one embodiment, the panoramic image is synthesized using techniques such as weighted averaging, local adaptive fusion, and the like.
Locally adaptive fusion is an image processing technique that is used to fuse multiple images into one image, typically to improve image quality. The technology is particularly suitable for the fields of image enhancement, image fusion, image compression, image denoising and the like. The core idea of local adaptive fusion is to consider the local characteristics of the image, using different fusion strategies for each local region.
A key advantage of local adaptive fusion is that it enables different fusion strategies to be employed according to different regions of the image content, thereby better preserving important features and details of the image. For example, in performing multispectral and panchromatic image fusion, more texture information may be retained in a high-resolution panchromatic image while more spectral information is retained in the multispectral image.
The performance of the locally adaptive fusion algorithm depends largely on the segmentation algorithm, feature extraction method and fusion rules chosen. Designing an effective local adaptive fusion algorithm requires a deep understanding of the basic principles and skills of image processing.
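A minimal sketch of tile-wise local adaptive fusion follows. The variance-based weighting rule and the tile size are assumptions for illustration; the text does not fix a concrete segmentation method or fusion rule.

```python
import numpy as np

def fuse_local_adaptive(a, b, tile=8):
    """Fuse two aligned overlap regions tile by tile, weighting each
    source by its local variance (an illustrative contrast proxy)."""
    out = np.empty_like(a, dtype=float)
    h, w = a.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ta = a[y:y + tile, x:x + tile].astype(float)
            tb = b[y:y + tile, x:x + tile].astype(float)
            va, vb = ta.var(), tb.var()
            # Higher-variance (more textured) tile gets a larger weight.
            wa = 0.5 if va + vb == 0 else va / (va + vb)
            out[y:y + tile, x:x + tile] = wa * ta + (1 - wa) * tb
    return out
```

A tile where one source is flat and the other textured is taken almost entirely from the textured source, which matches the goal of preserving detail region by region.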
The self-adaptive adjustment can be performed according to the contrast and brightness of the image, so that the splicing effect of the image with low contrast is enhanced.
And step S30, determining the position information of the content in the initial panoramic image by carrying out connected domain analysis on the initial panoramic image, correcting the position information, and generating a target spliced image.
In one embodiment, connected domain analysis is a fundamental technique in image processing and computer vision for identifying and marking connected regions in images. The connected region is a set of adjacent pixel points that have the same gray value or color and are topologically connected. Connected domain analysis is very useful in image segmentation, object recognition, image compression, and many other applications.
The embodiment discloses a panoramic image stitching method, a device, equipment and a storage medium based on a scanning pen, wherein the panoramic image stitching method based on the scanning pen comprises the steps of extracting edge gradient information of images to be stitched and determining an image matching threshold value based on the edge gradient information; based on the image matching threshold, performing image fusion processing and self-adaptive adjustment processing on each image to be spliced to generate an initial panoramic image; and determining the position information of the content in the initial panoramic image by carrying out connected domain analysis on the initial panoramic image, correcting the position information, and generating a target spliced image. By the method, the key feature points in the image can be more accurately identified by extracting the edge gradient information of the image, the matching threshold value determined by the edge gradient information is helpful for screening out the most matched image areas, so that the accuracy and efficiency of splicing are improved, the connected domain analysis is performed on the initial panoramic image, different areas and objects in the image can be identified, the correction position information is helpful for correcting errors generated in the splicing process, and the efficiency of splicing the image in the scanning process of the scanning pen is improved.
Based on the embodiment shown in fig. 1, in this embodiment, step S30 includes:
Taking a preset pixel point of the initial panoramic image as the starting point of a first connected domain, scanning the initial panoramic image pixel by pixel, and marking the scanned pixel points;
Taking unlabeled pixel points in the initial panoramic image as a second connected domain starting point, and adding a label into the second connected domain;
And carrying out merging processing, segmentation processing and/or feature extraction processing on each connected domain based on the attribute of each connected domain to generate a target spliced image.
In one embodiment, the basic steps of connected domain analysis are:
Binarization: in general, connected domain analysis starts from binary images, where there are only two possible pixel values: background and foreground. If the input image is color or gray, it first needs to be converted into a binary image;
Scanning an image: scanning the image pixel by pixel, starting from the upper left corner of the image;
Identifying a connected domain: when a pixel belonging to the foreground is scanned (i.e., its value is 1 or other non-zero value), it indicates that a starting point of a connected domain is found;
Marking a connected domain: starting from the current pixel, the entire connected domain is identified by tracking its connected pixels (i.e., adjacent pixels having the same value). This process typically uses depth-first search (DFS) or breadth-first search (BFS) algorithms;
And (3) distributing marks: allocating a unique mark or index for each communication domain;
filling and marking: replacing all pixels in the connected domain with their corresponding labels;
Repeating: scanning of the image continues until all foreground pixels are marked.
Based on the embodiment shown in fig. 1, in this embodiment, step S10 includes:
determining a target matching area in each image to be spliced;
extracting the edge gradient information in the target matching region, and determining the number of image features according to the edge gradient information;
The image matching threshold is determined based on the number of image features.
Based on the embodiment shown in fig. 1, in this embodiment, step S20 includes:
determining pixel points, in which the image characteristic values in the images to be spliced are larger than the image matching threshold value, as target pixel points;
and carrying out image fusion on the target pixel points based on a local adaptive fusion algorithm, and carrying out contrast adaptive adjustment and brightness adaptive adjustment on the fused image to generate the initial panoramic image.
In one embodiment, contrast adaptive adjustment is used to improve the contrast of the image, making it appear sharper; it is suitable for cases where some areas of the image have low contrast and their detail needs enhancing. The basic idea of contrast adaptive adjustment is to adjust contrast according to the local characteristics of the image, instead of simply applying a global contrast enhancement.
The following are some key steps for contrast adaptation:
image segmentation: the image is segmented into small blocks or regions. This may be achieved by various image segmentation algorithms, such as threshold segmentation, region growing, edge detection, etc.;
calculating a histogram: a gray level histogram is calculated for each of the divided regions. The histogram shows the number of pixels of different gray values in the region;
contrast limitation: from the histogram, a contrast limit for the region is determined. This typically involves finding a valley in the histogram, i.e. a point where the number of pixels drops significantly;
histogram equalization: histogram equalization is applied to each region to improve its contrast. Histogram equalization is a statistical method that redistributes the gray values of the pixels so that the histogram is more uniform;
adjusting contrast: the contrast of each region is adjusted according to the contrast limit. This may be achieved by linear transformation or non-linear transformation;
Combining the images: and combining the adjusted areas into a final output image.
The self-adaptive brightness adjustment refers to adjusting the brightness of an image according to the brightness characteristics of the content of the image so as to improve the visual effect of the image or adapt to different display environments. This adjustment may be global or local, depending on the application scenario and the target.
The following are some common methods of luminance adaptive adjustment:
1. global brightness adjustment
Global brightness adjustment is to apply the same brightness transformation to the whole image. The common global brightness adjustment method comprises the following steps:
Linear transformation: the overall brightness of the image is adjusted by increasing or decreasing the brightness values of all pixels. For example, if it is desired that the image be entirely bright, a positive constant may be added to the luminance value of each pixel;
Contrast Limited Adaptive Histogram Equalization (CLAHE): although CLAHE is mainly used for contrast enhancement, it can also be used to adjust brightness; it limits contrast amplification during histogram equalization to avoid over-stretching the histogram.
2. Local brightness adjustment
Local brightness adjustment considers the brightness characteristics of different regions in the image, applying different adjustment strategies to each region. Common local brightness adjustment methods include:
Local histogram equalization: respectively carrying out histogram equalization on each small area (or window) of the image, and then combining the results to improve the contrast and brightness of the whole image;
adaptive threshold: the threshold is dynamically adjusted according to the local brightness characteristics of the image to better distinguish between foreground and background during binarization.
Based on the embodiment shown in fig. 1, in this embodiment, step S10 includes:
and preprocessing at least two initial images to generate the corresponding images to be spliced.
Further, preprocessing at least two initial images to generate the corresponding images to be spliced, including:
Converting each initial image into a gray image based on a preset graying formula;
Performing pixel sampling processing on each gray level image to generate a sampled image after sampling processing;
And filtering the sampled images based on a Gaussian blur filtering algorithm to generate the images to be spliced.
In a specific embodiment, the image preprocessing process includes the following steps:
1. graying: the color image is converted into a gray image, and the processing complexity is reduced.
Graying formula: gray= (305×r+601×g+117×b);
2. sampling: the image is sampled, the data volume is reduced, and the processing speed is improved.
The sampling strategy is formulated according to the actual size of the image and the specific characteristic information of characters in the image. For example, the sampling rate used is 3-point decimating, i.e., decimating one of every three pixels for subsequent processing. The sampling method can effectively reduce the consumption of computing resources and improve the processing efficiency while guaranteeing the success rate of splicing.
3. Local region matching: matching is performed on designated local areas of the images to improve real-time performance and efficiency.
Because brightness is uneven and distortion is obvious on both sides of the image, and the upper and lower parts are blank areas, a large number of experiments show that the best effect is obtained when the local region spans 1/5 to 4/5 of the image in the x direction and 1/4 to 3/4 in the y direction.
4. Gaussian blur: and a Gaussian blur filter is applied to remove noise interference, so that the calculation complexity is reduced as much as possible under the condition of ensuring the effect.
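The first three preprocessing steps above can be sketched as follows. The divisor 1023 is an assumption: the weights 305, 601 and 117 sum to 1023, which suggests fixed-point coefficients approximating the common 0.299/0.587/0.114 luminance weights. The crop bounds are taken directly from the 1/5-4/5 and 1/4-3/4 local region described above.

```python
# Minimal sketch of graying, 3:1 decimation, and local-region cropping.
# The //1023 normalization is an assumption (305 + 601 + 117 = 1023).

def to_gray(rgb):
    """rgb: rows of (R, G, B) tuples -> rows of 0-255 gray values."""
    return [[(305 * r + 601 * g + 117 * b) // 1023 for r, g, b in row]
            for row in rgb]

def decimate(img, step=3):
    """Keep one of every `step` pixels in both directions."""
    return [row[::step] for row in img[::step]]

def crop_match_region(img):
    """Keep x in [1/5, 4/5) and y in [1/4, 3/4) of the frame."""
    h, w = len(img), len(img[0])
    return [row[w // 5 : 4 * w // 5] for row in img[h // 4 : 3 * h // 4]]
```

With the assumed 1023 divisor, pure white (255, 255, 255) maps exactly to gray value 255 and pure black to 0, matching the behavior expected of a luminance formula.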
Further, the preset graying formula is Gray = (305×R + 601×G + 117×B), where Gray is the gray value, R is the red channel intensity value, G is the green channel intensity value, and B is the blue channel intensity value.
Specifically, graying is a process of converting a color image into a gray image. The gray image has only one color channel, and the value of each pixel is expressed as a gray value. A color image is typically composed of three color channels: red (R), green (G), blue (B). The basic idea of graying is to combine the values of the three color channels in some way to obtain a gray value.
Referring to fig. 2, fig. 2 is a schematic block diagram of a panorama image stitching device based on a scanning pen for performing the panorama image stitching method based on the scanning pen according to an embodiment of the present application. The panoramic image stitching device based on the scanning pen can be configured on a server.
As shown in fig. 2, the panorama image stitching device 400 based on a scanning pen includes:
An image matching threshold determining module 410, configured to extract edge gradient information of each image to be stitched, and determine an image matching threshold based on each edge gradient information;
the initial panoramic image generation module 420 is configured to perform image fusion processing and adaptive adjustment processing on each image to be stitched based on the image matching threshold value, so as to generate an initial panoramic image;
The target stitched image generating module 430 is configured to determine location information of content in the initial panoramic image by performing connected domain analysis on the initial panoramic image, and correct the location information to generate a target stitched image.
Further, the target stitched image generating module 430 includes:
The pixel point scanning unit is used for taking a preset pixel point of the initial panoramic image as the starting point of a first connected domain, scanning the initial panoramic image pixel by pixel, and marking the scanned pixel points;
A label adding unit, configured to take an unlabeled pixel point in the initial panoramic image as a starting point of a second connected domain, and add a label to the second connected domain;
And the target spliced image generation unit is used for carrying out merging processing, segmentation processing and/or feature extraction processing on each connected domain based on the attribute of each connected domain to generate a target spliced image.
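The pixel-by-pixel scan and labeling performed by the units above can be sketched as a standard connected-component labeling pass. This is a hedged illustration using 4-connectivity and a flood fill; the patent's choice of preset starting point and its merging/segmentation rules are not reproduced here.

```python
# Sketch of connected-domain labeling: scan the binary panorama pixel by
# pixel; each unlabeled foreground pixel starts a new connected domain,
# which is flood-filled and assigned the next label.
from collections import deque

def label_connected_domains(binary):
    """binary: rows of 0/1. Returns (labels, count); labels start at 1."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                next_label += 1              # unlabeled pixel: new domain
                labels[y][x] = next_label
                queue = deque([(y, x)])
                while queue:                 # 4-connected flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```

Once each domain carries a label, per-domain attributes (area, bounding box) can drive the merging, segmentation, or feature extraction mentioned above.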
Further, the image matching threshold determining module 410 includes:
the target matching area determining unit is used for determining a target matching area in each image to be spliced;
The image feature quantity determining unit is used for extracting the edge gradient information in the target matching area and determining the quantity of image features according to the edge gradient information;
An image matching threshold determining unit configured to determine the image matching threshold based on the number of image features.
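The two units above can be sketched as follows. The central-difference gradient, the base feature level, and the rule mapping the feature count to a matching threshold are all assumptions for illustration; the patent does not give concrete formulas.

```python
# Sketch: estimate edge gradient magnitude with central differences, count
# pixels above a base level as "image features", and scale the matching
# threshold by how feature-rich the region is (assumed scaling rule).

def edge_gradient_magnitude(gray):
    """gray: rows of 0-255 values. Returns L1 gradient magnitudes."""
    h, w = len(gray), len(gray[0])
    mag = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag[y][x] = abs(gx) + abs(gy)
    return mag

def matching_threshold(gray, base_level=30, lo=0.2, hi=0.8):
    """More edge features -> stricter (higher) matching threshold."""
    mag = edge_gradient_magnitude(gray)
    features = sum(v > base_level for row in mag for v in row)
    ratio = features / (len(gray) * len(gray[0]))
    return lo + (hi - lo) * min(1.0, ratio * 10)
```

A flat region yields the permissive lower bound, while a region crossed by a strong edge yields a higher threshold, which matches the intent of adapting the match criterion to feature density.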
Further, the initial panoramic image generation module 420 includes:
the target pixel point determining unit is used for determining the pixel points, in which the image characteristic values in the images to be spliced are larger than the image matching threshold value, as target pixel points;
And the initial panoramic image generating unit is used for performing image fusion on the target pixel points based on a local adaptive fusion algorithm, and performing contrast adaptive adjustment and brightness adaptive adjustment on the fused image to generate the initial panoramic image.
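A minimal sketch, under assumptions, of the fusion and adjustment described above: overlapping pixels are blended with a linear ramp across the overlap (one common local weighting scheme), and contrast and brightness are then adjusted with a simple linear mapping. The patent's actual local adaptive fusion algorithm may differ.

```python
# Sketch of overlap fusion with a linear weight ramp, followed by a linear
# contrast/brightness adjustment clamped to the 0-255 range.

def fuse_strips(a, b):
    """a, b: equal-size overlap strips (rows of gray values).
    Weight shifts linearly from a (left edge) to b (right edge)."""
    h, w = len(a), len(a[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wa = 1.0 - x / (w - 1) if w > 1 else 0.5
            out[y][x] = round(wa * a[y][x] + (1 - wa) * b[y][x])
    return out

def adjust(img, contrast=1.0, brightness=0):
    """Apply clamp(contrast * p + brightness) to every pixel."""
    return [[min(255, max(0, round(contrast * p + brightness)))
             for p in row] for row in img]
```

The ramp makes the seam invisible in the blended strip: the fused value equals the left image at the left edge of the overlap and the right image at the right edge.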
Further, the panoramic image stitching apparatus 400 based on the scanning pen includes:
the image generation module to be spliced is used for preprocessing at least two initial images and generating corresponding images to be spliced.
Further, the image generation module to be spliced includes:
A gray image conversion unit for converting each of the initial images into a gray image based on a preset graying formula;
a sampling image generating unit for performing pixel sampling processing on each gray level image to generate a sampling image after sampling processing;
And the image-to-be-spliced generating unit is used for filtering each sampled image based on a Gaussian blur filtering algorithm to generate the images to be spliced.
It should be noted that, for convenience and brevity of description, the specific working process of the apparatus and each module described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The apparatus described above may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 3.
Referring to fig. 3, fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a server.
With reference to FIG. 3, the computer device includes a processor, memory, and a network interface connected by a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor to perform any of the scanning-pen-based panoramic image stitching methods described herein.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for the execution of the computer program in the non-volatile storage medium; when executed by the processor, the computer program causes the processor to perform any of the scanning-pen-based panoramic image stitching methods described herein.
The network interface is used for network communication, such as transmitting assigned tasks. Those skilled in the art will appreciate that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
Wherein in one embodiment the processor is configured to run a computer program stored in the memory to implement the steps of:
extracting edge gradient information of each image to be spliced, and determining an image matching threshold value based on each edge gradient information;
based on the image matching threshold, performing image fusion processing and self-adaptive adjustment processing on each image to be spliced to generate an initial panoramic image;
And determining the position information of the content in the initial panoramic image by carrying out connected domain analysis on the initial panoramic image, correcting the position information, and generating a target spliced image.
In one embodiment, when determining the position information of the content in the initial panoramic image by performing connected domain analysis on the initial panoramic image and correcting the position information to generate a target stitched image, the computer program is used to implement:
Taking a preset pixel point of the initial panoramic image as the starting point of a first connected domain, scanning the initial panoramic image pixel by pixel, and marking the scanned pixel points;
Taking an unlabeled pixel point in the initial panoramic image as the starting point of a second connected domain, and adding a label to the second connected domain;
And carrying out merging processing, segmentation processing and/or feature extraction processing on each connected domain based on the attribute of each connected domain to generate a target spliced image.
In one embodiment, when extracting the edge gradient information of each image to be spliced and determining an image matching threshold based on each piece of edge gradient information, the computer program is used to implement:
determining a target matching area in each image to be spliced;
extracting the edge gradient information in the target matching region, and determining the number of image features according to the edge gradient information;
The image matching threshold is determined based on the number of image features.
In one embodiment, when performing image fusion processing and adaptive adjustment processing on each image to be stitched based on the image matching threshold to generate an initial panoramic image, the computer program is used to implement:
determining pixel points, in which the image characteristic values in the images to be spliced are larger than the image matching threshold value, as target pixel points;
and performing image fusion on the target pixel points based on a local adaptive fusion algorithm, and performing contrast adaptive adjustment and brightness adaptive adjustment on the fused image to generate the initial panoramic image.
In one embodiment, before extracting the edge gradient information of each image to be spliced and determining the image matching threshold based on each piece of edge gradient information, the computer program is used to implement:
and preprocessing at least two initial images to generate the corresponding images to be spliced.
In one embodiment, when preprocessing at least two initial images to generate the corresponding images to be spliced, the computer program is used to implement:
Converting each initial image into a gray image based on a preset graying formula;
Performing pixel sampling processing on each gray level image to generate a sampled image after sampling processing;
And filtering the sampled images based on a Gaussian blur filtering algorithm to generate the images to be spliced.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, the computer program comprises program instructions, and the processor executes the program instructions to realize any panoramic image stitching method based on the scanning pen.
The computer readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiments, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash memory card (Flash Card) provided on the computer device.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.
Claims (10)
1. A panoramic image stitching method based on a scanning pen is characterized by comprising the following steps:
extracting edge gradient information of each image to be spliced, and determining an image matching threshold value based on each edge gradient information;
based on the image matching threshold, performing image fusion processing and self-adaptive adjustment processing on each image to be spliced to generate an initial panoramic image;
And determining the position information of the content in the initial panoramic image by carrying out connected domain analysis on the initial panoramic image, correcting the position information, and generating a target spliced image.
2. The method for stitching panoramic images based on a scanning pen according to claim 1, wherein the initial panoramic image includes at least one connected domain, the determining location information of content in the initial panoramic image by performing connected domain analysis on the initial panoramic image, and correcting the location information, and generating a target stitched image includes:
Taking a preset pixel point of the initial panoramic image as the starting point of a first connected domain, scanning the initial panoramic image pixel by pixel, and marking the scanned pixel points;
Taking an unlabeled pixel point in the initial panoramic image as the starting point of a second connected domain, and adding a label to the second connected domain;
And carrying out merging processing, segmentation processing and/or feature extraction processing on each connected domain based on the attribute of each connected domain to generate a target spliced image.
3. The method for stitching panoramic images based on a scanning pen according to claim 1, wherein the extracting edge gradient information of each image to be stitched and determining an image matching threshold based on each edge gradient information comprises:
determining a target matching area in each image to be spliced;
extracting the edge gradient information in the target matching region, and determining the number of image features according to the edge gradient information;
The image matching threshold is determined based on the number of image features.
4. The method for stitching panoramic images based on a scanning pen according to claim 1, wherein the performing image fusion processing and adaptive adjustment processing on each image to be stitched based on the image matching threshold value to generate an initial panoramic image includes:
determining pixel points, in which the image characteristic values in the images to be spliced are larger than the image matching threshold value, as target pixel points;
and performing image fusion on the target pixel points based on a local adaptive fusion algorithm, and performing contrast adaptive adjustment and brightness adaptive adjustment on the fused image to generate the initial panoramic image.
5. The method for stitching panoramic images based on a scanning pen according to claim 1, wherein before extracting edge gradient information of each image to be stitched and determining an image matching threshold based on each edge gradient information, the method comprises:
and preprocessing at least two initial images to generate the corresponding images to be spliced.
6. The method for stitching panoramic images based on a scanning pen according to claim 5, wherein preprocessing at least two initial images to generate the corresponding images to be stitched comprises:
Converting each initial image into a gray image based on a preset graying formula;
Performing pixel sampling processing on each gray level image to generate a sampled image after sampling processing;
And filtering the sampled images based on a Gaussian blur filtering algorithm to generate the images to be spliced.
7. The method of claim 6, wherein the preset graying formula is Gray = (305×R + 601×G + 117×B), where Gray is the gray value, R is the red channel intensity value, G is the green channel intensity value, and B is the blue channel intensity value.
8. Panoramic image stitching device based on scanning pen, characterized by comprising:
The image matching threshold determining module is used for extracting edge gradient information of each image to be spliced and determining an image matching threshold based on each edge gradient information;
The initial panoramic image generation module is used for carrying out image fusion processing and self-adaptive adjustment processing on the images to be spliced based on the image matching threshold value to generate an initial panoramic image;
The target spliced image generation module is used for determining the position information of the content in the initial panoramic image through connected domain analysis on the initial panoramic image, correcting the position information and generating a target spliced image.
9. A computer device, the computer device comprising a memory and a processor;
the memory is used for storing a computer program;
the processor for executing the computer program and for implementing the scanning pen based panoramic image stitching method according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the scanning pen based panoramic image stitching method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410607736.9A CN118429973A (en) | 2024-05-16 | 2024-05-16 | Panoramic image stitching method, device and equipment based on scanning pen and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410607736.9A CN118429973A (en) | 2024-05-16 | 2024-05-16 | Panoramic image stitching method, device and equipment based on scanning pen and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118429973A true CN118429973A (en) | 2024-08-02 |
Family
ID=92331460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410607736.9A Pending CN118429973A (en) | 2024-05-16 | 2024-05-16 | Panoramic image stitching method, device and equipment based on scanning pen and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118429973A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104751142B (en) | A kind of natural scene Method for text detection based on stroke feature | |
US8457403B2 (en) | Method of detecting and correcting digital images of books in the book spine area | |
CN109784342B (en) | OCR (optical character recognition) method and terminal based on deep learning model | |
CN112183038A (en) | Form identification and typing method, computer equipment and computer readable storage medium | |
CN110020692B (en) | Handwriting separation and positioning method based on print template | |
CN111275034B (en) | Method, device, equipment and storage medium for extracting text region from image | |
CN112749696B (en) | Text detection method and device | |
CN114283156B (en) | Method and device for removing document image color and handwriting | |
CN113609984A (en) | Pointer instrument reading identification method and device and electronic equipment | |
CN111915635A (en) | Test question analysis information generation method and system supporting self-examination paper marking | |
CN109741273A (en) | A kind of mobile phone photograph low-quality images automatically process and methods of marking | |
CN112508024A (en) | Intelligent identification method for embossed seal font of electrical nameplate of transformer | |
CN115578741A (en) | Mask R-cnn algorithm and type segmentation based scanned file layout analysis method | |
RU2633182C1 (en) | Determination of text line orientation | |
CN108877030B (en) | Image processing method, device, terminal and computer readable storage medium | |
CN118275449A (en) | Copper strip surface defect detection method, device and equipment | |
CN111445402B (en) | Image denoising method and device | |
CN106056575B (en) | A kind of image matching method based on like physical property proposed algorithm | |
CN108133205B (en) | Method and device for copying text content in image | |
CN114511862B (en) | Form identification method and device and electronic equipment | |
CN116030472A (en) | Text coordinate determining method and device | |
Bhaskar et al. | Implementing optical character recognition on the android operating system for business cards | |
CN118429973A (en) | Panoramic image stitching method, device and equipment based on scanning pen and storage medium | |
CN113705571A (en) | Method and device for removing red seal based on RGB threshold, readable medium and electronic equipment | |
CN111191580B (en) | Synthetic rendering method, apparatus, electronic device and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||