CN112511767B - Video splicing method and device, and storage medium - Google Patents

Info

Publication number
CN112511767B
CN112511767B (application CN202011195981.1A)
Authority
CN
China
Prior art keywords
images
image
edge information
edge
correlation coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011195981.1A
Other languages
Chinese (zh)
Other versions
CN112511767A (en)
Inventor
李朋 (Li Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Scientific Research Institute Co Ltd filed Critical Shandong Inspur Scientific Research Institute Co Ltd
Priority to CN202011195981.1A priority Critical patent/CN112511767B/en
Publication of CN112511767A publication Critical patent/CN112511767A/en
Application granted granted Critical
Publication of CN112511767B publication Critical patent/CN112511767B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a video splicing method, device, and storage medium. Images to be spliced are acquired from a plurality of preset image acquisition devices. Based on a preset edge extraction algorithm, edge extraction is performed on each image to be spliced to obtain a corresponding edge information image, where each edge information image comprises a plurality of extracted edge contours. The edge information images are compared based on the edge contours to determine their overlapping areas. Image splicing is then performed according to the overlapping area of each edge information image and the images to be spliced, so as to obtain the corresponding panoramic image.

Description

Video splicing method and device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video stitching method and apparatus, and a storage medium.
Background
With the development of technology and of video display equipment, people are no longer satisfied with the visual experience of ordinary video; panoramic video has therefore come into people's view.
In the prior art, two approaches are generally adopted to obtain a panoramic video. One is to capture the video image information of the whole scene with a professional camera, such as a wide-angle camera; the other is to acquire videos from different viewing angles with ordinary cameras and obtain the panoramic image through a splicing algorithm. Professional wide-angle cameras are expensive, so their applicable range is small. Compared with the first approach, obtaining the panoramic image by video splicing is more accessible and has a wider range of application.
However, in existing video splicing, on the premise of guaranteeing splicing precision, the amount of calculation in the splicing process is large, excessive computing resources are occupied, and the video splicing efficiency is low.
Therefore, how to provide a video splicing technique that saves computing resources and improves video splicing efficiency has become a crucial technical problem.
Disclosure of Invention
In order to solve the above problem, embodiments of the present specification provide a video splicing method and apparatus, and a storage medium.
The embodiment of the specification adopts the following technical scheme:
An embodiment of the application provides a video splicing method, which comprises the following steps: acquiring the images to be spliced collected by a plurality of preset image acquisition devices; based on a preset edge extraction algorithm, performing edge extraction on each image to be spliced to obtain a corresponding edge information image, where each edge information image comprises a plurality of extracted edge contours; comparing the edge information images based on the edge contours to determine the overlapping areas of the edge information images; and performing image splicing according to the overlapping area of each edge information image and the images to be spliced, so as to obtain the corresponding panoramic image.
According to the embodiment of the application, the images to be spliced are collected by a plurality of image acquisition devices, which lowers the hardware requirements on the acquisition devices and can reduce the cost of video splicing. Processing the images by edge extraction avoids interference from blur and similar defects in the images when determining the overlapping areas of the edge information images, improving both the efficiency and the precision of video splicing.
In one possible implementation, location information of each image capture device is obtained. And according to the position information, determining adjacent edge information images from the edge information image set. Wherein, the edge information image set is composed of a plurality of edge information images. And comparing the adjacent edge information images based on the edge contour, and determining the overlapping area of the adjacent edge information images.
In one possible implementation, perspective information of each image acquisition device is acquired. And determining the to-be-determined overlapping areas corresponding to the adjacent edge information images respectively according to the position information and the visual angle information. And comparing the regions to be overlapped based on the edge contour, and determining the overlapping regions of the adjacent edge information images.
In a possible implementation manner, binarization processing is performed on adjacent edge information images to obtain corresponding images to be compared. And generating a correlation coefficient set between undetermined overlapping regions in the images to be compared according to a preset rule. The correlation coefficient set comprises at least one correlation coefficient, and the correlation coefficient is used for representing the correlation between the undetermined overlapping regions. And determining an overlapping area between the images to be compared according to the correlation coefficient set.
In one possible implementation manner, based on a preset rule, the comparison matrices respectively corresponding to the regions to be overlapped are determined. And calculating the correlation coefficient between the comparison matrixes according to the comparison matrixes. And reducing the number of columns of the contrast matrix according to the first preset direction and the corresponding preset threshold, and calculating the correlation coefficient between the contrast matrices after the number of columns is reduced until the number of columns of the contrast matrix is zero. And forming a correlation coefficient set according to the correlation coefficients among the comparison matrixes and the correlation coefficients among the comparison matrixes with the reduced column numbers.
In a possible implementation manner, based on the image to be compared, the number of columns of the comparison matrix is increased according to the second preset direction and the corresponding preset threshold, and the correlation coefficient between the comparison matrices with the increased number of columns is calculated. And calculating the difference value of the correlation coefficient between the contrast matrixes after the number of columns is increased and the correlation coefficient between the contrast matrixes. And determining the overlapping area of the images to be spliced according to the difference value until the difference value meets the preset condition.
In one possible implementation, a panoramic image display request is received from a terminal device. The corresponding images to be spliced are acquired according to the panoramic image display request; the images to be spliced are stored in advance in a double data rate synchronous dynamic random access memory (DDR). Image splicing is performed based on the overlapping area of each edge information image and the images to be spliced to obtain the panoramic image, which is sent to the terminal device.
In a possible implementation manner, the images to be spliced acquired by the image acquisition devices are stored in a preset first-in first-out (FIFO) memory, so that the FIFO memory transfers the images to be spliced into the DDR.
A video stitching device, comprising: the system includes at least one processor, and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform functions of: image information from different cameras is acquired. The image information is images collected by different cameras on the same horizontal plane and in different directions. And generating image comparison information based on the image information. And determining the overlapped part in the image information according to the image comparison information. Wherein the overlapping portion is located at an image edge position. Based on the overlapping portions in the image information, the image information from different cameras is stitched.
A non-transitory computer storage medium for video stitching, storing computer-executable instructions configured to: acquiring preset images to be spliced acquired by a plurality of image acquisition devices; based on a preset edge extraction algorithm, performing edge extraction on each image to be spliced to obtain a corresponding edge information image; wherein, the edge information image comprises a plurality of extracted edge outlines; comparing the edge information images based on the edge contour to determine an overlapping area of the edge information images; and carrying out image splicing according to the overlapping area of each edge information image and the image to be spliced so as to obtain a corresponding panoramic image.
The embodiment of the specification adopts at least one of the technical schemes above, which can achieve the following beneficial effects: the images to be spliced are acquired by the image acquisition devices and processed by edge extraction, which reduces the amount of calculation needed to determine the overlapping areas of the images to be spliced and improves video splicing efficiency. Meanwhile, by processing the video in hardware, the embodiment of the application reduces the cost of panoramic video splicing and improves the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a video stitching method according to an embodiment of the present application;
fig. 2 is another flowchart of a video stitching method according to an embodiment of the present application;
fig. 3 is a flowchart of another video stitching method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a video stitching method according to an embodiment of the present application;
fig. 5 is a flowchart of a video stitching method according to an embodiment of the present application;
fig. 6 is a flowchart of a video stitching method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a video stitching method according to an embodiment of the present application;
fig. 8 is a schematic diagram of a video stitching method according to an embodiment of the present application;
FIG. 9 is a flowchart of a video stitching method according to an embodiment of the present application;
fig. 10 is an application scene diagram of the video stitching method according to the embodiment of the present application;
fig. 11 is a schematic structural diagram of a video splicing apparatus corresponding to fig. 1 for carrying the video splicing method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person skilled in the art without making any inventive step based on the embodiments in the description belong to the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
An embodiment of the present application provides a video stitching method, as shown in fig. 1, the method may include steps S101 to S103:
s101, acquiring images to be spliced, which are acquired by a plurality of preset image acquisition devices.
An image acquisition device captures a certain angle of view, called its viewing angle; for example, the viewing angle of the image acquisition device may be at least 90 degrees, meaning an image covering at least 90 degrees in the horizontal direction can be captured. Because a single image acquisition device covers only one viewing angle, the embodiment of the application uses a plurality of image acquisition devices to capture images of several viewing angles at the same moment and splices those images together. For example, film production may require a 360-degree panoramic image; when each image acquisition device has a 90-degree viewing angle, images in four different directions can be captured by four cameras and spliced to obtain the corresponding panoramic image.
In the embodiment of the present application, the image capturing device may be an electronic device such as a camera, a mobile phone, and a portable computer, which is not limited in the present application.
It should be noted that each acquired image to be spliced is captured by the corresponding image acquisition device at the same moment. That is to say, video splicing is built on image splicing: after the images of each moment have been spliced, the spliced frames are sorted in time order to obtain the corresponding spliced video.
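The per-moment grouping above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: `stitch_frames` is an assumed placeholder standing in for the edge-based stitching the embodiments describe, and the frame containers are plain dictionaries keyed by camera id and timestamp.

```python
def assemble_panoramic_video(frames_by_camera, stitch_frames):
    """frames_by_camera: {camera_id: {timestamp: frame}}.
    Groups frames captured at the same moment, stitches each group,
    and returns the stitched frames in chronological order."""
    # Only timestamps present for every camera can be stitched.
    common_times = set.intersection(
        *(set(frames.keys()) for frames in frames_by_camera.values())
    )
    panorama = []
    for t in sorted(common_times):  # chronological order
        group = [frames_by_camera[cam][t] for cam in sorted(frames_by_camera)]
        panorama.append((t, stitch_frames(group)))
    return panorama

# Toy usage with strings standing in for images:
video = assemble_panoramic_video(
    {"a": {0: "A0", 1: "A1"}, "b": {0: "B0", 1: "B1"}},
    stitch_frames=lambda group: "+".join(group),
)
print(video)  # [(0, 'A0+B0'), (1, 'A1+B1')]
```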
And S102, performing edge extraction on each image to be spliced based on a preset edge extraction algorithm to obtain a corresponding edge information image.
The edge information image may include a plurality of extracted edge profiles.
According to the embodiment of the application, the edges of the acquired images to be spliced are extracted by a preset edge extraction algorithm, so that, for each image to be spliced, an image formed by the edges with sharp gray-value changes is obtained; these are the edge information images corresponding to the images to be spliced.
In this embodiment of the application, the preset edge extraction algorithm may be the Canny operator edge extraction algorithm, the Sobel operator edge extraction algorithm, or the like. The Sobel operator edge extraction algorithm works well on images with gradual gray-level changes and considerable noise, but its edge localization is not very accurate and the detected edges are more than one pixel wide. The Canny operator edge extraction algorithm, in contrast, is not easily disturbed by noise and can detect true weak edges. Therefore, the Canny operator edge extraction algorithm is better suited to the video splicing method provided in the embodiment of the application.
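As a minimal sketch of gradient-based edge extraction, the following NumPy-only code applies the Sobel kernels and thresholds the gradient magnitude; edge pixels come out as 255, matching the binarization convention used later. This is an illustration only: the embodiment prefers the Canny operator, for which `cv2.Canny(gray, low, high)` in OpenCV would be the usual library call, and the threshold value 100 is an arbitrary assumption.

```python
import numpy as np

def sobel_edges(gray, thresh=100):
    """Gradient-magnitude edge map via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(float)
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8) * 255  # edge pixels -> 255

# A vertical step edge is detected along the boundary columns:
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 200
edges = sobel_edges(img)
print(edges)
```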
Performing edge extraction on the images to be spliced reduces the interference of noise and blurred areas in those images on the splicing, thereby improving image splicing efficiency.
S103, comparing the edge information images based on the edge contour, and determining the overlapping area of the edge information images.
In order to determine the overlapping areas of the edge information images more efficiently, the edge information images can be compared according to the placement positions of the image acquisition devices and similar information. Specifically, as shown in fig. 2, S103 may include the following steps:
s201, acquiring position information of each image acquisition device.
In the embodiment of the application, the image acquisition equipment can be provided with a positioning device, and the server acquires the position information of the image acquisition equipment through the positioning device. The server may also store the position information of the image capturing device in advance, where the position information of the image capturing device stored in advance may be obtained by a user through input of a user terminal. The server may obtain the position information of the image capturing device through the two manners, and may also obtain the position information of the image capturing device through another manner capable of obtaining the position information, which is not limited in this application.
It should be noted that the position information may be the specific installation position of an image capturing device, for example expressed in longitude and latitude coordinates; it may also be the relative positions between the image capturing devices, used to represent their positional relationship. For example, if the four image capturing devices are a, b, c, and d, where a is adjacent to b, b is adjacent to c, c is adjacent to d, and d is adjacent to a, the position information of image capturing device a may record that a is adjacent to b.
S202, according to the position information, determining adjacent edge information images from the edge information image set.
In the embodiment of the application, after the edge extraction is performed on each image to be spliced, corresponding edge information images can be obtained respectively, and the obtained edge information images form an image set. If the number of the image acquisition devices is n, the number of the acquired images to be spliced is also n, and the number of the corresponding edge information images is also n. That is, the number of the edge information images is the same as the number of the images to be stitched, and the edge information images are in a one-to-one relationship.
The server can determine the position relation between the edge information images according to the acquired position information of the image acquisition equipment, so that the adjacent edge information images are determined. For example, if the image capturing devices a and B are adjacent to each other, the edge information image a corresponding to the image capturing device a and the edge information image B corresponding to the image capturing device B are adjacent edge information images.
With this scheme, the adjacent edge information images are determined from the position information of the image acquisition devices; that is, the comparison relationships between the edge information images are fixed in advance. This saves the time spent comparing edge information images to determine the overlapping areas, improves image splicing efficiency, and saves computing resources.
S203, comparing the adjacent edge information images based on the edge contour, and determining the overlapping area of the adjacent edge information images.
According to the embodiment of the application, the adjacent edge information images are compared through a plurality of edge profiles contained in the edge information images, so that the overlapping area of the adjacent edge information images is determined. As shown in fig. 3, step S203 may be specifically implemented by the following steps:
s301, obtaining the view angle information of each image acquisition device.
The visual angle information is used for representing the visual angle of the image acquisition equipment. For example, the angle of view of the image capture device is 96 degrees.
The server may store specification information of each image capturing device in advance; this specification information may include the view angle information, from which the view angle of each image capturing device is obtained.
And S302, determining the to-be-determined overlapping areas corresponding to the adjacent edge information images respectively according to the position information and the view angle information.
In the embodiment of the application, the server can determine the undetermined overlapping area of an edge information image from the position information and view angle information of the image acquisition devices. For example, suppose the image acquisition devices are a, b, c, and d, the viewing angle of each device is 95 degrees, and the formed panoramic image is a 360-degree panorama. For one adjacent pair of edge information images, image A and image B, the region to be overlapped is determined from the viewing angles and positions of the devices to be 5 degrees wide; as shown in fig. 4, A' in edge information image A and B' in edge information image B are the regions to be overlapped.
By this method, the corresponding undetermined overlapping area can be located within each edge information image, and only this undetermined area needs to be searched when determining the overlapping area. On the basis of guaranteeing video splicing precision, this further saves computing resources and improves video splicing efficiency.
It should be noted that the overlapping area may also be confirmed directly from the whole edge information image; compared with the scheme above, however, this requires more computing resources and makes video splicing more costly.
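The arithmetic behind the worked example above can be checked in two lines: with four cameras of 95-degree field of view covering a 360-degree panorama, the surplus coverage is shared equally among the adjacent pairs around the ring. The function name and the equal-sharing assumption are illustrative, not from the patent.

```python
def pending_overlap_per_pair(num_cameras, fov_degrees, panorama_degrees=360):
    """Width of the to-be-determined overlap between each adjacent camera pair,
    assuming the surplus coverage is split evenly around the ring."""
    surplus = num_cameras * fov_degrees - panorama_degrees  # total overlapped angle
    return surplus / num_cameras  # one shared border per adjacent pair

print(pending_overlap_per_pair(4, 95))  # 5.0, matching the 5-degree example in the text
```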
S303, comparing the regions to be overlapped based on the edge contour, and determining the overlapping region of the adjacent edge information images.
Specifically, as shown in fig. 5, the following steps may be performed:
s501, performing binarization processing on the adjacent edge information images to obtain corresponding images to be compared.
In the embodiment of the application, the server performs edge extraction on each image to be spliced to obtain an edge information image, which is a grayscale image containing the edge contours. The server then binarizes the edge information image: the pixel values of the edge-contour pixels can be set to 255 and the pixel values of all other pixels set to 0. As will be understood by those skilled in the art, the assignment may equally be reversed, with edge-contour pixels set to 0 and the remaining pixels set to 255.
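Step S501 can be sketched with a one-line NumPy threshold; the threshold of 0 (any nonzero gray value counts as an edge pixel) is an assumption for illustration.

```python
import numpy as np

def binarize_edges(edge_gray, thresh=0):
    """Map a grayscale edge-information image to {0, 255}:
    contour pixels -> 255, everything else -> 0 (step S501)."""
    return np.where(edge_gray > thresh, 255, 0).astype(np.uint8)

gray = np.array([[0, 12, 0], [130, 0, 255]], dtype=np.uint8)
binary = binarize_edges(gray)
print(binary)  # [[0 255 0], [255 0 255]]
```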
And S502, generating a correlation coefficient set between the regions to be overlapped in the images to be compared according to a preset rule.
The correlation coefficient set comprises at least one correlation coefficient, and the correlation coefficient is used for representing the correlation between the undetermined overlapping regions.
Specifically, as shown in fig. 6, step S502 may be implemented by the following steps:
s601, based on a preset rule, determining the contrast matrixes respectively corresponding to the regions to be determined to be overlapped.
Specifically, after the edge information image is subjected to binarization processing, pixel values corresponding to pixel points in the region to be overlapped are used as matrix values, corresponding contrast matrixes are generated according to the positions of the pixel points, and the rows and columns of the matrixes correspond to the rows and columns of the pixel points in the region to be overlapped.
It should be noted that when the edge information image has not been binarized, the corresponding contrast matrix may still be generated in the same way: the pixel value of each pixel in the undetermined overlap region is used as a matrix entry according to the pixel's position, with the rows and columns of the matrix corresponding to the rows and columns of pixels in the undetermined overlap region.
S602, calculating the correlation coefficient between the comparison matrixes according to the comparison matrixes.
Specifically, as shown in fig. 7, contrast matrix C and contrast matrix D are the contrast matrices generated for two adjacent edge information images; the matrices corresponding to the respective undetermined overlap regions E and F are compared to obtain the corresponding correlation coefficient. Concretely, the values at corresponding positions in the two matrices are multiplied and summed, giving the correlation coefficient of contrast matrices C and D, recorded as h1.
S603, according to the first preset direction and the corresponding preset threshold, the number of columns of the contrast matrices is reduced, and the correlation coefficient between the reduced contrast matrices is calculated, until the number of columns reaches zero. The correlation coefficient between the contrast matrices after the first reduction is recorded as h2, and so on until the number of columns of the contrast matrices is zero. If the contrast matrix has n columns of pixels, n correlation coefficients can be obtained.
The first preset direction may be a direction pointing away from the region to be overlapped. For example, the corresponding preset threshold may be 1: the correlation coefficient between the contrast matrices reduced by one column is calculated and recorded as h2, and so on until the number of columns of the contrast matrices is zero. Taking fig. 8 as an example:
and respectively moving the to-be-determined overlapping area of the comparison matrix G to the right and left by the displacement of one pixel point, or moving the to-be-determined overlapping area of the comparison matrix G and the comparison matrix H by the displacement of one pixel point, and calculating the correlation coefficient of the comparison matrix reduced by one column at the moment. And the column number of the comparison matrix is zero along with the displacement of the to-be-determined overlapping area moved by the five pixel points. At this time, six correlation coefficient values are obtained by calculation.
S604, forming the correlation coefficient set according to the correlation coefficients among the comparison matrixes and the correlation coefficients among the comparison matrixes with the reduced column numbers.
The correlation coefficients obtained in the scheme above form a set, e.g. {h1, h2, ..., hn}.
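Steps S602 to S604 can be sketched as follows. Two assumptions are made for illustration: the correlation coefficient is the multiply-and-sum of corresponding entries, as S602 describes, and "reducing the number of columns" is read as dropping one outermost column of each matrix per step (one plausible reading of the "first preset direction"). Toy values of 0/1 stand in for the binarized 0/255.

```python
import numpy as np

def correlation(C, D):
    """Multiply corresponding entries and sum (step S602)."""
    return int((C * D).sum())

def correlation_set(C, D):
    """Sketch of S603/S604: record the correlation at every width,
    dropping one column per step, until no columns remain."""
    assert C.shape == D.shape
    coeffs = []
    for w in range(C.shape[1], 0, -1):
        coeffs.append(correlation(C[:, :w], D[:, :w]))
    return coeffs  # {h1, h2, ..., hn}

# Binarized toy borders of two adjacent edge information images:
C = np.array([[1, 0, 1], [0, 1, 0]])
D = np.array([[1, 0, 0], [0, 1, 1]])
hs = correlation_set(C, D)
print(hs)  # widths 3, 2, 1
```

Per step S503, the width whose contrast matrices give the largest coefficient in the set then indicates the overlapping area.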
S503, determining an overlapping area between the images to be compared according to the correlation coefficient set.
And comparing the numerical values of the correlation coefficients in the correlation coefficient set according to the correlation coefficient set, and determining an overlapping area between the images to be compared according to the numerical values of the correlation coefficients.
Specifically, after the binarization processing, if the pixel values of the edge-contour pixels in the images to be compared are 255 and the pixel values of all other pixels are 0, then the contrast matrices corresponding to the largest correlation coefficient in the set identify the overlap: the region they cover is the overlapping area between the images to be compared.
In the embodiment of the application, the server compares edge contours in the to-be-determined overlapping areas in the adjacent edge information images, and determines the overlapping areas of the adjacent edge information images according to the comparison result. As shown in fig. 4, a and B are adjacent edge information images, a 'and B' are to-be-determined overlapping regions of a and B, respectively, and a 'and B' are compared to determine whether the to-be-determined overlapping regions of a and B are overlapping regions thereof.
By the scheme, the undetermined overlapping area of the edge information image is determined, so that the whole area of the edge information image can be prevented from being compared, the calculation amount of a server is reduced, and the efficiency of determining the overlapping area of the edge information image is improved.
Moreover, through the binarization processing, the pixel values of the pixels in the to-be-compared image are only 0 and 255, so that the calculation amount of the subsequent determined overlapping area can be further reduced, the calculation resource is further saved, and the working efficiency is improved.
In an actual scene, an error may occur when the image acquisition devices are placed, so the to-be-determined overlapping regions may fail to cover the entire actual overlapping region; in that case, the overlapping region determined by the above method between the images to be compared is only a candidate overlapping region. Therefore, in order to further improve the precision of video splicing, the video splicing method provided in the embodiment of the present application may further include the following steps, as shown in fig. 9:
and S901, increasing the number of columns of the comparison matrix according to a second preset direction and a corresponding preset threshold value based on the image to be compared, and calculating a correlation coefficient between the comparison matrices with the increased number of columns.
The second preset direction is opposite to the first preset direction in step S603 above; the purpose is to further determine whether the candidate region covers the full range of the overlapping region.
And S902, calculating the difference value between the correlation coefficient between the contrast matrixes with the increased column number and the correlation coefficient between the contrast matrixes.
Taking a preset threshold of 1 column as an example: after one column of pixel points is added to the contrast matrix, the value of the correlation coefficient is calculated, and a difference operation is performed between this value and the correlation coefficient of the contrast matrices corresponding to the candidate overlapping region to obtain the difference value.
And S903, determining the overlapping area of the images to be spliced according to the difference value until the difference value meets the preset condition.
Specifically, since after binarization the pixel value of a pixel on an edge contour in the image to be compared is 255 and the pixel values of pixels in other regions are 0, a negative difference value indicates that widening the contrast matrix reduces the correlation, so the candidate region is determined to be the overlapping area of the images to be stitched; a positive difference value indicates that the actual overlapping region extends beyond the candidate region, so the number of columns continues to be increased and the comparison is repeated until the difference value meets the preset condition.
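Steps S901 to S903 might be sketched as follows, under two assumptions: the strips grow from the shared border one column at a time, and the "preset condition" is taken to be a non-positive difference between successive correlation coefficients:

```python
def _corr(xs, ys):
    """Pearson correlation of two flat sequences (helper, assumed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def strip_corr(a, b, w):
    """Correlation between the rightmost w columns of `a` and the
    leftmost w columns of `b`, flattened to 1-D sequences."""
    xs = [v for row in a for v in row[-w:]]
    ys = [v for row in b for v in row[:w]]
    return _corr(xs, ys)

def refine_overlap(a, b, start_w, step=1):
    """S901-S903 sketch: widen the compared strips column by column and
    difference successive correlation coefficients, stopping at the
    first non-positive difference."""
    cols = min(len(a[0]), len(b[0]))
    w, prev = start_w, strip_corr(a, b, start_w)
    while w + step <= cols:
        cur = strip_corr(a, b, w + step)
        if cur - prev <= 0:        # negative difference: candidate confirmed
            break
        prev, w = cur, w + step    # positive difference: overlap is wider
    return w
```

Starting from an under-estimated width of 1 on images whose true shared strip is 2 columns wide, the loop widens once (positive difference) and then stops (negative difference), returning 2.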
And S104, performing image splicing according to the overlapping area of each edge information image and the image to be spliced to obtain a corresponding panoramic image.
Specifically, a panoramic image display request from a terminal device is received; the corresponding images to be spliced, stored in the DDR SDRAM in advance, are acquired according to the panoramic image display request; image splicing is then performed based on the overlapping area of each edge information image and the images to be spliced to obtain a panoramic image, which is sent to the terminal device.
In some embodiments of the present application, a video stitching method provided in an embodiment of the present application further includes: and storing the image to be spliced acquired by the image acquisition equipment to a pre-set FIFO memory so that the FIFO memory stores the image to be spliced to the DDR SDRAM.
Fig. 10 is an application scene diagram of the video stitching method according to the embodiment of the present application. As shown in fig. 10, the video stitching method provided in the embodiment of the present application may specifically be as follows in an application scenario:
In the application scene, there are four image acquisition devices (cameras); the server respectively acquires the images collected by the cameras and performs corresponding data parsing on them to obtain the images to be spliced. The images to be spliced are stored in the DDR SDRAM through the FIFO memory and the DDR SDRAM controller.
The server carries out edge extraction and image binarization processing on the images to be spliced to obtain images to be compared, carries out edge comparison on adjacent images to be compared to obtain overlapping areas of the adjacent images, and calculates parameters corresponding to the overlapping areas.
After receiving a panoramic image display request sent by the terminal device, the server acquires the images to be spliced from the DDR SDRAM, performs image splicing according to the parameters corresponding to the overlapping areas to obtain the panoramic image, and packages the panoramic image in a video format according to the specification of the display, so that the display displays the panoramic image.
Based on this scheme, edge extraction is performed on the images to be spliced to obtain edge information images, and the overlapping area is determined based on the edge contours in the edge information images; on the basis of ensuring video splicing precision, computing resources and computing cost are saved to a great extent, and working efficiency and user experience are improved.
Fig. 11 is a schematic structural diagram of a video stitching apparatus according to an embodiment of the present application, and as shown in fig. 11, the apparatus includes:
the system includes at least one processor, and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to: acquiring preset images to be spliced acquired by a plurality of image acquisition devices; and based on a preset edge extraction algorithm, performing edge extraction on the images to be spliced to obtain corresponding edge information images. Wherein, the edge information image comprises a plurality of extracted edge profiles. Comparing the edge information images based on the edge contour to determine an overlapping area of the edge information images;
and carrying out image splicing according to the overlapping area of each edge information image and the image to be spliced so as to obtain a corresponding panoramic image.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the device and media embodiments, the description is relatively simple as it is substantially similar to the method embodiments, and reference may be made to some descriptions of the method embodiments for relevant points.
The device and the medium provided by the embodiment of the application correspond to the method one to one, so the device and the medium also have the similar beneficial technical effects as the corresponding method, and the beneficial technical effects of the method are explained in detail above, so the beneficial technical effects of the device and the medium are not repeated herein.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory, random access memory (RAM) and/or non-volatile memory in a computer readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (5)

1. A method for video stitching, the method comprising:
acquiring preset images to be spliced acquired by a plurality of image acquisition devices;
based on a preset edge extraction algorithm, performing edge extraction on each image to be spliced to obtain a corresponding edge information image; wherein, the edge information image comprises a plurality of extracted edge outlines;
comparing the edge information images based on the edge contour to determine an overlapping area of the edge information images;
performing image splicing according to the overlapping area of each edge information image and the image to be spliced to obtain a corresponding panoramic image;
comparing the edge information images based on the edge profile to determine an overlapping area of the edge information images, specifically comprising:
acquiring position information of each image acquisition device;
determining adjacent edge information images from the edge information image set according to the position information; wherein the edge information image set is composed of a plurality of edge information images;
based on the edge contour, comparing the adjacent edge information images to determine an overlapping area of the adjacent edge information images;
based on the edge contour, comparing the adjacent edge information images, and determining an overlapping area of the adjacent edge information images, specifically comprising:
acquiring visual angle information of each image acquisition device;
determining to-be-determined overlapping areas corresponding to the adjacent edge information images respectively according to the position information and the visual angle information;
comparing the regions to be overlapped based on the edge contour to determine the overlapping regions of the adjacent edge information images;
based on the edge contour, comparing the regions to be overlapped, and determining the overlapping region of the adjacent edge information images, specifically comprising:
carrying out binarization processing on the adjacent edge information images to obtain corresponding images to be compared;
generating a correlation coefficient set between undetermined overlapping areas in the images to be compared according to a preset rule; the correlation coefficient set comprises at least one correlation coefficient, and the correlation coefficient is used for representing the correlation between the undetermined overlapping areas;
determining an overlapping area between the images to be compared according to the correlation coefficient set;
generating a set of correlation coefficients between undetermined overlapping regions in the images to be compared according to a preset rule, specifically comprising:
determining comparison matrixes respectively corresponding to the regions to be determined to be overlapped based on a preset rule;
calculating correlation coefficients among the comparison matrixes according to the comparison matrixes;
reducing the number of columns of the contrast matrix according to a first preset direction and a corresponding preset threshold value, and calculating a correlation coefficient between the contrast matrices with the reduced number of columns until the number of columns of the contrast matrix is zero;
forming a correlation coefficient set according to the correlation coefficients among the comparison matrixes and the correlation coefficients among the comparison matrixes with the reduced column numbers;
in the case that the determined overlapping region between the images to be compared is the to-be-compared overlapping region according to the set of correlation coefficients, the method further includes:
based on the image to be compared, increasing the number of columns of the comparison matrix according to a second preset direction and a corresponding preset threshold value, and calculating a correlation coefficient between the comparison matrices with the increased number of columns;
calculating the difference value of the correlation coefficient between the contrast matrixes with the increased column number and the correlation coefficient between the contrast matrixes;
and determining the overlapping area of the images to be spliced according to the difference value until the difference value meets the preset condition.
2. The method according to claim 1, wherein image stitching is performed according to the overlapping area of each of the edge information images and the image to be stitched to obtain a corresponding panoramic image, and specifically includes:
receiving a panoramic image display request from a terminal device;
acquiring corresponding images to be spliced according to the panoramic image display request; the images to be spliced are stored in a double-rate synchronous dynamic random access memory DDR in advance;
and performing image splicing based on the overlapping area of each edge information image and the image to be spliced to obtain a panoramic image and sending the panoramic image to the terminal equipment.
3. The method of claim 2, further comprising:
and storing the image to be spliced acquired by the image acquisition equipment to a preset FIFO memory so that the FIFO memory stores the image to be spliced to the DDR.
4. A video stitching device, characterized in that the device comprises:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring preset images to be spliced acquired by a plurality of image acquisition devices;
based on a preset edge extraction algorithm, performing edge extraction on each image to be spliced to obtain a corresponding edge information image; wherein, the edge information image comprises a plurality of extracted edge outlines;
comparing the edge information images based on the edge contour to determine an overlapping area of the edge information images;
performing image splicing according to the overlapping area of each edge information image and the image to be spliced to obtain a corresponding panoramic image;
comparing the edge information images based on the edge profile to determine an overlapping area of the edge information images, specifically comprising:
acquiring position information of each image acquisition device;
determining adjacent edge information images from the edge information image set according to the position information; wherein the edge information image set is composed of a plurality of edge information images;
based on the edge contour, comparing the adjacent edge information images to determine an overlapping area of the adjacent edge information images;
comparing the adjacent edge information images based on the edge profile to determine an overlapping area of the adjacent edge information images, specifically including:
acquiring visual angle information of each image acquisition device;
determining to-be-determined overlapping areas corresponding to the adjacent edge information images respectively according to the position information and the visual angle information;
comparing the regions to be overlapped based on the edge contour to determine the overlapping regions of the adjacent edge information images;
based on the edge contour, comparing the regions to be overlapped, and determining the overlapping region of the adjacent edge information images, specifically comprising:
carrying out binarization processing on the adjacent edge information images to obtain corresponding images to be compared;
generating a correlation coefficient set between undetermined overlapping areas in the images to be compared according to a preset rule; the correlation coefficient set comprises at least one correlation coefficient, and the correlation coefficient is used for representing the correlation between the undetermined overlapping areas;
determining an overlapping area between the images to be compared according to the correlation coefficient set;
generating a set of correlation coefficients between undetermined overlapping regions in the images to be compared according to a preset rule, specifically comprising:
determining comparison matrixes respectively corresponding to the regions to be determined to be overlapped based on a preset rule;
calculating correlation coefficients among the comparison matrixes according to the comparison matrixes;
reducing the number of columns of the contrast matrix according to a first preset direction and a corresponding preset threshold value, and calculating a correlation coefficient between the contrast matrices with the reduced number of columns until the number of columns of the contrast matrix is zero;
forming a correlation coefficient set according to the correlation coefficients among the comparison matrixes and the correlation coefficients among the comparison matrixes with the reduced column numbers;
in the case that the determined overlapping region between the images to be compared is the to-be-compared overlapping region according to the set of correlation coefficients, the method further includes:
based on the image to be compared, increasing the number of columns of the comparison matrix according to a second preset direction and a corresponding preset threshold value, and calculating a correlation coefficient between the comparison matrices with the increased number of columns;
calculating the difference value of the correlation coefficient between the contrast matrixes with the increased column number and the correlation coefficient between the contrast matrixes;
and determining the overlapping area of the images to be spliced according to the difference value until the difference value meets the preset condition.
5. A non-transitory computer storage medium for video stitching, storing computer-executable instructions, the computer-executable instructions configured to:
acquiring preset images to be spliced acquired by a plurality of image acquisition devices;
based on a preset edge extraction algorithm, performing edge extraction on each image to be spliced to obtain a corresponding edge information image; wherein, the edge information image comprises a plurality of extracted edge outlines;
comparing the edge information images based on the edge contour to determine an overlapping area of the edge information images;
performing image splicing according to the overlapping area of each edge information image and the image to be spliced to obtain a corresponding panoramic image;
comparing the edge information images based on the edge profile to determine an overlapping area of the edge information images, specifically comprising:
acquiring position information of each image acquisition device;
determining adjacent edge information images from the edge information image set according to the position information; wherein the edge information image set is composed of a plurality of edge information images;
based on the edge contour, comparing the adjacent edge information images to determine an overlapping area of the adjacent edge information images;
based on the edge contour, comparing the adjacent edge information images, and determining an overlapping area of the adjacent edge information images, specifically comprising:
acquiring visual angle information of each image acquisition device;
determining to-be-determined overlapping areas corresponding to the adjacent edge information images respectively according to the position information and the visual angle information;
comparing the regions to be overlapped based on the edge contour to determine the overlapping regions of the adjacent edge information images;
based on the edge contour, comparing the regions to be overlapped, and determining the overlapping region of the adjacent edge information images, specifically comprising:
carrying out binarization processing on the adjacent edge information images to obtain corresponding images to be compared;
generating a correlation coefficient set between undetermined overlapping areas in the images to be compared according to a preset rule; the correlation coefficient set comprises at least one correlation coefficient, and the correlation coefficient is used for representing the correlation between the undetermined overlapping areas;
determining an overlapping area between the images to be compared according to the correlation coefficient set;
generating a set of correlation coefficients between undetermined overlapping regions in the images to be compared according to a preset rule, specifically comprising:
determining comparison matrixes respectively corresponding to the regions to be determined to be overlapped based on a preset rule;
calculating correlation coefficients among the comparison matrixes according to the comparison matrixes;
reducing the number of columns of the contrast matrix according to a first preset direction and a corresponding preset threshold value, and calculating a correlation coefficient between the contrast matrices with the reduced number of columns until the number of columns of the contrast matrix is zero;
forming a correlation coefficient set according to the correlation coefficients among the comparison matrixes and the correlation coefficients among the comparison matrixes with the reduced column numbers;
in the case that the determined overlapping region between the images to be compared is the to-be-compared overlapping region according to the set of correlation coefficients, the method further includes:
based on the image to be compared, increasing the number of columns of the comparison matrix according to a second preset direction and a corresponding preset threshold value, and calculating a correlation coefficient between the comparison matrices with the increased number of columns;
calculating the difference value of the correlation coefficient between the contrast matrixes with the increased column number and the correlation coefficient between the contrast matrixes;
and determining the overlapping area of the images to be spliced according to the difference value until the difference value meets the preset condition.
CN202011195981.1A 2020-10-30 2020-10-30 Video splicing method and device, and storage medium Active CN112511767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011195981.1A CN112511767B (en) 2020-10-30 2020-10-30 Video splicing method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011195981.1A CN112511767B (en) 2020-10-30 2020-10-30 Video splicing method and device, and storage medium

Publications (2)

Publication Number Publication Date
CN112511767A CN112511767A (en) 2021-03-16
CN112511767B true CN112511767B (en) 2022-08-02

Family

ID=74954768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011195981.1A Active CN112511767B (en) 2020-10-30 2020-10-30 Video splicing method and device, and storage medium

Country Status (1)

Country Link
CN (1) CN112511767B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113094013B (en) * 2021-04-08 2021-12-31 深圳市极客智能科技有限公司 Remote transmission system, method, device, equipment and storage medium for spliced display screen
CN113111843B (en) * 2021-04-27 2023-12-29 北京赛博云睿智能科技有限公司 Remote image data acquisition method and system
CN114004744B (en) * 2021-10-15 2023-04-28 深圳市亚略特科技股份有限公司 Fingerprint splicing method and device, electronic equipment and medium
CN117173161B (en) * 2023-10-30 2024-02-23 杭州海康威视数字技术股份有限公司 Content security detection method, device, equipment and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679672A (en) * 2013-10-28 2014-03-26 华南理工大学广州学院 Panorama image splicing method based on edge vertical distance matching
CN107993197A (en) * 2017-12-28 2018-05-04 哈尔滨工业大学深圳研究生院 The joining method and system of a kind of panorama camera
KR101885728B1 (en) * 2017-05-19 2018-08-06 이화여자대학교 산학협력단 Image stitching system, method and computer readable recording medium
CN108416732A (en) * 2018-02-02 2018-08-17 重庆邮电大学 A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion
CN112541902A (en) * 2020-12-15 2021-03-23 平安科技(深圳)有限公司 Similar area searching method, similar area searching device, electronic equipment and medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09147107A (en) * 1995-11-16 1997-06-06 Futec Inc Method and device for evaluating image position
EP3611701A4 (en) * 2017-04-11 2020-12-02 Rakuten, Inc. Image processing device, image processing method, and program
CN109459119B (en) * 2018-10-17 2020-06-05 京东数字科技控股有限公司 Weight measurement method, device and computer readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679672A (en) * 2013-10-28 2014-03-26 华南理工大学广州学院 Panorama image splicing method based on edge vertical distance matching
KR101885728B1 (en) * 2017-05-19 2018-08-06 이화여자대학교 산학협력단 Image stitching system, method and computer readable recording medium
CN107993197A (en) * 2017-12-28 2018-05-04 哈尔滨工业大学深圳研究生院 The joining method and system of a kind of panorama camera
CN108416732A (en) * 2018-02-02 2018-08-17 重庆邮电大学 A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion
CN112541902A (en) * 2020-12-15 2021-03-23 平安科技(深圳)有限公司 Similar area searching method, similar area searching device, electronic equipment and medium

Also Published As

Publication number Publication date
CN112511767A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112511767B (en) Video splicing method and device, and storage medium
KR102480245B1 (en) Automated generation of panning shots
CN108921897B (en) Method and apparatus for locating card area
KR101862889B1 (en) Autofocus for stereoscopic camera
US10187546B2 (en) Method and device for correcting document image captured by image pick-up device
US20150124059A1 (en) Multi-frame image calibrator
US9402065B2 (en) Methods and apparatus for conditional display of a stereoscopic image pair
CN105791801A (en) Image Processing Apparatus, Image Pickup Apparatus, Image Processing Method
US11615548B2 (en) Method and system for distance measurement based on binocular camera, device and computer-readable storage medium
US20120105601A1 (en) Apparatus and method for creating three-dimensional panoramic image by using single camera
CN111582022A (en) Fusion method and system of mobile video and geographic scene and electronic equipment
CN104065863A (en) Image processing method and processing device
US11985421B2 (en) Device and method for predicted autofocus on an object
CN105467741A (en) Panoramic shooting method and terminal
CN104123716B (en) The detection method of picture steadiness, device and terminal
CN110223320B (en) Object detection tracking method and detection tracking device
CN114494824B (en) Target detection method, device and equipment for panoramic image and storage medium
CN116456191A (en) Image generation method, device, equipment and computer readable storage medium
CN115222602A (en) Image splicing method, device, equipment and storage medium
CN104935815A (en) Shooting method, shooting device, camera and mobile terminal
CN114095644B (en) Image correction method and computer equipment
CN113515978B (en) Data processing method, device and storage medium
CN113409375A (en) Image processing method, image processing apparatus, and non-volatile storage medium
CN113362351A (en) Image processing method and device, electronic equipment and storage medium
CN114697501B (en) Time-based monitoring camera image processing method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220701

Address after: 250101 building S02, 1036 Chaochao Road, high tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: Floor 6, Chaochao Road, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

GR01 Patent grant
GR01 Patent grant