CN114298903A - Image splicing method and device and electronic equipment

Info

Publication number
CN114298903A
Authority
CN
China
Prior art keywords
image
spliced
original
target
rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111507013.4A
Other languages
Chinese (zh)
Inventor
王方宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Vicino Electronics Co ltd
Original Assignee
Qingdao Vicino Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Vicino Electronics Co ltd
Priority to CN202111507013.4A
Publication of CN114298903A
Legal status: Pending

Abstract

The invention relates to the technical field of image processing, and in particular to an image splicing method, an image splicing device and an electronic device. The method comprises: acquiring at least two images to be spliced; splicing the at least two images to be spliced to generate an original spliced image; acquiring at least two target position points to determine a maximum inscribed rectangle corresponding to the original spliced image; and clipping the original spliced image based on the maximum inscribed rectangle to determine a target spliced image. The maximum inscribed rectangle is determined from the at least two target position points, and the original spliced image is then clipped based on the maximum inscribed rectangle, so that the image information in the original spliced image is retained to the maximum extent while the black areas that impair display are removed, achieving a maximized, complete display of the original spliced image.

Description

Image splicing method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image splicing method and device and electronic equipment.
Background
The image splicing technology is a branch of image processing and is widely applied in fields such as vehicle-mounted panoramic vision and the synthesis of multiple pictures. OpenCV is a commonly used open-source library for image processing; its algorithms are designed mainly around metrics such as reliability and generality, and they often struggle to meet the aesthetic and speed requirements of engineering applications. The image stitching algorithm in OpenCV can stitch multiple pictures shot at different angles, focal lengths and exposures into a panoramic image that retains the content of each picture to the maximum extent.
However, due to differences in camera parameters such as the acquisition angle and focal length of different image acquisition devices, the stitched image has irregular edges where scene content is missing, and these regions are filled with large black areas. This results in a poor display effect of the stitched image and affects both its appearance and its practicality.
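For illustration only, and not as part of the claimed method, the OpenCV stitching pipeline referred to above can be driven from Python roughly as follows; the image file names are placeholders.

```python
# Illustrative sketch: stitching overlapping views with OpenCV's high-level Stitcher.
# The input paths are placeholders.
import cv2

paths = ["left.jpg", "center.jpg", "right.jpg"]
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create()          # panorama mode by default
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    # The panorama keeps the content of every view, but its irregular outline
    # is padded with black pixels -- the display problem discussed above.
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed, status code:", status)
```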
Disclosure of Invention
In view of this, embodiments of the present invention provide an image stitching method, an image stitching device and an electronic device, so as to solve the problem of poor display quality of stitched images.
According to a first aspect, an embodiment of the present invention provides an image stitching method, including:
acquiring at least two images to be spliced;
splicing the at least two images to be spliced to generate an original spliced image;
acquiring at least two target position points to determine a maximum inscribed rectangle corresponding to the original spliced image;
and clipping the original spliced image based on the maximum inscribed rectangle to determine a target spliced image.
According to the image splicing method provided by the embodiment of the invention, the maximum inscribed rectangle is determined from at least two target position points, and the original spliced image is then cropped based on the maximum inscribed rectangle. In this way, the image information in the original spliced image is retained to the maximum extent, the black areas that impair the display are removed, and a maximized, complete display of the original spliced image is achieved.
With reference to the first aspect, in a first implementation manner of the first aspect, the acquiring at least two target location points to determine a maximum inscribed rectangle corresponding to the original stitched image includes:
performing image processing on the original spliced image, and determining edge feature points of the original spliced image;
sequentially extracting any two target edge feature points from the edge feature points to generate a plurality of selectable rectangles;
and determining the target position point in the target edge feature points based on the position relationship between the other edge feature points and each selectable rectangle.
According to the image splicing method provided by the embodiment of the invention, since the edge feature points belong to the original spliced image and are located at its edge, determining the maximum inscribed rectangle from the edge feature points reduces the amount of data to be processed and improves processing efficiency.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the determining the target location point in the target edge feature points based on the location relationship between the other edge feature points and each of the selectable rectangles includes:
judging whether the other edge feature points are positioned outside the selectable rectangle or not;
when the other edge feature points are positioned outside the selectable rectangle, determining that the selectable rectangle is a target rectangle and that the target edge feature points forming the selectable rectangle are selectable position points;
comparing the area size of each selectable rectangle, and determining the target position point from among the selectable position points.
According to the image splicing method provided by the embodiment of the invention, when the edge feature points are positioned outside the selectable rectangle, the pixel points which do not belong to the image to be spliced in the original spliced image are also positioned outside the selectable rectangle, and the selectable rectangle is screened by utilizing the position relation between other edge feature points and the selectable rectangle, so that the screening accuracy and efficiency can be ensured.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the comparing the area size of each of the selectable rectangles, and determining the target location point from among the selectable location points includes:
screening, from the selectable rectangles, the selectable rectangle with the largest area, and determining it as the maximum inscribed rectangle;
and determining the target position point corresponding to the maximum inscribed rectangle in the selectable position points.
With reference to the first implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the acquiring at least two target location points further includes:
judging whether the original spliced image is the first original spliced image after being powered on or not;
and when the original spliced image is the first original spliced image after being powered on, executing the step of performing image processing on the original spliced image and determining the edge feature points of the original spliced image.
According to the image splicing method provided by the embodiment of the invention, for the first original spliced image after power-on, image processing needs to be performed on it to determine the edge feature points and thereby the maximum inscribed rectangle; this avoids errors caused by changes in the position of the image acquisition devices and ensures the accuracy of the obtained target spliced image.
With reference to the fourth implementation manner of the first aspect, in the fifth implementation manner of the first aspect, the acquiring at least two target location points further includes:
and when the original spliced image is not the first original spliced image after being powered on, extracting the at least two target position points from the target storage space.
According to the image splicing method provided by the embodiment of the invention, because the probability of position change is very low in the working process of the image acquisition equipment, when the image is not the first original spliced image, the target position point is directly extracted to determine the maximum inscribed rectangle, so that the data processing amount can be reduced, and the image splicing efficiency can be improved.
With reference to the first aspect, in a sixth implementation manner of the first aspect, the stitching the at least two images to be stitched to generate an original stitched image includes:
performing feature recognition on the at least two images to be spliced;
and splicing the at least two images to be spliced based on the result of the feature identification to generate the original spliced image.
According to the image splicing method provided by the embodiment of the invention, when the images to be spliced are spliced, the characteristics of the images to be spliced are extracted, so that the images to be spliced are automatically spliced, and the image continuity of the original spliced images can be ensured.
According to a second aspect, an embodiment of the present invention provides an image stitching apparatus, including:
the first acquisition module is used for acquiring at least two images to be spliced;
the splicing module is used for splicing the at least two images to be spliced to generate an original spliced image;
the second acquisition module is used for acquiring at least two target position points so as to determine a maximum inscribed rectangle corresponding to the original spliced image;
and the cropping module is used for cropping the original spliced image based on the maximum inscribed rectangle and determining a target spliced image.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: the image stitching method includes a memory and a processor, where the memory and the processor are communicatively connected with each other, the memory stores computer instructions, and the processor executes the computer instructions to execute the image stitching method described in the first aspect or any one of the embodiments of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the image stitching method described in the first aspect or any one of the implementation manners of the first aspect.
It should be noted that, for the beneficial effects of the image stitching device, the electronic device and the computer-readable storage medium provided in the embodiment of the present invention, please refer to the corresponding description of the image stitching method above, which is not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of a stitched image obtained by stitching according to an existing image stitching method;
FIG. 2 is a flow chart of an image stitching method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image stitching method according to an embodiment of the present invention;
FIG. 4 is a flow chart of an image stitching method according to an embodiment of the present invention;
FIG. 5 is a flow chart of an image stitching method according to an embodiment of the present invention;
FIG. 6 is a block diagram of an image stitching apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The image stitching method provided by the embodiment of the invention can be applied to live-action navigation, live-action positioning and the like in a vehicle-mounted scene, and can also be applied to other fields; the application field is not limited here and can be set according to actual requirements. When the method is applied to a vehicle-mounted scene, at least two image acquisition devices are installed on a vehicle to acquire images from different angles, and the at least two acquired images to be spliced are then stitched to obtain an original spliced image. A maximum inscribed rectangle is determined in the original spliced image, and the original spliced image is cropped so that the stitched image is displayed completely and as large as possible.
Further, when the position of the image acquisition device is fixed, the maximum inscribed rectangle in each original stitched image is fixed, and therefore, after the maximum inscribed rectangle is determined, the clipping of the target stitched image can be performed directly by using the maximum inscribed rectangle. When the position of the image acquisition equipment is changed, the position of the maximum inscribed rectangle in the original spliced image needs to be determined again.
Taking vehicle-mounted live-action surround view as an example, each image acquisition device acquires an image to be spliced and sends it to the electronic device, and the electronic device stitches the images to be spliced to obtain an original spliced image. A maximum inscribed rectangle is then determined in the original spliced image, and the original spliced image is cropped according to the maximum inscribed rectangle to obtain a target spliced image. The target spliced image can be displayed on a display screen of the vehicle to provide a live-action panoramic view.
In accordance with an embodiment of the present invention, there is provided an image stitching method embodiment, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
In this embodiment, an image stitching method is provided, which can be used in the above-mentioned electronic devices, such as a computer, a tablet computer, and the like, and fig. 2 is a flowchart of the image stitching method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
and S11, acquiring at least two images to be spliced.
The images to be spliced are images acquired by different image acquisition devices at the same moment. Each image acquisition device acquires its own image, may mark the acquired original image with a timestamp to obtain the image to be spliced, and sends the image to be spliced to the electronic device; accordingly, the electronic device can determine, according to the timestamps, at least two images to be spliced that correspond to the same moment.
Alternatively, each image acquisition device acquires its image to be spliced and sends it to the electronic device, and the electronic device takes the images to be spliced received at the same moment as the at least two images to be spliced.
Alternatively, the image capturing device may also be integrated on an electronic device, and the electronic device has at least two image capturing devices for capturing at least two images to be stitched.
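As a minimal illustration of the timestamp-based matching described above (a sketch under the assumption that each frame arrives as a (camera_id, timestamp, image) record; these names are not from the patent):

```python
# Sketch: group frames from several cameras so that frames sharing a timestamp
# form one set of images to be stitched. Field names are illustrative assumptions.
from collections import defaultdict

def group_by_timestamp(frames, num_cameras):
    """frames: iterable of (camera_id, timestamp, image) records."""
    buckets = defaultdict(dict)
    for camera_id, timestamp, image in frames:
        buckets[timestamp][camera_id] = image
    # Keep only the timestamps for which every camera contributed a frame.
    return [list(views.values()) for views in buckets.values()
            if len(views) == num_cameras]
```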
And S12, splicing at least two images to be spliced to generate an original spliced image.
After the electronic equipment acquires at least two images to be spliced, splicing the images to be spliced according to the source of each image to be spliced. For example, splicing is performed according to the position information of the image acquisition equipment corresponding to the image to be spliced; or, a view layout is preset, and each image to be stitched is filled in the view layout according to a preset rule to generate an original stitched image.
Details about this step will be described later.
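As a simple illustration of the preset view-layout option mentioned above (a sketch only; the layout positions and sizes are assumed values, not ones given by the patent):

```python
# Sketch: place each captured view into a preset layout on a single canvas.
import cv2
import numpy as np

def fill_layout(images, layout, canvas_size):
    """images: dict camera_name -> image; layout: dict camera_name -> (x, y, w, h);
    canvas_size: (width, height) of the original stitched image."""
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for name, (x, y, w, h) in layout.items():
        canvas[y:y + h, x:x + w] = cv2.resize(images[name], (w, h))
    return canvas
```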
And S13, acquiring at least two target position points to determine the maximum inscribed rectangle corresponding to the original spliced image.
The at least two target position points are used to determine the maximum inscribed rectangle corresponding to the original stitched image. The number of target position points may be 2, or more, and can be set according to actual requirements, as long as at least 2 target position points are obtained.
The maximum inscribed rectangle may be determined by determining an edge line in the original stitched image, arbitrarily selecting at least two pixel points near the edge line, and determining the maximum inscribed rectangle using these pixel points as selectable position points; or by determining edge feature points near the edge, forming selectable rectangles from the edge feature points, and screening the selectable rectangles to determine the maximum inscribed rectangle.
Details about this step will be described later.
And S14, clipping the original spliced image based on the maximum inscribed rectangle, and determining a target spliced image.
And the electronic equipment determines a target area of the maximum inscribed rectangle in the original spliced image, and cuts the target area out of the original spliced image to obtain the target spliced image.
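For illustration (a sketch, not the patent's implementation), once the two diagonal target position points are known, the crop in S14 amounts to slicing the image array:

```python
# Sketch: crop the original stitched image to the maximum inscribed rectangle
# defined by two diagonal target position points p1 = (x1, y1) and p2 = (x2, y2).
def crop_to_rectangle(stitched, p1, p2):
    x1, x2 = sorted((p1[0], p2[0]))
    y1, y2 = sorted((p1[1], p2[1]))
    return stitched[y1:y2 + 1, x1:x2 + 1]  # NumPy/OpenCV images index as [row, column]
```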
According to the image splicing method provided by this embodiment, the maximum inscribed rectangle is determined from at least two target position points, and the original spliced image is then cropped based on the maximum inscribed rectangle. In this way, the image information in the original spliced image is retained to the maximum extent, the black areas that impair the display are removed, and a maximized, complete display of the original spliced image is achieved.
In this embodiment, an image stitching method is provided, which can be used in the above-mentioned electronic device, such as a computer, a tablet computer, and the like, and fig. 3 is a flowchart of the image stitching method according to the embodiment of the present invention, as shown in fig. 3, the flowchart includes the following steps:
and S21, acquiring at least two images to be spliced.
Please refer to S11 in fig. 2 for details, which are not described herein.
And S22, splicing at least two images to be spliced to generate an original spliced image.
Please refer to S12 in fig. 2 for details, which are not described herein.
And S23, acquiring at least two target position points to determine the maximum inscribed rectangle corresponding to the original spliced image.
Specifically, S23 includes:
and S231, performing image processing on the original spliced image, and determining edge feature points of the original spliced image.
It should be noted that an edge feature point is not necessarily a pixel point on the edge line itself, but a feature point in the vicinity of the edge line; it may lie on the edge line or near it. Examples include feature points of a lamp near the edge line, feature points of an air conditioner near the edge line, and the like.
Regarding the extraction of the edge feature points, the electronic device performs grayscale processing on the original spliced image. As shown in fig. 1, since the areas that do not belong to the images to be spliced are filled with black, adjacent pixels in the edge area exhibit an abrupt change in value, and the edge feature points can be extracted on this basis.
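One possible realization of this step is sketched below; the black-threshold value and the use of the outer contour of the non-black region as the source of edge feature points are assumptions made for illustration, not details fixed by the patent.

```python
# Sketch: extract candidate edge feature points of the valid (non-black) region
# of the original stitched image.
import cv2

def edge_feature_points(stitched, black_threshold=5, max_points=200):
    gray = cv2.cvtColor(stitched, cv2.COLOR_BGR2GRAY)
    # Pixels brighter than the threshold are treated as stitched content;
    # the black filler around it stays zero in the mask.
    _, mask = cv2.threshold(gray, black_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)
    # Subsample so that the later pairwise search over points stays cheap.
    step = max(1, len(boundary) // max_points)
    return boundary[::step]
```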
S232, sequentially extracting any two target edge feature points from the edge feature points to generate a plurality of selectable rectangles.
From the extracted edge feature points, the electronic device extracts two target edge feature points at a time and generates a selectable rectangle by taking the two target edge feature points as its diagonal corner points. In this way, the electronic device generates a plurality of selectable rectangles from every pairwise combination of the edge feature points.
And S233, determining a target position point in the target edge feature points based on the position relationship between the other edge feature points and each optional rectangle.
For the maximum inscribed rectangle, it must be ensured that all remaining edge feature points lie outside the rectangle formed by the target edge feature points. Based on this, the electronic device can screen a plurality of target rectangles out of the plurality of selectable rectangles, and then select the target rectangle with the largest area as the maximum inscribed rectangle. Accordingly, the target edge feature points forming the maximum inscribed rectangle are the target position points.
It should be noted that, after determining a selectable rectangle, the electronic device may screen it immediately to determine whether the edge feature points forming it are the target position points; alternatively, all the selectable rectangles may be determined first and then screened in turn, so as to finally determine the target position points.
In some optional implementations of this embodiment, the S233 may include:
(1) judging whether the other edge feature points are located outside the selectable rectangle.
(2) When the other edge feature points are located outside the selectable rectangle, determining that the selectable rectangle is a target rectangle and that the target edge feature points forming the selectable rectangle are selectable position points.
(3) Comparing the area sizes of the selectable rectangles, and determining the target position point from among the selectable position points.
After a selectable rectangle is formed, the electronic device compares the positions of the other edge feature points with the selectable rectangle to determine whether they are all outside it. When all the other edge feature points are outside the selectable rectangle, the selectable rectangle is a target rectangle. Further, the areas of the target rectangles are compared, and the target rectangle with the largest area is determined as the maximum inscribed rectangle; accordingly, the target edge feature points forming the maximum inscribed rectangle are the target position points.
Optionally, the step (3) may include:
3.1) screening, from the selectable rectangles, the selectable rectangle with the largest area, and determining it as the maximum inscribed rectangle.
3.2) determining, among the selectable position points, the target position points corresponding to the maximum inscribed rectangle.
The electronic device compares the areas of the qualifying selectable rectangles in turn, for example by sorting them by area, and determines the selectable rectangle with the largest area as the maximum inscribed rectangle. Accordingly, the selectable position points forming the maximum inscribed rectangle are determined as the target position points.
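The pairwise search of S232/S233 can be sketched as the brute-force routine below. This is an illustrative sketch: treating points that fall exactly on the rectangle boundary as "outside" is an assumption, as is skipping rectangles that cannot beat the current best area.

```python
# Sketch of S232/S233: take every pair of edge feature points as diagonal corners,
# keep only rectangles with all remaining points outside, and return the largest one.
from itertools import combinations

def max_inscribed_rectangle(points):
    """points: iterable of (x, y) edge feature points. Returns the two diagonal
    target position points of the largest qualifying rectangle, or None."""
    points = [tuple(p) for p in points]
    best_area, best_pair = 0, None
    for (x1, y1), (x2, y2) in combinations(points, 2):
        left, right = sorted((x1, x2))
        top, bottom = sorted((y1, y2))
        area = (right - left) * (bottom - top)
        if area <= best_area:
            continue  # cannot improve on the current best, skip the inside test
        # The rectangle qualifies only if no other edge feature point lies strictly inside it.
        inside = any(left < x < right and top < y < bottom
                     for x, y in points if (x, y) not in ((x1, y1), (x2, y2)))
        if not inside:
            best_area, best_pair = area, ((left, top), (right, bottom))
    return best_pair
```

With the earlier sketches, `max_inscribed_rectangle(edge_feature_points(stitched))` would yield the two target position points used for cropping.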
When the edge feature points are located outside the selectable rectangle, pixel points which do not belong to the image to be spliced in the original spliced image are also located outside the selectable rectangle, the selectable rectangle is screened by utilizing the position relation between other edge feature points and the selectable rectangle, and the screening accuracy and efficiency can be ensured.
Some optional implementations of this embodiment are based on the premise that the image acquisition devices remain fixed in position during operation. On this basis, S23 may further include:
(1) judging whether the original spliced image is the first original spliced image after power-on.
When the original spliced image is the first original spliced image after being powered on, S231 is executed; otherwise, step (2) is executed.
(2) The at least two target location points are extracted from the target storage space.
When the original stitched image is the first original stitched image after being powered on, the target position points need to be determined anew, that is, in the manner described in S231 to S233 above; otherwise, the target position points can be extracted directly from the target storage space, and the maximum inscribed rectangle can be determined directly, so as to determine the target stitched image.
For the first original spliced image after being powered on, image processing needs to be carried out on the first original spliced image to determine edge feature points, and then the maximum inscribed rectangle is determined, so that errors caused by position changes of image acquisition equipment can be avoided, and the accuracy of the obtained target spliced image is ensured. Meanwhile, in the working process of the image acquisition equipment, the probability of position change is low, so that when the image is not the first original spliced image, the target position point is directly extracted to determine the maximum inscribed rectangle, the data processing amount can be reduced, and the image splicing efficiency is improved.
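The power-on caching behaviour described above might be sketched as follows; the in-memory dictionary standing in for the "target storage space", and the helper names from the earlier sketches, are assumptions.

```python
# Sketch: compute the target position points only for the first stitched frame
# after power-on, then reuse them from storage for every later frame.
_storage = {}

def get_target_points(stitched):
    if "target_points" not in _storage:   # first original stitched image after power-on
        points = edge_feature_points(stitched)            # from the earlier sketch
        _storage["target_points"] = max_inscribed_rectangle(points)
    return _storage["target_points"]
```

Each subsequent frame is then cropped directly, e.g. `crop_to_rectangle(stitched, *get_target_points(stitched))`.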
And S24, clipping the original spliced image based on the maximum inscribed rectangle, and determining a target spliced image.
Please refer to S14 in fig. 2 for details, which are not described herein.
According to the image stitching method provided by this embodiment, since the edge feature points belong to the original stitched image and are located at its edge, determining the maximum inscribed rectangle from the edge feature points reduces the amount of data to be processed and improves processing efficiency.
As a specific application example of the software program implementation of this embodiment, as shown in fig. 4, the image stitching method includes:
S1, acquiring the images to be spliced, completing the splicing, and obtaining an original spliced image;
S2, judging whether the original spliced image is obtained for the first time; if so, executing S3, otherwise executing S8;
S3, extracting the n edge feature points of the original spliced image;
S4, i++, selecting the i-th edge feature point and judging whether i < n; when i < n, executing S5, otherwise executing S8;
S5, j++, selecting the j-th edge feature point and judging whether j < n; when j < n, executing S6, otherwise returning to S4. That is, the electronic device takes one edge feature point and pairs it in turn with each of the other edge feature points until all edge feature points have been traversed.
S6, judging whether the other edge feature points are outside the rectangle formed by the selected pair; when they are all outside the rectangle, executing S7, otherwise executing S5.
S7, retaining the rectangle if its area is the largest so far, storing its edge feature points, and returning to S5;
S8, cropping the original spliced image according to the stored feature points corresponding to the maximum inscribed rectangle, and determining the target spliced image.
Specifically, on the premise that the positions of the front, rear, left and right cameras of the vehicle are fixed, edge feature points of the stitched image are extracted, and two feature points are selected at a time to form a rectangle. The pair of feature points that forms the rectangle with the largest area while all other feature points lie outside that rectangle is then found; the rectangle defined by these feature points serves as the region of maximized, complete display after stitching, and subsequent real-time stitched images are cropped based on this region, so that the stitched image is displayed completely and as large as possible. Because the positions of the vehicle-mounted cameras are fixed, this algorithm only needs to be executed once; the two feature points obtained by the calculation are used directly in subsequent stitching, after which the stitched image is cropped. The image splicing method can therefore display the stitched image stably, completely and to the maximum extent; it only needs to be executed once and does not affect the speed of subsequent stitching.
In this embodiment, an image stitching method is provided, which can be used in the above-mentioned electronic devices, such as a computer, a tablet computer, and the like, fig. 5 is a flowchart of the image stitching method according to the embodiment of the present invention, as shown in fig. 5, the flowchart includes the following steps:
and S31, acquiring at least two images to be spliced.
Please refer to S11 in fig. 2 for details, which are not described herein.
And S32, splicing at least two images to be spliced to generate an original spliced image.
Specifically, S32 includes:
s321, performing feature recognition on at least two images to be spliced.
After the electronic device acquires the at least two images to be spliced, it performs feature recognition on them, extracting feature points, identifying focal lengths, and the like; the recognition result is used as the basis for the subsequent stitching.
And S322, splicing at least two images to be spliced based on the result of the feature identification to generate an original spliced image.
The electronic device then automatically completes the stitching using the identified features. The stitching can be realized, for example, based on OpenCV.
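For illustration, the feature-recognition and stitching of S321/S322 for a pair of views could look roughly like the sketch below; the detector choice (ORB), match count and RANSAC threshold are assumptions, and in practice the high-level `cv2.Stitcher` shown earlier wraps equivalent steps.

```python
# Sketch: detect and match features between two views, estimate a homography,
# and warp the right view into the left view's frame.
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))  # black filler appears here
    canvas[0:h, 0:w] = img_left                              # overlay the reference view
    return canvas
```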
And S33, acquiring at least two target position points to determine the maximum inscribed rectangle corresponding to the original spliced image.
Please refer to S23 in fig. 3 for details, which are not described herein.
And S34, clipping the original spliced image based on the maximum inscribed rectangle, and determining a target spliced image.
Please refer to S14 in fig. 2 for details, which are not described herein.
According to the image splicing method provided by the embodiment, the automatic splicing of the images to be spliced is realized by extracting the features of the images to be spliced, and the image continuity of the original spliced images can be ensured.
In this embodiment, an image stitching apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations, and what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides an image stitching apparatus, as shown in fig. 6, including:
a first obtaining module 41, configured to obtain at least two images to be stitched;
the splicing module 42 is configured to splice the at least two images to be spliced to generate an original spliced image;
a second obtaining module 43, configured to obtain at least two target position points, so as to determine a maximum inscribed rectangle corresponding to the original stitched image;
and the cropping module 44 is configured to crop the original stitched image based on the maximum inscribed rectangle, and determine a target stitched image.
The image stitching device in this embodiment is presented as a functional unit, where the unit refers to an ASIC circuit, a processor and memory executing one or more software or fixed programs, and/or other devices that may provide the above-described functionality.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which includes the image stitching apparatus shown in fig. 6.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 7, the electronic device may include: at least one processor 51, such as a CPU (Central Processing Unit), at least one communication interface 53, a memory 54, and at least one communication bus 52. The communication bus 52 is used to enable connection and communication among these components. The communication interface 53 may include a display and a keyboard, and optionally may also include a standard wired interface and a standard wireless interface. The memory 54 may be a high-speed volatile random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory 54 may also be at least one storage device located remotely from the processor 51. The processor 51 may be connected to the apparatus described in fig. 6; the memory 54 stores an application program, and the processor 51 calls the program code stored in the memory 54 to perform any of the above-mentioned method steps.
The communication bus 52 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 52 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
The memory 54 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 54 may also comprise a combination of the above types of memory.
The processor 51 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 51 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
Optionally, the memory 54 is also used to store program instructions. The processor 51 may call program instructions to implement the image stitching method as shown in any of the embodiments of the present application.
The embodiment of the invention also provides a non-transitory computer storage medium, wherein the computer storage medium stores computer-executable instructions that can execute the image stitching method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above types of memory.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. An image stitching method, comprising:
acquiring at least two images to be spliced;
splicing at least two images to be spliced to generate an original spliced image;
acquiring at least two target position points to determine a maximum inscribed rectangle corresponding to the original spliced image;
and clipping the original spliced image based on the maximum inscribed rectangle to determine a target spliced image.
2. The method of claim 1, wherein the obtaining at least two target location points to determine a maximum inscribed rectangle corresponding to the original stitched image comprises:
performing image processing on the original spliced image, and determining edge feature points of the original spliced image;
sequentially extracting any two target edge feature points from the edge feature points to generate a plurality of selectable rectangles;
and determining the target position point in the target edge feature points based on the position relationship between the other edge feature points and each selectable rectangle.
3. The method according to claim 2, wherein determining the target position point among the target edge feature points based on the position relationship between the other edge feature points and each selectable rectangle comprises:
judging whether the other edge feature points are positioned outside the selectable rectangle or not;
when the other edge feature points are positioned outside the selectable rectangle, determining that the selectable rectangle is a target rectangle and that the target edge feature points forming the selectable rectangle are selectable position points;
comparing the area size of each selectable rectangle, and determining the target position point from among the selectable position points.
4. The method of claim 3, wherein said comparing the area size of each of said selectable rectangles to determine said target location point among said selectable location points comprises:
screening, from the selectable rectangles, the selectable rectangle with the largest area, and determining it as the maximum inscribed rectangle;
and determining the target position point corresponding to the maximum inscribed rectangle in the selectable position points.
5. The method of claim 2, wherein said obtaining at least two target location points further comprises:
judging whether the original spliced image is the first original spliced image after being powered on or not;
and when the original spliced image is the first original spliced image after being powered on, executing the step of performing image processing on the original spliced image and determining the edge feature points of the original spliced image.
6. The method of claim 5, wherein said obtaining at least two target location points further comprises:
and when the original spliced image is not the first original spliced image after being powered on, extracting the at least two target position points from the target storage space.
7. The method according to claim 1, wherein the stitching the at least two images to be stitched to generate an original stitched image comprises:
performing feature recognition on the at least two images to be spliced;
and splicing the at least two images to be spliced based on the result of the feature identification to generate the original spliced image.
8. An image stitching device, comprising:
the first acquisition module is used for acquiring at least two images to be spliced;
the splicing module is used for splicing the at least two images to be spliced to generate an original spliced image;
the second acquisition module is used for acquiring at least two target position points so as to determine a maximum inscribed rectangle corresponding to the original spliced image;
and the cropping module is used for cropping the original spliced image based on the maximum inscribed rectangle and determining a target spliced image.
9. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the image stitching method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the image stitching method according to any one of claims 1 to 7.
CN202111507013.4A 2021-12-10 2021-12-10 Image splicing method and device and electronic equipment Pending CN114298903A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111507013.4A CN114298903A (en) 2021-12-10 2021-12-10 Image splicing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114298903A (en) 2022-04-08

Family

ID=80967845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111507013.4A Pending CN114298903A (en) 2021-12-10 2021-12-10 Image splicing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114298903A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination