CN114677280A - Method, apparatus, device and program product for generating panoramic image - Google Patents

Method, apparatus, device and program product for generating panoramic image

Info

Publication number
CN114677280A
CN114677280A (application CN202210375846.8A)
Authority
CN
China
Prior art keywords
image
mapping
panoramic
local
pixel
Prior art date
Legal status
Pending
Application number
CN202210375846.8A
Other languages
Chinese (zh)
Inventor
刘威
潘慈辉
Current Assignee
Beike Technology Co Ltd
Original Assignee
Beike Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beike Technology Co Ltd
Priority to CN202210375846.8A
Publication of CN114677280A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T3/4007: Interpolation-based scaling, e.g. bilinear interpolation
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing

Abstract

Embodiments of the present disclosure provide a method, an apparatus, a device and a computer program product for generating a panoramic image. The method includes: acquiring a local image to be stitched and position mapping information corresponding to the local image, where the position mapping information represents the mapping relation between the positions of panoramic pixel points and the positions of local pixel points, the panoramic pixel points being pixel points in the panoramic image to be generated and the local pixel points being pixel points in the local image; mapping the panoramic pixel points to the local image based on the position mapping information to obtain the mapping positions of the panoramic pixel points in the local image; determining a target pixel value at each mapping position in the local image; and taking the target pixel value as the pixel value of the corresponding panoramic pixel point in the panoramic image, thereby obtaining the panoramic image. This decouples the mapping transformation of pixel positions from the interpolation of pixel values during panoramic image generation, reducing the sharpness loss caused by pixel-value interpolation and thereby improving the sharpness of the panoramic image.

Description

Method, apparatus, device and program product for generating panoramic image
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method, an apparatus, an electronic device, and a computer program product for generating a panoramic image.
Background
In the field of image processing, a panoramic image is usually formed by stitching a plurality of local images. For example, to produce a panoramic image for a VR (Virtual Reality) device, a fisheye camera may shoot a plurality of fisheye images at different viewing angles from a single camera position, and the fisheye images are then stitched into the panoramic image.
In the process of stitching the local images into the panoramic image, the local images need to undergo a panoramic mapping transformation. The panoramic mapping transformation represents the set of mapping processes used to stitch the local images into the panoramic image; for example, a fisheye image may be subjected in sequence to distortion removal, sphere projection, vector propagation, triangular transformation, stitching, and the like.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for generating a panoramic image, an electronic device and a computer program product.
In one aspect of the disclosed embodiments, a method for generating a panoramic image is provided, including: acquiring a local image to be stitched and position mapping information corresponding to the local image, where the position mapping information represents the mapping relation between the positions of panoramic pixel points and the positions of local pixel points, the panoramic pixel points being pixel points in the panoramic image to be generated and the local pixel points being pixel points in the local image; mapping the panoramic pixel points to the local image based on the position mapping information to obtain the mapping positions of the panoramic pixel points in the local image; determining a target pixel value at each mapping position in the local image; and taking the target pixel value as the pixel value of the corresponding panoramic pixel point in the panoramic image, thereby obtaining the panoramic image.
In some embodiments, the location mapping information is generated by: based on a preset panoramic image splicing strategy, carrying out panoramic mapping transformation on the local image, wherein the panoramic mapping transformation represents a set of mapping transformation related to panoramic image splicing; extracting position transformation information respectively corresponding to each mapping transformation processing in the panoramic mapping transformation; and carrying out merging interpolation processing on each position transformation information to obtain position mapping information.
In some embodiments, the position mapping information is a three-channel position mapping image; the positions of the pixel points in the position mapping image correspond one-to-one to the positions of the panoramic pixel points in the panoramic image, and the pixel value of each pixel point in the position mapping image includes the number of the local image containing the mapping position corresponding to the panoramic pixel point and the coordinates of that mapping position.
In some embodiments, the method further comprises: determining the product of the coordinates of the mapping position in the position mapping image and a preset coefficient, and taking the product as the encoded coordinates, where the value of the preset coefficient is 10^n and n is a positive integer; replacing the coordinates of the mapping position in the position mapping image with the encoded coordinates to obtain an encoded position mapping image; and storing the encoded position mapping image, together with the correspondence between the encoded position mapping image and the local image, at a preset storage address.
In some embodiments, obtaining the position mapping information corresponding to the local image includes: acquiring the encoded position mapping image from the storage address based on the local image; determining the product of the encoded coordinates and the reciprocal of the preset coefficient, and replacing the encoded coordinates with the product to obtain a decoded position mapping image; and determining the decoded position mapping image as the position mapping information corresponding to the local image.
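As an illustration of this encode/decode round trip, the following sketch scales the coordinates by 10^n before integer storage and multiplies by the reciprocal to recover them (the choice n = 3 and all names are assumptions for illustration, not taken from the patent):

```python
import numpy as np

N = 3                 # hypothetical choice of n; the preset coefficient is 10**n
COEFF = 10 ** N

def encode_coords(coord_map):
    # Scale the fractional mapping coordinates by 10**n and round, so that
    # n decimal places survive storage in an integer-valued image.
    return np.round(coord_map * COEFF).astype(np.int64)

def decode_coords(encoded):
    # Multiply by the reciprocal of the preset coefficient to recover the
    # floating-point mapping coordinates.
    return encoded.astype(np.float64) * (1.0 / COEFF)

coords = np.array([[1.25, 6.5], [0.125, 3.0]])
restored = decode_coords(encode_coords(coords))
```

With n = 3, three decimal places of the mapping coordinates survive the integer round trip; a larger n preserves more precision at the cost of a wider integer range.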
In some embodiments, before determining the decoded position mapping image as the position mapping information corresponding to the local image, the method further comprises: when the predetermined resolution of the panoramic image differs from the resolution of the decoded position mapping image, resampling the decoded position mapping image to adjust its resolution to that of the panoramic image.
In some embodiments, determining the target pixel value of the mapped location in the local image comprises: and when the coordinates of the mapping position are floating point numbers, performing interpolation processing on the mapping position based on the pixel values in the neighborhood of the mapping position to obtain a target pixel value of the mapping position.
In still another aspect of the disclosed embodiments, there is provided an apparatus for generating a panoramic image, including: an acquisition unit configured to acquire local images to be stitched and position mapping information corresponding to the local images, where the position mapping information represents the mapping relation between the positions of panoramic pixel points and the positions of local pixel points, the panoramic pixel points being pixel points in the panoramic image to be generated and the local pixel points being pixel points in the local images; a mapping unit configured to map the panoramic pixel points to the local images based on the position mapping information to obtain the mapping positions of the panoramic pixel points in the local images; a determination unit configured to determine a target pixel value of the mapping position in the local image; and a generating unit configured to determine the target pixel value as the pixel value of the panoramic pixel point in the panoramic image, so as to obtain the panoramic image.
In another aspect of the disclosed embodiments, an electronic device is provided, including: a memory for storing a computer program product; a processor for executing the computer program product stored in the memory, and the computer program product, when executed, implements the method for generating a panoramic image in any of the above embodiments.
In yet another aspect of the disclosed embodiments, there is provided a computer program product comprising computer program instructions which, when executed by a processor, implement the method of generating a panoramic image of any of the above embodiments.
According to the method for generating a panoramic image provided by the embodiments of the present disclosure, the mapping position in the local image corresponding to a panoramic pixel point in the panoramic image can be determined from the position mapping relation, and the target pixel value at that mapping position in the local image is then used as the pixel value of the panoramic pixel point in the panoramic image. This decouples the mapping transformation of pixel positions from the interpolation of pixel values during panoramic image generation, reduces the sharpness loss caused by pixel-value interpolation, and improves the sharpness of the panoramic image.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a scene schematic diagram of an embodiment of a method of generating a panoramic image according to the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating one embodiment of a method of generating a panoramic image according to the present disclosure;
FIG. 3 is a schematic flow chart diagram illustrating the generation of location mapping information in one embodiment of the disclosed method of generating a panoramic image;
FIG. 4 is a schematic illustration of a position mapped image and a panoramic image in one embodiment of generating a panoramic image of the present disclosure;
FIG. 5 is a schematic flow chart illustrating storage of a location-mapped image in an embodiment of a method of generating a panoramic image according to the present disclosure;
FIG. 6 is a schematic diagram of an encoded position-mapped image in an embodiment of a method of generating a panoramic image according to the present disclosure;
FIG. 7 is a schematic flow chart illustrating the process of obtaining location mapping information according to an embodiment of the method of generating a panoramic image according to the present disclosure;
FIG. 8 is a schematic structural diagram illustrating an embodiment of an apparatus for generating a panoramic image according to the present disclosure;
fig. 9 is a schematic structural diagram of an embodiment of an application of the electronic device of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning or any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more than two, and "at least one" may refer to one, two or more than two.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the disclosure may be implemented in electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the disclosure
In the related art, in the process of generating a panoramic image, a panoramic mapping transformation needs to be performed on the local images to stitch a plurality of local images into the panoramic image. Taking a fisheye image as an example, the fisheye panoramic mapping process generally includes the following types of mapping transformation: distortion removal, sphere projection, vector propagation, triangular transformation, stitching, and the like. First, the fisheye image is taken as the input image and undergoes distortion removal to obtain a first output image; sphere projection is then performed on the first output image to obtain a second output image; vector propagation is performed on the second output image to obtain a third output image; triangular transformation is then applied to the third output image, and finally the transformed local images are stitched into the panoramic image.
In the process of implementing the present disclosure, the inventors found that each mapping transformation in this pipeline performs interpolation on the pixel values of the image, so that the sharpness of the transformed image is gradually reduced, resulting in a stitched panoramic image of lower sharpness.
The method for generating a panoramic image according to the present disclosure is exemplarily described below with reference to fig. 1, where fig. 1 shows a scene schematic diagram of the method for generating a panoramic image according to the present disclosure, and in the scene shown in fig. 1, an execution subject (not shown in the figure) may be a terminal device or may be a server. The execution subject may acquire the partial image 110 from the camera and then extract the position mapping information 120 corresponding to the partial image 110 from the local storage space. Then, the mapping position 111 of the panoramic pixel 131 in the panoramic image 130 to be generated in the local image 110 is determined through the position mapping information 120, so as to obtain a target pixel value of the mapping position 111 in the local image 110, and then the target pixel value is determined as the pixel value of the panoramic pixel 131. The steps are repeated to determine the pixel values of all the panoramic pixel points, and the final panoramic image 140 can be obtained.
The method for generating a panoramic image described above decouples the mapping transformation of pixel positions from the interpolation of pixel values during panoramic image generation, and can reduce the sharpness loss caused by pixel-value interpolation, thereby improving the sharpness of the panoramic image.
Exemplary method
The method of generating a panoramic image of the present disclosure is described below with reference to fig. 2, and fig. 2 shows a flowchart of an embodiment of the method of generating a panoramic image of the present disclosure, which includes the following steps, as shown in fig. 2:
and step 210, obtaining the local images to be spliced and the position mapping information corresponding to the local images.
The position mapping information represents the mapping relation between the positions of the panoramic pixel points and the positions of the local pixel points, the panoramic pixel points are pixel points in a panoramic image to be generated, and the local pixel points are pixel points in the local image.
In this embodiment, the local image represents an original image used for stitching into a panoramic image, and may be a plurality of images taken by a camera at different viewing angles of the same point.
As an example, the position mapping information may take the form of a two-dimensional matrix. The positions of the elements in the matrix correspond one-to-one to the positions of the panoramic pixel points in the panoramic image, and the value of each element represents the position of a local pixel point in the local image. Specifically, let A(i,j) denote the element in row i, column j of the matrix, and B(i,j) denote the pixel point in row i, column j of the panoramic image. If the value of A(i,j) is (a, b), it represents the b-th local pixel point in the a-th row of the local image; the matrix thus establishes a mapping relation between panoramic pixel point B(i,j) and the b-th local pixel point in the a-th row of the local image.
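The matrix lookup described above can be sketched as follows (the array contents and names are illustrative only, not taken from the patent):

```python
import numpy as np

# Hypothetical 2x2 panorama; position_map[i, j] holds the (row, column) of
# the local pixel point that panoramic pixel point B(i, j) maps to.
position_map = np.array([[[0, 0], [0, 1]],
                         [[1, 0], [1, 1]]])

def lookup(position_map, i, j):
    # Read element A(i, j) to obtain the mapped local-pixel position (a, b).
    a, b = position_map[i, j]
    return int(a), int(b)
```

Here `position_map[i, j]` plays the role of element A(i,j), and the returned pair (a, b) is the position of the mapped local pixel point.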
And step 220, mapping the panoramic pixel points to the local images based on the position mapping information to obtain the mapping positions of the panoramic pixel points in the local images.
In a specific example, the execution subject may be a terminal device on which the two-dimensional matrix of the example in step 210 is prestored. The execution subject determines the mapping position of each panoramic pixel point in the local image by traversing the value of each element in the matrix; for example, from element A(i,j) it can determine that the coordinates of the mapping position of panoramic pixel point B(i,j) in the local image are (a, b).
Step 230, determining the target pixel value of the mapping position in the local image.
As an example, when the coordinates of the mapping position are integers, the pixel value of the local pixel point corresponding to the coordinates may be directly determined as the target pixel value; when the coordinates of the mapping position are floating point numbers, the execution subject may round down or round up, determine a local pixel point corresponding to the mapping position, and then determine a pixel value of the local pixel point as a target pixel value. For example, the coordinates of the mapping position are (0.8, 0.2), and the coordinates of the local pixel point obtained by rounding up are (1, 1); the local pixel coordinates obtained by rounding down are then (0, 0).
In some optional implementations of this embodiment, the target pixel value of the mapping position in the partial image may be determined by: and when the coordinates of the mapping position are floating point numbers, performing interpolation processing on the mapping position based on the pixel values in the neighborhood of the mapping position to obtain a target pixel value of the mapping position.
As an example, the coordinates of the mapping position are (0.8, 0.2), and the coordinates of four local pixel points in the neighborhood are: (0, 0), (0, 1), (1, 1) and (1, 0), the execution subject may first extract pixel values of the four local pixel points, then perform weighted average on the four pixel values, and determine the obtained result as a target pixel value.
In a preferred aspect of the foregoing example, the weight of the pixel value of the local pixel point may be determined according to the distance between the local pixel point and the mapping position, for example, the smaller the distance, the greater the weight.
In this embodiment, the mapping position is interpolated based on the pixel values in its neighborhood to determine the target pixel value, which improves the consistency between the target pixel value and the pixel values of the neighboring local pixel points, improves the accuracy of the target pixel value, and thus improves the sharpness of the panoramic image.
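A minimal sketch of this neighborhood interpolation, using the standard distance-weighted (bilinear) scheme in which closer neighbors receive larger weights (a 2-D grayscale image is assumed, and the mapping position is assumed to lie strictly inside it):

```python
import numpy as np

def bilinear_sample(img, y, x):
    # Distance-weighted interpolation over the four neighbors of (y, x):
    # the closer a neighbor is to the mapping position, the larger its weight.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0]
            + wy * wx * img[y1, x1])

# Mapping position (0.8, 0.2) with neighbors (0,0), (0,1), (1,0), (1,1),
# as in the example above; pixel values here are illustrative.
img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
target = bilinear_sample(img, 0.8, 0.2)  # weighted average of 10, 20, 30, 40
```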
And 240, determining the target pixel value as the pixel value of the panoramic pixel point in the panoramic image to obtain the panoramic image.
In this embodiment, since the position mapping information covers the positions of all the panoramic pixel points in the panoramic image, the panoramic image can be obtained simply by determining the pixel value of each panoramic pixel point.
In the method for generating a panoramic image described above, the mapping position in the local image corresponding to a panoramic pixel point is determined from the position mapping relation, and the target pixel value at that mapping position in the local image is then used as the pixel value of the panoramic pixel point in the panoramic image. This decouples the mapping transformation of pixel positions from the interpolation of pixel values during panoramic image generation, reduces the sharpness loss caused by pixel-value interpolation, and improves the sharpness of the panoramic image.
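Putting steps 210 to 240 together, the per-pixel assembly loop can be sketched as follows (the data layout and names are assumptions; coordinates are rounded here for brevity, whereas the embodiment above interpolates fractional positions):

```python
import numpy as np

def assemble_panorama(local_images, index_map, coord_map):
    # Fill each panoramic pixel point from its mapped position in a local image.
    # index_map[i, j] is the number of the local image to read;
    # coord_map[i, j] is the (row, column) mapping position to read.
    h, w = index_map.shape
    pano = np.zeros((h, w), dtype=local_images[0].dtype)
    for i in range(h):
        for j in range(w):
            src = local_images[index_map[i, j]]
            a, b = np.round(coord_map[i, j]).astype(int)
            pano[i, j] = src[a, b]
    return pano
```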
Referring next to fig. 3, fig. 3 shows a flowchart of generating location mapping information in an embodiment of the method of generating a panoramic image of the present disclosure, as shown in fig. 3, the flowchart includes the following steps:
and 310, carrying out panoramic mapping transformation on the local images based on a preset panoramic image splicing strategy.
The panoramic mapping transformation represents the set of mapping transformations involved in stitching the panoramic image, and may include, for example, the following types of mapping transformation: distortion removal, sphere projection, vector propagation, triangular transformation, stitching, and the like.
In a specific example, the execution subject may implement the panoramic mapping transformation on the local image through image processing software (for example, OpenCV). The execution subject may first perform distortion removal on the local image according to the camera parameters and distortion coefficients to eliminate the distortion introduced by the lens; then, on the basis of the undistorted local image, convert the pixel coordinate system of the local image into a spherical coordinate system through sphere projection; and then project the local image into the panoramic image coordinate system through a vector propagation algorithm and triangular transformation. The local image obtained at this point already satisfies the stitching condition, so the execution subject only needs to stitch the local images into a panoramic image (the panoramic image produced here only serves the "stitching" mapping transformation and is not the panoramic image to be generated in this embodiment).
It should be noted that, the above example is to illustrate the process of the panorama mapping processing of the local image, and in practice, the type and the order of the mapping transformations involved in the process may be adjusted according to the requirement, which is not limited by the present disclosure.
In this embodiment, the panoramic image stitching policy represents a constraint condition for stitching the local images into the panoramic image, and may include, for example, a pixel size of the panoramic image, a resolution of the panoramic image, a size of a coincidence area between adjacent local images, and the like.
Step 320, extracting the position transformation information corresponding to each mapping transformation process in the panorama mapping transformation.
In this embodiment, the position transformation information may represent a position transformation policy that local pixel points in the local image follow in the mapping transformation process, for example, a mapping function shown in the following formula (1) may be adopted.
(x′, y′) = P(x, y)    (1)
where (x′, y′) denotes the pixel coordinates after mapping, (x, y) denotes the coordinates before mapping, and P denotes a position transformation matrix.
And step 330, carrying out merging interpolation processing on each position transformation information to obtain position mapping information.
In this embodiment, the execution subject may reversely derive the mapping position of the panorama pixel in the partial image by merging interpolation processing according to the order of mapping transformation involved in step 320 to obtain the position mapping information.
As an example, suppose that in step 320 the execution subject performs four types of mapping transformation on the local image, namely distortion removal, sphere projection, triangular transformation, and stitching, with corresponding position transformation information warp_1, warp_2, warp_3, and warp_4. Each piece of position transformation information represents the correspondence between pixel coordinates after and before that mapping transformation; for example, warp_1 represents the correspondence between the coordinates of a local pixel point after distortion removal and its original coordinates in the local image, and warp_4 represents the correspondence between the coordinates of a local pixel point in the panoramic image coordinate system (i.e., the coordinates of a panoramic pixel point) and its coordinates before stitching (i.e., after triangular transformation). The execution subject can first determine the coordinates of the local pixel point after triangular transformation from its coordinates in the panoramic coordinate system and warp_4; then determine the coordinates after sphere projection from the coordinates after triangular transformation and warp_3, thereby obtaining the mapping relation between the coordinates in the panoramic coordinate system and the coordinates after sphere projection. Repeating this step finally yields the correspondence between the coordinates of a local pixel point in the panoramic coordinate system (i.e., the coordinates of the panoramic pixel point in the panoramic image) and its coordinates in the local image.
In the process of merging the position transformation information, the coordinates before each mapping transformation are all integer coordinates (i.e., each coordinate corresponds to one pixel point). Therefore, when a coordinate derived in reverse from the position transformation information is a floating-point number, it can be corrected to an integer by interpolation, so as to determine the correspondence of pixel points before and after the mapping transformation. The interpolation strategy may be rounding the coordinate value down, rounding it up, or taking the nearest integer coordinate. For example, if the panoramic pixel point (4, 5) is determined through warp_4 to map to the coordinates (1.3, 6.5) after triangular transformation, the coordinates (1.3, 6.5) can be corrected to (1, 6) by rounding down, yielding the correspondence between the panoramic pixel point (4, 5) and the local pixel point (1, 6) in the stitching process.
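As a rough illustration, the reverse derivation with a rounding strategy might be sketched as follows. The dictionaries warp_4 and warp_3 and the helper merge_warps are hypothetical simplifications of the stored position transformation information, not the actual implementation; each dictionary maps a post-transformation integer coordinate to the (possibly floating-point) coordinate it came from.

```python
import math

def merge_warps(warp_later, warp_earlier, strategy="floor"):
    """Compose two reverse mappings, correcting float coordinates to integers."""
    def to_int(coord):
        if strategy == "floor":
            return tuple(math.floor(v) for v in coord)   # round down
        if strategy == "ceil":
            return tuple(math.ceil(v) for v in coord)    # round up
        return tuple(round(v) for v in coord)            # nearest integer

    merged = {}
    for dst, mid in warp_later.items():
        mid_int = to_int(mid)            # interpolation step: float -> integer
        if mid_int in warp_earlier:
            merged[dst] = warp_earlier[mid_int]
    return merged

# The panoramic pixel (4, 5) maps through warp_4 to (1.3, 6.5); flooring
# gives the local pixel (1, 6), matching the example in the text.
warp_4 = {(4, 5): (1.3, 6.5)}
warp_3 = {(1, 6): (10.2, 3.7)}
merged = merge_warps(warp_4, warp_3)
```

Repeating this composition over warp_2 and warp_1 would, in the same way, yield the direct correspondence from panoramic coordinates to original local-image coordinates.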
The process shown in fig. 3 embodies the step of generating the position mapping information: by extracting the position transformation information corresponding to each mapping transformation process in the panoramic mapping transformation and performing merging interpolation on the pieces of position transformation information, the mapping relationship between the positions of the panoramic pixel points and the positions of the local pixel points can be determined, thereby decoupling the position transformation from the pixel value interpolation in the panoramic image stitching process.
In some embodiments of the present disclosure, the position mapping information may take the form of image data; in this case, the position mapping information is a three-channel position mapping image. The positions of the pixel points in the position mapping image correspond one-to-one to the positions of the panoramic pixel points in the panoramic image, and the pixel value of each pixel point in the position mapping image comprises the number of the local image in which the mapping position corresponding to the panoramic pixel point is located and the coordinates of that mapping position.
Fig. 4 shows a position mapping image and a partial region of a panoramic image in an embodiment of the present disclosure. In the example shown in fig. 4, fig. 4(a) is a local image, fig. 4(b) is the corresponding region of the local image of fig. 4(a) in the panoramic image, and fig. 4(c) is the corresponding region of the local image of fig. 4(a) in the position mapping image.
In this embodiment, the three channels of the position mapping image may respectively represent the number of the partial image where the mapping position corresponding to the panorama pixel point is located, a first coordinate value (e.g., abscissa) of the mapping position, and a second coordinate value (e.g., ordinate) of the mapping position.
As an example, if the pixel value of the pixel point F at row c, column d in the position mapping image is [1, 20, 30], then the mapping position of the corresponding panoramic pixel point G (at row c, column d in the panoramic image) lies in local image number 1, at the coordinates (20, 30) in that local image.
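The example above can be expressed as a small sketch; the function name unpack_mapping_pixel is a hypothetical name chosen for illustration:

```python
def unpack_mapping_pixel(value):
    """Split a three-channel pixel value of the position mapping image
    into the local image number and the mapping coordinates."""
    image_number, x, y = value
    return image_number, (x, y)

# The pixel value [1, 20, 30] means: local image number 1, mapping
# position (20, 30) in that local image.
num, coord = unpack_mapping_pixel([1, 20, 30])
```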
In this embodiment, the mapping relationship between the positions of the pixel points in the panoramic image and the local image is represented in the form of image data, which can facilitate the storage and reading of position mapping information.
Referring next to fig. 5, fig. 5 illustrates a flow chart of storing a location-mapped image in an embodiment of a method of generating a panoramic image of the present disclosure, as illustrated in fig. 5, the flow chart including the steps of:
step 510, determining a product of the coordinates of the mapping position in the position mapping image and a preset coefficient, and determining the product as encoding coordinates.
The value of the preset coefficient is 10^n, where n is a positive integer.
For example, when the coordinates of the mapping position are (1.23, 3.33) and n is 2, the encoding coordinates are (123, 333).
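A minimal sketch of this encoding step follows. Rounding the product to the nearest integer is an assumption made here (so the result can be stored in an integer channel); the text itself only specifies the multiplication.

```python
def encode_coords(coords, n=2):
    """Encode mapping-position coordinates by the preset coefficient 10**n."""
    factor = 10 ** n
    # Rounding to the nearest integer is an assumption, made so the
    # encoded coordinates can be stored in an integer image type.
    return tuple(round(v * factor) for v in coords)

encoded = encode_coords((1.23, 3.33), n=2)  # -> (123, 333)
```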
In this embodiment, the preset coefficient represents the magnification of the coordinate value of the mapping position, and may be set empirically or according to the storage requirement of the position mapping image.
In some optional embodiments of this embodiment, the value of n may be determined according to the size of the local image and the storage type of the position mapping image, so as to take storage precision and storage space into consideration.
And step 520, replacing the coordinates of the mapping position in the position mapping image with the encoding coordinates to obtain an encoded position mapping image.
Step 530, storing the corresponding relation between the encoded position mapping image and the local image and the encoded position mapping image to a preset storage address.
As an example, the encoded position mapping image may be stored on a hard disk or in the cloud, and the correspondence between the encoded position mapping image and the local image may be used as an index value for retrieving the encoded position mapping image from the storage address. In this way, the image can be read repeatedly, improving the utilization rate of the encoded position mapping image.
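As a sketch of the storage and retrieval described above, a plain dictionary stands in for the hard disk or cloud, and a local image identifier (the key "local_001" here is hypothetical) plays the role of the index value:

```python
# In-memory stand-in for the preset storage address (hard disk or cloud).
storage = {}

def store_encoded_map(local_image_id, encoded_map):
    """Store an encoded position mapping image under its local-image index."""
    storage[local_image_id] = encoded_map

def fetch_encoded_map(local_image_id):
    """Retrieve the encoded position mapping image by its local-image index."""
    return storage[local_image_id]

store_encoded_map("local_001", [[(1, 123, 333)]])
fetched = fetch_encoded_map("local_001")
```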
In practice, the coordinate values of the mapping positions in the position mapping image will usually include floating-point numbers, and storing the position mapping image directly would reduce its accuracy because of the rounding applied when floating-point numbers are stored. To address this problem, in the present embodiment the coordinates of the mapping positions in the position mapping image are replaced with the encoded coordinates, which reduces the number of digits after the decimal point in the coordinate values and thus reduces the precision loss of the position mapping image during storage.
For the encoded position mapping image, reference may be made to fig. 6, which shows a schematic diagram of the encoded position mapping image in an embodiment of the method for generating a panoramic image of the present disclosure. In fig. 6, the encoded position mapping image covers all the local images constituting the panoramic image, and each area between two adjacent boundary lines corresponds to one local image; for example, the areas 610, 620, and 630 correspond to three local images, respectively. The positions of the pixel points in the encoded position mapping image correspond one-to-one to the positions of the panoramic pixel points in the panoramic image, and the pixel values of the pixel points in the encoded position mapping image are the encoded coordinates of the local pixel points in the local images.
With continuing reference to fig. 7 on the basis of fig. 5, fig. 7 shows a flowchart for obtaining location mapping information in an embodiment of the method for generating a panoramic image of the present disclosure, as shown in fig. 7, the flowchart includes the following steps:
step 710, based on the local image, obtaining the encoded position mapping image from the storage address.
In this embodiment, the execution subject may retrieve the encoded position mapping image corresponding to the local image from the storage address, using the local image as an index.
And 720, determining the product of the encoding coordinate and the reciprocal of the preset coefficient, and replacing the encoding coordinate with the product to obtain the decoded position mapping image.
As an example, if the encoding coordinates are (123, 333) and the value of n in the preset coefficient is 2, the corresponding coordinates in the decoded position mapping image are (1.23, 3.33).
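The decoding step might be sketched as follows; note that multiplying by the reciprocal reintroduces a tiny floating-point error, so exact equality with the original coordinates should not be assumed:

```python
def decode_coords(encoded, n=2):
    """Decode coordinates by the reciprocal of the preset coefficient 10**n."""
    inv = 1 / (10 ** n)
    return tuple(v * inv for v in encoded)

# (123, 333) with n = 2 decodes to approximately (1.23, 3.33).
decoded = decode_coords((123, 333), n=2)
```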
Step 730, the decoded position mapping image is determined as the position mapping information corresponding to the local image.
As can be seen from fig. 7, the process shown in fig. 7 embodies the steps of obtaining the pre-stored encoded position mapping image and obtaining the position mapping information through decoding. When the local images need to be repeatedly stitched into a panoramic image, only the reading and decoding need to be repeated; the computation step of regenerating the position mapping information is omitted, which improves the generation efficiency of the panoramic image.
In some optional implementations of this embodiment, before the decoded position mapping image is determined as the position mapping information corresponding to the local image, the method further includes: when the predetermined resolution of the panoramic image differs from the resolution of the decoded position mapping image, sampling the decoded position mapping image and adjusting its resolution to the resolution of the panoramic image.
In a specific example, assuming that the resolution of the predetermined panoramic image is 800 × 1600 and the resolution of the decoded position mapping image is 400 × 800, the execution subject may upsample the decoded position mapping image, increase its resolution to 800 × 1600, and determine the pixel values of the new pixel points generated by sampling by means of interpolation.
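A toy sketch of the resolution adjustment, using nearest-neighbour sampling on nested lists as a hypothetical stand-in for the interpolation of a real position mapping image:

```python
def upsample_2x(image):
    """Double the resolution of an image (list of rows of pixel values)
    by nearest-neighbour sampling: repeat each column, then each row."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(2)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                     # repeat each row
    return out

small = [[(1, 20, 30)]]        # a 1x1 decoded position map
big = upsample_2x(small)       # a 2x2 map with the same pixel value
```

A real implementation would more likely use an image library's resize routine with an interpolation mode; the point here is only that new pixel values are filled in from the existing ones.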
In the present embodiment, the decoded position mapping image is adjusted according to the resolution of the panoramic image, which ensures consistency between the position mapping information and the panoramic image.
Exemplary devices
Referring now to fig. 8, fig. 8 illustrates a schematic structural diagram of an embodiment of an apparatus for generating a panoramic image according to the present disclosure. As shown in fig. 8, the apparatus includes: the obtaining unit 810 is configured to obtain a local image to be spliced and position mapping information corresponding to the local image, wherein the position mapping information represents a mapping relation between positions of panoramic pixel points and positions of the local pixel points, the panoramic pixel points are pixel points in the panoramic image to be generated, and the local pixel points are pixel points in the local image; a mapping unit 820 configured to map the panoramic pixel to the local image based on the position mapping information, so as to obtain a mapping position of the panoramic pixel in the local image; a determining unit 830 configured to determine a target pixel value of the mapped location in the partial image; a generating unit 840 configured to determine the target pixel value as a pixel value of the panoramic pixel point in the panoramic image, resulting in the panoramic image.
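The cooperation of the four units might be sketched as follows, with nested lists as hypothetical stand-ins for real images and an integer-coordinate position map (the interpolation of floating-point mapping positions, handled by the determining unit, is omitted for brevity):

```python
def generate_panorama(position_map, local_images):
    """Fill each panoramic pixel from its mapping position in a local image."""
    height = len(position_map)
    width = len(position_map[0])
    panorama = [[None] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            number, x, y = position_map[r][c]     # mapping unit 820
            target = local_images[number][x][y]   # determining unit 830
            panorama[r][c] = target               # generating unit 840
    return panorama

# A 1x1 panorama whose single pixel maps to position (0, 1) of local image 1.
local_images = {1: [[9, 8], [7, 6]]}
position_map = [[(1, 0, 1)]]
panorama = generate_panorama(position_map, local_images)  # -> [[8]]
```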
In one embodiment, the location mapping information is generated by: based on a preset panoramic image splicing strategy, carrying out panoramic mapping transformation on the local image, wherein the panoramic mapping transformation represents a set of mapping transformation related to panoramic image splicing; extracting position transformation information respectively corresponding to each mapping transformation processing in the panoramic mapping transformation; and carrying out merging interpolation processing on each position transformation information to obtain position mapping information.
In one embodiment, the position mapping information is a three-channel position mapping image; the positions of the pixel points in the position mapping image correspond to the positions of the panoramic pixel points in the panoramic image one by one, and the pixel values of the pixel points in the position mapping image comprise the number of the local image where the mapping position corresponding to the panoramic pixel point is located and the coordinates of the mapping position.
In one embodiment, the apparatus further comprises: an encoding unit configured to determine the product of the coordinates of a mapping position in the position mapping image and a preset coefficient, the value of the preset coefficient being 10^n (n being a positive integer), and determine the product as encoding coordinates; a replacement unit configured to replace the coordinates of the mapping position in the position mapping image with the encoding coordinates to obtain an encoded position mapping image; and a storage unit configured to store the correspondence between the encoded position mapping image and the local image, together with the encoded position mapping image, at a preset storage address.
In one embodiment, the obtaining unit 810 further includes: an acquisition module configured to acquire an encoded position mapping image from a storage address based on the partial image; a decoding module configured to determine a product of the encoded coordinates and a reciprocal of a preset coefficient and replace the encoded coordinates with the product, resulting in a decoded position mapping image; and the determining module is configured to determine the decoded position mapping image as the position mapping information corresponding to the local image.
In one embodiment, the obtaining unit 810 further includes: an adjustment module configured to: when the resolution of the predetermined panoramic image is different from the resolution of the decoded position mapping image, sampling the decoded position mapping image, and adjusting the resolution of the decoded position mapping image to the resolution of the panoramic image.
In one embodiment, the determining unit 830 further includes an interpolation module configured to, when the coordinates of the mapping position are floating point numbers, interpolate the mapping position based on pixel values in the neighborhood of the mapping position to obtain a target pixel value of the mapping position.
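A minimal sketch of such neighbourhood interpolation — bilinear over the four surrounding integer pixels, on a single-channel image represented as nested lists (the function name and data layout are illustrative assumptions):

```python
import math

def bilinear(image, x, y):
    """Interpolate a pixel value at the floating-point position (x, y)
    from the four neighbouring integer pixels (clamped at the borders)."""
    x0, y0 = math.floor(x), math.floor(y)
    x1 = min(x0 + 1, len(image) - 1)
    y1 = min(y0 + 1, len(image[0]) - 1)
    fx, fy = x - x0, y - y0
    top = image[x0][y0] * (1 - fy) + image[x0][y1] * fy
    bottom = image[x1][y0] * (1 - fy) + image[x1][y1] * fy
    return top * (1 - fx) + bottom * fx

img = [[0.0, 10.0], [20.0, 30.0]]
value = bilinear(img, 0.5, 0.5)  # -> 15.0, the average of the four pixels
```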
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 9. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which stand-alone device may communicate with the first device and the second device to receive the acquired input signals therefrom.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 9, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may store one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program products may be stored on the computer-readable storage medium and executed by a processor to implement the methods of generating panoramic images of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: an input device and an output device, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, and the like to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 9, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of generating a panoramic image according to various embodiments of the present disclosure described in the above section of this specification.
The computer program product may include program code for carrying out operations of embodiments of the present disclosure, written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of generating a panoramic image according to various embodiments of the present disclosure described in the above section of the present specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the apparatus, devices, and methods of the present disclosure, various components or steps may be broken down and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method of generating a panoramic image, comprising:
acquiring a local image to be spliced and position mapping information corresponding to the local image, wherein the position mapping information represents the mapping relation between the positions of panoramic pixel points and the positions of the local pixel points, the panoramic pixel points are pixel points in the panoramic image to be generated, and the local pixel points are pixel points in the local image;
mapping the panoramic pixel point to the local image based on the position mapping information to obtain a mapping position of the panoramic pixel point in the local image;
determining a target pixel value of the mapping position in the local image;
and determining the target pixel value as the pixel value of the panoramic pixel point in the panoramic image to obtain the panoramic image.
2. The method of claim 1, wherein the location mapping information is generated by:
based on a preset panoramic image splicing strategy, carrying out panoramic mapping transformation on the local image, wherein the panoramic mapping transformation represents a set of mapping transformation related to panoramic image splicing;
extracting position transformation information respectively corresponding to each mapping transformation processing in the panoramic mapping transformation;
and carrying out merging interpolation processing on the position transformation information to obtain the position mapping information.
3. The method according to claim 1 or 2, wherein the position mapping information is a position mapping image of three channels;
the positions of the pixel points in the position mapping image correspond to the positions of the panoramic pixel points in the panoramic image one by one, and the pixel values of the pixel points in the position mapping image comprise the number of the local image where the mapping position corresponding to the panoramic pixel point is located and the coordinates of the mapping position.
4. The method of claim 3, further comprising:
determining the product of the coordinates of the mapping position in the position mapping image and a preset coefficient, and determining the product as encoding coordinates, wherein the value of the preset coefficient is 10^n, and n is a positive integer;
replacing the coordinates of the mapping position in the position mapping image with the coding coordinates to obtain a coded position mapping image;
and storing the corresponding relation between the coded position mapping image and the local image and the coded position mapping image to a preset storage address.
5. The method according to claim 4, wherein obtaining the position mapping information corresponding to the partial image comprises:
based on the local image, acquiring the coded position mapping image from the storage address;
determining a product of the encoding coordinate and the reciprocal of the preset coefficient, and replacing the encoding coordinate with the product to obtain a decoded position mapping image;
and determining the decoded position mapping image as the position mapping information corresponding to the local image.
6. The method of claim 5, wherein before determining the decoded position-mapped image as the position-mapped information corresponding to the local image, the method further comprises:
when the predetermined resolution of the panoramic image is different from the resolution of the decoded position mapping image, sampling the decoded position mapping image, and adjusting the resolution of the decoded position mapping image to the resolution of the panoramic image.
7. The method of one of claims 1 to 6, wherein determining a target pixel value of the mapped location in the local image comprises:
and when the coordinates of the mapping position are floating point numbers, performing interpolation processing on the mapping position based on pixel values in the neighborhood of the mapping position to obtain a target pixel value of the mapping position.
8. An apparatus for generating a panoramic image, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is configured to acquire local images to be spliced and position mapping information corresponding to the local images, the position mapping information represents the mapping relation between the positions of panoramic pixel points and the positions of the local pixel points, the panoramic pixel points are pixel points in the panoramic images to be generated, and the local pixel points are pixel points in the local images;
a mapping unit configured to map the panoramic pixel to the local image based on the position mapping information, so as to obtain a mapping position of the panoramic pixel in the local image;
a determination unit configured to determine a target pixel value of the mapping position in the partial image;
a generating unit configured to determine the target pixel value as a pixel value of the panoramic pixel point in the panoramic image, so as to obtain the panoramic image.
9. An electronic device, comprising:
a memory for storing a computer program product;
a processor for executing the computer program product stored in the memory, and when executed, implementing the method of any of the preceding claims 1-7.
10. A computer program product comprising computer program instructions, characterized in that the computer program instructions, when executed by a processor, implement the method of any of the preceding claims 1-7.
CN202210375846.8A 2022-04-11 2022-04-11 Method, apparatus, device and program product for generating panoramic image Pending CN114677280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210375846.8A CN114677280A (en) 2022-04-11 2022-04-11 Method, apparatus, device and program product for generating panoramic image


Publications (1)

Publication Number Publication Date
CN114677280A true CN114677280A (en) 2022-06-28

Family

ID=82079061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210375846.8A Pending CN114677280A (en) 2022-04-11 2022-04-11 Method, apparatus, device and program product for generating panoramic image

Country Status (1)

Country Link
CN (1) CN114677280A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861050A (en) * 2022-08-29 2023-03-28 如你所视(北京)科技有限公司 Method, apparatus, device and storage medium for generating panoramic image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635551A (en) * 2014-10-29 2016-06-01 浙江大华技术股份有限公司 Method of dome camera for generating panoramic image, and dome camera
CN106412669A (en) * 2016-09-13 2017-02-15 微鲸科技有限公司 Method and device for rendering panoramic video
US20200074593A1 (en) * 2017-03-01 2020-03-05 Peking University Shenzhen Graduate School Panoramic image mapping method, apparatus, and device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220628