CN114627000A - Image processing method and device and electronic equipment

Info

Publication number: CN114627000A
Authority: CN (China)
Prior art keywords: image, viewpoint, panoramic, under, view
Legal status: Pending
Application number: CN202210109463.6A
Other languages: Chinese (zh)
Inventors: 李政 (Li Zheng), 陈刚 (Chen Gang)
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority: CN202210109463.6A
Publication: CN114627000A
Related international application: PCT/CN2022/138573 (WO2023142732A1)

Classifications

    • G06T5/73
    • G06F18/00 Pattern recognition
        • G06F18/20 Analysing
            • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T5/00 Image enhancement or restoration
        • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
        • G06T2207/10 Image acquisition modality
            • G06T2207/10004 Still image; Photographic image

Abstract

An embodiment of this application provides an image processing method, an image processing apparatus, and an electronic device. In the method, the electronic device stores the image blocks corresponding to the panoramic image at each viewpoint, together with a mapping relationship between the features of the image at each view angle under each viewpoint and the identifiers of those image blocks. The view-angle images and the image blocks are both obtained from the panoramic image at each viewpoint, and the panoramic image at each viewpoint is a high-definition image.

Description

Image processing method and device and electronic equipment
Technical Field
Embodiments of this application relate to the field of image processing, and in particular to an image processing method and device and an electronic device.
Background
With the development of terminals, many terminals now support photographing at high magnification, such as 30x or 50x. However, because of structural constraints (a mobile phone, for example, needs a thin body), an image shot by a terminal at high magnification has low definition and blurred details.
To improve the definition of images that a terminal shoots at high magnification, a large number of high-definition images can be stored in advance, covering different shooting positions and different shooting angles at the same shooting position. After the terminal shoots at high magnification, the pre-stored high-definition image most similar to the shot image can be retrieved and displayed by the terminal. This approach requires storing a large number of high-definition images, so its storage overhead is high.
Disclosure of Invention
Embodiments of this application provide an image processing method and device and an electronic device, which can reduce storage overhead.
In a first aspect, the method may be executed by an electronic device or a chip in the electronic device; the following description takes an electronic device in the cloud as an example. The electronic device stores the image blocks corresponding to the panoramic image at each viewpoint, and a mapping relationship between the features of the image at each view angle under each viewpoint and the identifiers of those image blocks. The images at each view angle and the image blocks are both obtained from the panoramic image at each viewpoint. The panoramic image at each viewpoint is a high-definition image, obtained from images shot by a device capable of capturing high-definition images, such as a single-lens reflex camera.
In the method, the electronic device may acquire a first image to be processed, extract its features, and then acquire the similarity between the features of the image at each view angle under each viewpoint and the features of the first image; the view-angle image with the maximum similarity is the one whose features most closely match the first image. The electronic device may then determine, from the features of that view-angle image and the mapping relationship, the target identifier to which those features map.
Because the image blocks corresponding to the panoramic image at each viewpoint are obtained from that panoramic image, once the target identifier is determined, a second image can be obtained from the image block corresponding to the target identifier, and the definition of the second image is higher than that of the first image. In other words, the second image is obtained from an image block of the panoramic image to which the block corresponding to the target identifier belongs.
The definition of the panoramic image at each viewpoint is greater than or equal to a preset definition, i.e., higher than the definition of the first image, and the image blocks are obtained from that panoramic image, so the definition of the second image obtained from the image block corresponding to the target identifier is also greater than or equal to the preset definition. With this method the terminal can thus obtain a high-definition second image, and because the electronic device stores only the image blocks and the mapping relationship between the features of the view-angle images and the block identifiers, the storage overhead is lower than in the prior-art approach of storing high-definition images for every view angle under every viewpoint.
In one possible implementation, to reduce the amount of similarity computation, the electronic device may also acquire, together with the first image, the position (i.e., the viewpoint) of the device that captured it. The electronic device can then determine the target viewpoints within a preset range of that position and acquire the similarity between the features of the image at each view angle under those target viewpoints and the features of the first image. Only these similarities need to be computed, rather than the similarity for every view angle under every viewpoint, which reduces the computation.
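As a rough illustration of this filtering step, the Python sketch below keeps only the viewpoints within the preset range. The function name, the data shapes, and the plain Euclidean distance are assumptions; the application does not prescribe a distance metric or coordinate system.

```python
import math

def nearby_viewpoints(capture_pos, viewpoints, preset_range):
    """Return the ids of viewpoints within preset_range of the capture position.

    viewpoints: dict mapping a viewpoint id to its (x, y) position (assumed).
    Only these target viewpoints then take part in the similarity search.
    """
    px, py = capture_pos
    return [vid for vid, (x, y) in viewpoints.items()
            if math.hypot(x - px, y - py) <= preset_range]
```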
In a possible implementation, the mapping relationship includes a first index relationship and a second index relationship. The second index relationship maps the features of the image at each view angle under each viewpoint to the center point of that image; the first index relationship maps the center point of the image at each view angle under each viewpoint to the identifier of the corresponding image block of the panoramic image. After obtaining the features of the view-angle image with the maximum similarity, the electronic device may determine, from those features and the second index relationship, the center point they map to, and then determine the target identifier from that center point and the first index relationship.
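To make the two-level lookup concrete, here is a minimal Python sketch. All names and data shapes are assumptions, and cosine similarity is only one possible choice; the application does not fix the similarity measure or the storage layout.

```python
import numpy as np

def retrieve_block_id(query_feat, second_index, first_index):
    """Two-level lookup: features -> center point -> image-block identifier.

    second_index: list of (feature_vector, center_point) pairs, one per
                  view-angle image (the feature -> center-point relation).
    first_index:  dict mapping a center point (x, y) to the identifier of
                  the panorama image block it falls in.
    """
    def cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Step 1: find the view-angle image with the maximum feature similarity.
    best = max(second_index, key=lambda fc: cosine(query_feat, fc[0]))
    best_center = best[1]            # second index: features -> center point
    return first_index[best_center]  # first index: center point -> block id
```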
In an embodiment, the features of the image at each view angle under each viewpoint, the first index relationship, and the second index relationship stored in the electronic device may be preset in the electronic device by an operator, or may be acquired by the electronic device itself.
In an embodiment, the electronic device may derive the features of the image at each view angle under each viewpoint, the first index relationship, and the second index relationship from the panoramic image at each viewpoint, and then store them.
The electronic device may apply a back-projection transformation to the panoramic image at each viewpoint to obtain images at multiple view angles under that viewpoint (high-overlap images), together with the coordinate position of each view-angle image's center point in the corresponding panoramic image, where the overlap rate between images at adjacent view angles under each viewpoint is greater than a preset overlap rate; the electronic device then extracts the features of the image at each view angle under each viewpoint.
The electronic device may perform the back-projection transformation directly on the panoramic image at each viewpoint.
Alternatively, in an embodiment, the electronic device may first obtain the panoramic image at each viewpoint from low-overlap images and then perform the back-projection transformation on it. In this embodiment, the electronic device uses a panoramic image stitching technique to build the panoramic image at each viewpoint from pre-captured images at multiple view angles under that viewpoint, where the overlap rate between pre-captured images at adjacent view angles is smaller than the preset overlap rate, i.e., they are low-overlap images.
Specifically, the electronic device may slide a window of a second preset size across the panoramic image at each viewpoint and, using the back-projection transformation, sequentially obtain the view-angle image corresponding to the part of the panoramic image inside the window and the coordinate position of that image's center point in the panoramic image; the image at each view angle under each viewpoint has the second preset size.
The electronic device may construct the second index relationship from the coordinate position of each view-angle image's center point in the corresponding panoramic image and the features of the image at each view angle under each viewpoint.
The electronic device may also cut the panoramic image at each viewpoint into the image blocks corresponding to that panoramic image, and construct the first index relationship from the coordinate position of each view-angle image's center point in the corresponding panoramic image and the image blocks corresponding to the panoramic image at each viewpoint. In a possible implementation, the image blocks corresponding to the panoramic image at each viewpoint have a first preset size. A sketch of building both relations is given below.
In the example above, the electronic device uses a panoramic image stitching technique to build the panoramic image at each viewpoint from the pre-captured images at multiple view angles under that viewpoint; that is, the pre-captured image at each view angle, lying in a first view plane, is projected onto the second view plane to which the panoramic image belongs. During this projection, the electronic device can also obtain the transformation relationship between the first view plane and the second view plane.
In a possible implementation, after obtaining the target identifier, the electronic device may use the back-projection transformation to project the image block corresponding to the target identifier onto the first view plane according to this transformation relationship, so as to obtain the second image.
In this way, the electronic device obtains a second image that lies in the same view plane as the image shot by the terminal. The user perceives no view-plane difference and does not notice the switch from the first image to the second image, which improves the user experience.
In one possible scenario, the electronic device is a cloud. The terminal captures the first image, but its definition is not high, so the terminal sends the first image, together with the terminal's position at the time of capture, to the cloud. The cloud thus obtains the first image and the position at which it was captured, and derives the second image using the method described in the implementations above. The cloud then sends the second image back to the terminal, which can display and store it, so the user sees a high-definition second image on the terminal, improving the user experience.
In a second aspect, the method may be executed by a terminal or a chip in the terminal; the following description takes the terminal as an example.
The terminal captures a first image using a first magnification, where the first magnification is greater than or equal to a preset magnification, and sends the first image to the electronic device. The terminal receives a second image from the electronic device and can display it in response to an image display instruction; the definition of the second image is higher than that of the first image.
In a possible implementation manner, a terminal stores image blocks corresponding to a panoramic image under each viewpoint, mapping relationships between features of images of each view angle under each viewpoint and identifiers of the image blocks corresponding to the panoramic image under each viewpoint, the images of each view angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint are obtained based on the panoramic image under each viewpoint, and the definition of the panoramic image under each viewpoint is greater than or equal to a preset definition.
In response to capturing a first image at the first magnification, the terminal may acquire the similarity between the features of the image at each view angle under each viewpoint and the features of the first image, and determine, from the features of the view-angle image with the maximum similarity and the mapping relationship, the target identifier to which those features map. The terminal obtains a second image from the image block corresponding to the target identifier, where the definition of the second image is higher than that of the first image, and can display the second image in response to an image display instruction.
In a possible implementation, acquiring the similarity between the features of the image at each view angle under each viewpoint and the features of the first image includes: determining target viewpoints within a preset range of the terminal's position, and acquiring the similarity between the features of the image at each view angle under those target viewpoints and the features of the first image.
In a possible implementation, the mapping relationship includes a first index relationship and a second index relationship, where the second index relationship maps the features of the image at each view angle under each viewpoint to the center point of that image, and the first index relationship maps the center point of the image at each view angle under each viewpoint to the identifier of the corresponding image block of the panoramic image.
Determining the target identifier from the features of the view-angle image with the maximum similarity and the mapping relationship includes: determining, from those features and the second index relationship, the center point they map to; and determining the target identifier from that center point and the first index relationship.
In one possible implementation, the method further includes: acquiring the characteristics of the image of each view angle under each viewpoint, the first index relationship and the second index relationship according to the panoramic image under each viewpoint; and storing the characteristics of the images of each view angle under each viewpoint, the first index relationship and the second index relationship.
In a possible implementation, obtaining the features of the image at each view angle under each viewpoint from the panoramic image at each viewpoint includes: performing the back-projection transformation on the panoramic image at each viewpoint to obtain images at multiple view angles under each viewpoint and the coordinate positions of their center points in the corresponding panoramic image, where the overlap rate between images at adjacent view angles under each viewpoint is greater than a preset overlap rate; and extracting the features of the image at each view angle under each viewpoint.
In a possible implementation, obtaining the images at multiple view angles and the center-point coordinate positions by back-projection transformation includes: sliding a window of a second preset size across the panoramic image at each viewpoint and, using the back-projection transformation, sequentially obtaining the view-angle image corresponding to the part of the panoramic image inside the window and the coordinate position of that image's center point in the corresponding panoramic image, where the image at each view angle under each viewpoint has the second preset size.
In a possible implementation, obtaining the second index relationship includes: constructing the second index relationship from the coordinate position of each view-angle image's center point in the corresponding panoramic image and the features of the image at each view angle under each viewpoint.
In one possible implementation, obtaining the first index relationship includes: cutting the panoramic image at each viewpoint into the image blocks corresponding to that panoramic image; and constructing the first index relationship from the coordinate position of each view-angle image's center point in the corresponding panoramic image and the image blocks corresponding to the panoramic image at each viewpoint.
In a possible implementation manner, image blocks corresponding to the panoramic image under each view point have a first preset size.
In a possible implementation, before obtaining the features of the image at each view angle under each viewpoint, the first index relationship, and the second index relationship from the panoramic image at each viewpoint, the method further includes: acquiring the panoramic image at each viewpoint from the pre-captured images at multiple view angles under that viewpoint using a panoramic image stitching technique, where the overlap rate between pre-captured images at adjacent view angles under each viewpoint is smaller than the preset overlap rate.
In a possible implementation, acquiring the panoramic image at each viewpoint from the pre-captured images using the panoramic image stitching technique includes: projecting the pre-captured image at each view angle under each viewpoint, lying in a first view plane, onto the second view plane to which the panoramic image belongs, so as to obtain the panoramic image at each viewpoint and the transformation relationship between the first view plane and the second view plane.
In a possible implementation, obtaining the second image from the image block corresponding to the target identifier includes: projecting the image block corresponding to the target identifier onto the first view plane by back-projection transformation according to the transformation relationship, so as to obtain the second image.
In a third aspect, an embodiment of the present application provides an image processing apparatus, which may be an electronic device or a chip in the electronic device. The image processing apparatus includes:
and the processing module is used for acquiring the similarity of the features of the image of each view angle under each view point and the features of the first image, determining a target identifier of feature mapping of the image of the view angle corresponding to the maximum similarity according to the features of the image of the view angle corresponding to the maximum similarity and the mapping relation, and acquiring a second image according to an image block corresponding to the target identifier, wherein the definition of the second image is higher than that of the first image.
In a possible implementation manner, the processing module is specifically configured to acquire the first image, capture a position of the first image, determine a target viewpoint within a preset range from the position, and acquire similarity between features of the image at each viewpoint under the target viewpoint and features of the first image.
In a possible implementation manner, the mapping relationship includes a first index relationship and a second index relationship, and the second index relationship is: the feature of the image of each view angle under each viewpoint and the mapping relationship of the central point of the image of each view angle under each viewpoint are as follows, and the first index relationship is as follows: and mapping relation between the central point of the image of each view angle under each view point and the identifier of the image block corresponding to the panoramic image under each view point.
And the processing module is specifically configured to determine, according to the feature of the image of the view angle corresponding to the maximum similarity and the second index relationship, a center point of the feature mapping of the image of the view angle corresponding to the maximum similarity, and determine the target identifier according to the center point of the feature mapping of the image of the view angle corresponding to the maximum similarity and the first index relationship.
In a possible implementation manner, the processing module is further configured to obtain, according to the panoramic image at each viewpoint, a feature of an image of each view angle at each viewpoint, the first index relationship, and the second index relationship.
A storage module, configured to store the features of the image at each view angle under each viewpoint, the first index relationship, and the second index relationship.
In a possible implementation manner, the processing module is specifically configured to perform back projection transformation on the panoramic image at each viewpoint to obtain images of multiple viewing angles at each viewpoint and coordinate positions of center points of the images of each viewing angle at the corresponding panoramic image at each viewpoint, where an overlap ratio between the images of adjacent viewing angles at each viewpoint is greater than a preset overlap ratio; and extracting the characteristics of the image of each view angle under each viewpoint.
In a possible implementation, the processing module is specifically configured to slide a window of a second preset size across the panoramic image at each viewpoint and, using the back-projection transformation, sequentially obtain the view-angle image corresponding to the part of the panoramic image inside the window and the coordinate position of that image's center point in the corresponding panoramic image, where the image at each view angle under each viewpoint has the second preset size.
In a possible implementation manner, the processing module is specifically configured to construct the second index relationship according to a coordinate position of a central point of the image of each view angle at each viewpoint in the corresponding panoramic image and a feature of the image of each view angle at each viewpoint.
In a possible implementation, the processing module is specifically configured to cut the panoramic image at each viewpoint into the image blocks corresponding to that panoramic image, and to construct the first index relationship from the coordinate position of each view-angle image's center point in the corresponding panoramic image and the image blocks corresponding to the panoramic image at each viewpoint.
In a possible implementation manner, image blocks corresponding to the panoramic image under each view point have a first preset size.
In a possible implementation manner, the processing module is further configured to acquire the panoramic image at each viewpoint according to the pre-acquired images of the multiple viewpoints at each viewpoint by using a panoramic image stitching technique, where an overlap ratio between the pre-acquired images at adjacent viewpoints at each viewpoint is smaller than the preset overlap ratio.
In a possible implementation manner, the processing module is specifically configured to project a pre-acquired image of each view angle at each viewpoint in a first view plane to a second view plane to which the panoramic image belongs, so as to obtain the panoramic image at each viewpoint and a transformation relationship between the first view plane and the second view plane.
In a possible implementation manner, the processing module is specifically configured to project the image block corresponding to the target identifier to the first view plane by using back projection transformation according to the transformation relation, so as to obtain the second image.
In a possible implementation manner, the transceiver module is configured to receive a first image from a terminal and a position of the terminal when the terminal captures the first image, and send the second image to the terminal.
In a fourth aspect, embodiments of the present application provide an image processing apparatus, which may be a terminal or a chip in the terminal. The image processing apparatus includes:
in a possible implementation manner, image blocks corresponding to the panoramic image at each viewpoint, and mapping relationships between the features of the image at each view angle at each viewpoint and the identifiers of the image blocks corresponding to the panoramic image at each viewpoint are stored in the terminal, and the image at each view angle at each viewpoint and the image blocks corresponding to the panoramic image at each viewpoint are obtained based on the panoramic image at each viewpoint.
A shooting module, configured to shoot a first image using a first magnification, where the first magnification is greater than or equal to a preset magnification.
A processing module, configured to acquire the similarity between the features of the image at each view angle under each viewpoint and the features of the first image, determine, from the features of the view-angle image with the maximum similarity and the mapping relationship, the target identifier to which those features map, and obtain a second image from the image block corresponding to the target identifier. The definition of the second image is higher than that of the first image.
A display module, configured to display the second image in response to an image display instruction.
In a possible implementation manner, the processing module is specifically configured to determine a target viewpoint within a preset range from the position of the terminal, and acquire similarity between features of an image of each view angle at the target viewpoint and features of the first image.
In a possible implementation, the mapping relationship includes a first index relationship and a second index relationship, where the second index relationship maps the features of the image at each view angle under each viewpoint to the center point of that image, and the first index relationship maps the center point of the image at each view angle under each viewpoint to the identifier of the corresponding image block of the panoramic image.
The processing module is specifically configured to determine, from the features of the view-angle image with the maximum similarity and the second index relationship, the center point those features map to, and to determine the target identifier from that center point and the first index relationship.
In a possible implementation manner, the processing module is further configured to obtain, according to the panoramic image at each viewpoint, a feature of an image of each view angle at each viewpoint, the first index relationship, and the second index relationship.
A storage module, configured to store the features of the image at each view angle under each viewpoint, the first index relationship, and the second index relationship.
In a possible implementation manner, the processing module is specifically configured to perform back projection transformation on the panoramic image at each viewpoint to obtain images of multiple viewing angles at each viewpoint and coordinate positions of center points of the images of each viewing angle at the corresponding panoramic image at each viewpoint, where an overlap ratio between the images of adjacent viewing angles at each viewpoint is greater than a preset overlap ratio; and extracting the characteristics of the image of each view angle under each view point.
In a possible implementation, the processing module is specifically configured to slide a window of a second preset size across the panoramic image at each viewpoint and, using the back-projection transformation, sequentially obtain the view-angle image corresponding to the part of the panoramic image inside the window and the coordinate position of that image's center point in the corresponding panoramic image, where the image at each view angle under each viewpoint has the second preset size.
In a possible implementation manner, the processing module is specifically configured to construct the second index relationship according to a coordinate position of a central point of the image of each view angle at each viewpoint in the corresponding panoramic image and a feature of the image of each view angle at each viewpoint.
In a possible implementation, the processing module is specifically configured to cut the panoramic image at each viewpoint into the image blocks corresponding to that panoramic image, and to construct the first index relationship from the coordinate position of each view-angle image's center point in the corresponding panoramic image and the image blocks corresponding to the panoramic image at each viewpoint.
In a possible implementation manner, image blocks corresponding to the panoramic image under each view point have a first preset size.
In a possible implementation manner, the processing module is further configured to acquire the panoramic image at each viewpoint according to the pre-acquired images of the multiple viewpoints at each viewpoint by using a panoramic image stitching technique, where an overlap ratio between the pre-acquired images at adjacent viewpoints at each viewpoint is smaller than the preset overlap ratio.
In a possible implementation manner, the processing module is specifically configured to project a pre-acquired image of each view angle at each viewpoint in a first view plane to a second view plane to which the panoramic image belongs, so as to obtain the panoramic image at each viewpoint and a transformation relationship between the first view plane and the second view plane.
In a possible implementation manner, the processing module is specifically configured to project the image block corresponding to the target identifier to the first view plane by using back projection transformation according to the transformation relation, so as to obtain the second image.
In a fifth aspect, an embodiment of this application provides an electronic device, which may be the cloud or the terminal described above. The electronic device may include a processor and a memory. The memory stores computer-executable program code comprising instructions; when the processor executes the instructions, the electronic device performs the method in the first aspect or the second aspect.
In a sixth aspect, an embodiment of this application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the methods in the first and second aspects.
In a seventh aspect, an embodiment of this application provides a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform the methods in the first and second aspects.
For the beneficial effects of the possible implementations of the second to seventh aspects, refer to the beneficial effects of the first aspect; details are not repeated here.
Drawings
Fig. 1 is a schematic view of a scenario applicable to the embodiment of the present application;
fig. 2 is a schematic diagram of high-definition images at different shooting angles at a shooting position stored in a cloud in the prior art;
FIG. 3 is a diagram illustrating an image processing method according to the prior art;
fig. 4 is a schematic diagram of a cloud storage image block and an index relationship provided in an embodiment of the present application;
fig. 5 is another schematic diagram of a cloud storage image block and an index relationship provided in an embodiment of the present application;
fig. 6 is a schematic diagram of the cloud acquiring images at different view angles under the same viewpoint according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating an embodiment of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a variation of the photographing interface according to the embodiment of the present application;
FIG. 9 is a flowchart illustrating another embodiment of an image processing method according to an embodiment of the present application;
fig. 10 is a schematic diagram of a panoramic image stored in a cloud according to an embodiment of the present application;
fig. 11 is a schematic flowchart of another embodiment of an image processing method according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of this application involve the following terms:
single viewpoint: the viewpoint can understand the position of the photographing device (such as a mobile phone) when photographing, and a single viewpoint is a position.
Panoramic image stitching technology: stitching multiple images into one large-scale image; in the embodiments of this application, multiple high-definition images are stitched into one panoramic image. Briefly, panoramic image stitching may include, but is not limited to, four steps: detecting and extracting features and keypoints from the images, matching the keypoints of two images, estimating a homography matrix using random sample consensus (RANSAC), and stitching the images.
In one embodiment, the panoramic image stitching technique may be implemented as follows: scale-invariant feature transform (SIFT) local descriptors are used to detect keypoints and features (feature descriptors, or SIFT features) in the images, and feature descriptors are matched between two images, i.e., the keypoints of the two images are matched using the features. Next, the RANSAC algorithm estimates a homography from the matched keypoints on the two images, i.e., one image is registered to the other.
After the homography matrix is estimated, a perspective transformation may be applied: given the homography matrix, the image to be warped, and the shape of the output image (for example, the sum of the widths of the two images and the height of the images), the warped output is produced; for details, refer to existing descriptions of perspective transformation. Perspective transformation can be understood as projecting an image onto a new view plane, and is also called projective mapping or projective transformation.
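The following is a minimal two-image sketch of these steps using OpenCV (an assumption; the application does not name a library): SIFT keypoints, descriptor matching, RANSAC homography estimation, and a perspective warp whose output canvas uses the sum of the two widths and the images' height.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    """Stitch two overlapping images; returns the panorama and the homography
    H that maps the right image onto the left image's view plane."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)

    # Match SIFT descriptors between the two images (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des_r, des_l, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Estimate the homography with RANSAC.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's plane; paste the left image.
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w + img_right.shape[1], h))
    pano[0:h, 0:w] = img_left
    return pano, H
```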
Projection transformation: see the description of perspective transformation.
Back-projection transformation: a projection transformation projects an image onto a new view plane; a back-projection transformation projects an image on the new view plane back onto the image's original view plane. It should be understood that during the projection transformation a transformation relationship (such as a transformation matrix) between the original view plane and the new view plane can be obtained, and the back-projection transformation uses this transformation relationship to project the image on the new view plane back to the original view plane.
In an embodiment, the back-projection transformation may also be called back-projection mapping or inverse perspective transformation; refer to existing descriptions of back-projection transformation for details.
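Under the assumption that the forward projection was a homography H (as in the stitching sketch above), the back-projection can reuse H with OpenCV's inverse-map flag:

```python
import cv2

def back_project(image_on_new_plane, H, original_plane_size):
    """Project an image from the new view plane back to the original one.

    H is the transformation relationship obtained during the forward
    projection; WARP_INVERSE_MAP applies its inverse. original_plane_size
    is the (width, height) of the output image (an assumed parameter)."""
    return cv2.warpPerspective(image_on_new_plane, H, original_plane_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```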
High-definition image: an image whose definition is greater than a preset definition, for example an image shot with a single-lens reflex camera. In an embodiment, if images have the same resolution, a higher code rate means higher definition; in that scenario a preset code rate may represent the preset definition. The embodiments of this application do not limit the parameter used to characterize definition.
Photographing magnification: the zoom magnification.
High magnification: a zoom magnification, used when photographing, that is greater than the preset magnification. The preset magnification depends on the photographing capability of the terminal, and different terminals may have the same or different preset magnifications; in an embodiment, the preset magnification may be 5.
Fig. 1 is a schematic view of a scene to which the embodiments of this application are applicable. Fig. 1 compares a mobile phone (as the terminal) with a single-lens reflex camera, both shooting a computer screen. Referring to a in fig. 1, when a user shoots with the single-lens reflex camera at a magnification of 30, a high-definition image is obtained: the characters "one, two, three, four" on the computer screen are clearly visible. When the user shoots with the mobile phone at a magnification of 30 (i.e., 30x in b in fig. 1), the resulting image has low definition; the characters on the screen cannot be made out and only a few shadowy squares are visible, as shown in b in fig. 1. For ease of comparison, the images shot at a magnification of 30 by the single-lens reflex camera and the mobile phone are shown on the right of a and b in fig. 1, respectively.
It should be understood that, in the embodiments of this application, "the user shoots at high magnification" means that the shooting magnification is greater than the preset magnification; see the term explanations above for the preset magnification.
To improve the definition of images that a terminal shoots at high magnification, the prior art stores a large number of high-definition images in the cloud in advance, including high-definition images shot at different shooting positions and at different shooting angles from the same position. For example, fig. 2 is a schematic diagram of high-definition images shot at different angles from a shooting position A and stored in the cloud in the prior art; the shot object is represented by the black rectangle, and six high-definition images are taken as an example. In one embodiment, the picture overlap rate of the high-definition images stored in the cloud in the prior art is greater than or equal to a first overlap rate, such as 80%. In one embodiment, a shooting position may be called a viewpoint and a shooting angle may be called a view angle; in other words, in the prior art the cloud stores high-definition images shot from different viewpoints and from different view angles at the same viewpoint.
Referring to fig. 3, in the prior art, when a terminal shoots at high magnification, it may send the shot image to the cloud; the cloud computes the similarity between each stored high-definition image and the image from the terminal, and feeds back the high-definition image with the maximum similarity. After receiving it, the terminal can display the high-definition image, so the user sees a high-definition result from high-magnification shooting. For example, in fig. 3, the terminal sends the low-definition image shown in b in fig. 1 to the cloud, and the cloud feeds back a high-definition image, such as one in which the characters "one, two, three, four" are visible. With this prior-art method the terminal can present high-definition images when shooting at high magnification, but the cloud must store a large number of high-definition images, which occupies a large amount of storage space and makes the cloud's storage overhead high.
In one embodiment, the number of high-definition images stored in the cloud could be reduced, for example by storing only images whose overlap rate is less than a second overlap rate, such as 20%. The cloud then stores fewer high-definition images and fewer shooting angles per shooting position. With the similarity-comparison method above, the shooting angle of the high-definition image fed back to the terminal may then differ from the angle at which the terminal actually shot, so the result looks to the user like an image shot from a different angle; the feedback accuracy of the high-definition image is low, and the user experience suffers.
To address this, on the one hand, the embodiments of this application store in the cloud the panoramic images at different single viewpoints (or the image blocks cut from them); for a single viewpoint, the stored high-definition content shrinks from multiple high-definition images to one panoramic image (or the image blocks of one panoramic image), which reduces the cloud's storage overhead. On the other hand, to guarantee that an accurate high-definition image is fed back to the terminal, information about different shooting angles at the same viewpoint is still needed; the embodiments of this application therefore store the features of the high-definition images at different viewpoints and at different shooting angles under the same viewpoint, reducing the cloud's storage overhead while preserving feedback accuracy.
In an embodiment, the terminal in the embodiments of this application may also be called user equipment; it has a photographing function and supports high-magnification photographing, but when it shoots at high magnification (i.e., the shooting magnification is greater than the preset magnification), the definition of the shot image is low. For example, the terminal may be a mobile phone, a tablet (PAD), a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device, a wearable device, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a terminal in a smart home, or the like; the form of the terminal is not specifically limited in the embodiments of this application.
In one embodiment, the cloud may be a server, or a cluster of servers. For example, the server may be a server corresponding to the photographing application program or a server corresponding to the application program with the photographing function, and the form of the cloud is not specifically limited in the embodiment of the present application.
Before introducing the image processing method provided by the embodiment of the present application, first, contents stored in a cloud are explained:
In an embodiment, the cloud stores multiple image blocks corresponding to the panoramic images at different viewpoints, where the image blocks of the panoramic image at the same viewpoint either do not overlap or overlap at a rate smaller than a third overlap rate. Illustratively, the third overlap rate may be a small value such as 20% or 10%. It should be understood that because the panoramic image is a high-definition image, its image blocks are also high-definition.
In one embodiment, panoramic images from different viewpoints are stored in the cloud.
In the embodiments of this application, for a given viewpoint the cloud does not store high-definition images at multiple view angles; it stores the panoramic image at that viewpoint, or the image blocks of that panoramic image, which reduces the cloud's storage overhead. The following embodiments describe the image processing method using one viewpoint as an example.
In an embodiment, the panoramic images at different viewpoints may be shot with a panoramic camera, or stitched from low-overlap high-definition images taken at those viewpoints, where the overlap rate between the low-overlap high-definition images is less than the second overlap rate. The following description uses stitching from low-overlap high-definition images as the example.
Referring to fig. 4, the process of cloud-storing content may include the following steps:
s401, the cloud end adopts a panoramic image splicing technology to splice low-overlap high-definition images at the same viewpoint to obtain panoramic images at different viewpoints.
In one embodiment, the low-overlap high-definition images from the same viewpoint may be captured by a single lens reflex or other photographing device capable of taking high-definition images. For example, a single lens reflex camera may be used to capture high-definition images of different viewing angles from the same viewpoint in advance, so as to obtain high-definition images of different viewing angles from different viewpoints. In one embodiment, the low-overlap high-definition images at each viewpoint may be referred to as pre-captured images for multiple perspectives at each viewpoint.
The overlap rate between high-definition images at adjacent view angles captured by the single-lens reflex camera is less than the second overlap rate; alternatively, images may be selected from the captured high-definition images at the same viewpoint so that the overlap rate between images at adjacent view angles is less than the second overlap rate, yielding the low-overlap high-definition images for that viewpoint. Keeping the overlap rate below the second overlap rate reduces the computation needed for panoramic image stitching in the cloud and improves stitching efficiency. In an embodiment, it can equivalently be said that the overlap rate between high-definition images at adjacent view angles is smaller than a preset overlap rate, where the preset overlap rate is greater than or equal to the second overlap rate and smaller than the first overlap rate.
For the low-overlap high-definition images at the same viewpoint, the cloud can apply the panoramic image stitching technique to obtain the panoramic image at that viewpoint, and in the same way obtain the panoramic images at the different viewpoints. For the stitching technique itself, refer to the related description in the term definitions.
S401 corresponds to S1 in fig. 5; fig. 5 is a simplified flowchart of fig. 4.
Fig. 6 is a schematic diagram of the cloud acquiring images at different view angles under the same viewpoint according to an embodiment of this application. Referring to a in fig. 6, taking two low-overlap high-definition images at a viewpoint as an example, the cloud executes S401 to obtain the panoramic image at that viewpoint, as shown in b in fig. 6.
S402, the cloud cuts the panoramic image under each viewpoint to obtain an image block corresponding to the panoramic image under each viewpoint.
Taking one viewpoint as an example, the cloud may cut the panoramic image at that viewpoint into image blocks of a preset size, obtaining multiple image blocks for that viewpoint. In an embodiment, all image blocks have the same size, such as 800 px × 900 px; that is, each image block has a first preset size, which can be understood as a first preset width and a first preset height, where 1 px is one pixel. In another embodiment, the image blocks may differ in size.
In one embodiment, two adjacent image blocks corresponding to the same view point have no overlap, i.e. do not contain the same area. In one embodiment, the overlapping ratio between two adjacent image blocks corresponding to the same view point may be smaller than the third overlapping ratio.
In an embodiment, after the cloud cuts the panoramic image into image blocks, each image block may be numbered. For example, the cut image blocks may be numbered according to the rows and columns in the panoramic image, for example, an image block is located in the first row and the first column in the panoramic image, and the image block may be numbered as row 1 and column 1. For example, the image blocks may be numbered in an order from 1 to N, where the image block in the first row and the first column is numbered as 1, the image block in the first row and the second column is numbered as 2, and N is an integer greater than 1. The method for numbering image blocks in the embodiments of the present application is not limited, and the following embodiments illustrate the example of numbering image blocks in rows and columns. In one embodiment, the row, column number, or number of "1-N" of an image block may be referred to as the identification of the image block.
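A minimal sketch of S402's cutting and row/column numbering, assuming the 800 px × 900 px first preset size mentioned above (blocks at the right and bottom edges may come out smaller; how to handle them is left open here):

```python
def cut_panorama(pano, block_w=800, block_h=900):
    """Cut a panorama (a NumPy image array, H x W x C) into blocks of the
    first preset size, keyed by their (row, column) identifier."""
    blocks = {}
    height, width = pano.shape[:2]
    for row, y in enumerate(range(0, height, block_h), start=1):
        for col, x in enumerate(range(0, width, block_w), start=1):
            blocks[(row, col)] = pano[y:y + block_h, x:x + block_w]
    return blocks
```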
In the embodiment of the present application, the panoramic image under each viewpoint is cut into image blocks so that the cloud can load individual image blocks instead of the whole panoramic image. Because loading an image block takes less time than loading the whole panoramic image, this increases the loading speed of the cloud and therefore the speed at which the cloud feeds a high-definition image back to the terminal; see the description of fig. 7 for details.
Here, S402 corresponds to S2 in fig. 5.
Referring to fig. 6, the cloud performs S402 and may cut the panoramic image into 8 image blocks, as shown in c in fig. 6.
S403, the cloud uses back-projection transformation to obtain images at multiple view angles corresponding to the panoramic image under each viewpoint.
In S401, while obtaining the panoramic images at different viewpoints with the panoramic image stitching technique, the cloud can also obtain, for each viewpoint, the transformation relationship from the first view plane, in which the low-overlap high-definition images lie, to the second view plane, in which the panoramic image lies. Using this transformation relationship, the cloud can then project each part of the panoramic image at each viewpoint back to the first view plane to obtain images at different view angles.
In an embodiment, the cloud may slide a window of a second preset size over the panoramic image from left to right and top to bottom, sequentially projecting the part of the panoramic image inside the sliding window onto the first view plane to obtain images at multiple view angles. The image at each view angle is a high-definition image, and all of these images have the same size, namely the second preset size; for example, each has a second preset width and a second preset height. In an embodiment, the overlap rate between images at adjacent view angles corresponding to the panoramic image at the same viewpoint is greater than the first overlap rate; that is, each time the window slides, the cloud keeps the overlap rate between the window and its previous position greater than the first overlap rate, thereby obtaining the images at adjacent view angles corresponding to the panoramic image at each viewpoint. A sliding-window sketch follows.
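A minimal sketch of the sliding-window traversal; the back-projection itself depends on the stitching transform and is omitted, and the overlap rate value is an assumption:

```python
def sliding_windows(pano_w: int, pano_h: int, win_w: int, win_h: int,
                    first_overlap_rate: float = 0.5):
    """Yield top-left corners of a window of the second preset size,
    sliding left-to-right, top-to-bottom, with a step small enough that
    adjacent windows overlap by more than `first_overlap_rate`."""
    step_x = max(1, int(win_w * (1.0 - first_overlap_rate)) - 1)
    step_y = max(1, int(win_h * (1.0 - first_overlap_rate)) - 1)
    for y in range(0, pano_h - win_h + 1, step_y):
        for x in range(0, pano_w - win_w + 1, step_x):
            yield x, y
```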
In S403, when the part of the panoramic image inside the sliding window of the second preset size is projected onto the first view plane during the back-projection transformation, a one-to-one mapping between each pixel of that part of the panoramic image and a pixel of the image at the corresponding view angle can be obtained; in this process the cloud can therefore obtain the coordinate position, in the panoramic image, of the center point of the image at each view angle. In one embodiment, the coordinate position of the center point in the panoramic image may be a longitude/latitude coordinate.
It should be understood that the center point of the image refers to the physical center of the image.
It should be understood that there is no chronological order between S402 and S403; both may be performed simultaneously.
Here, S403 corresponds to S3 in fig. 5.
Referring to fig. 6, the cloud performs S403 and may apply the back-projection transformation to the panoramic image to obtain images at 4 view angles corresponding to the viewpoint, as shown in d in fig. 6.
S404, the cloud establishes a first index relationship between the center point of the image at each view angle and the image blocks, according to the coordinate position of that center point in the panoramic image.
The first index relationship can be understood as recording which image blocks of the panoramic image the image at each view angle corresponds to; that is, it establishes a mapping between the image at each view angle and the identifiers of the image blocks.
In an embodiment, because the image at each view angle has the second preset size, all image blocks corresponding to that image can be obtained once the coordinate position of its center point in the panoramic image is known. For example, the cloud may determine the coordinate positions of the four vertices of the image at each view angle in the panoramic image from the second preset size and the coordinate position of the center point. Then, for the image at a given view angle, the cloud can determine, from the position coordinates of its four vertices and its center point, the image blocks of the panoramic image corresponding to that image, as in the sketch below.
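A sketch of this computation, using pixel coordinates for simplicity (the patent also allows longitude/latitude coordinates) and assuming the uniform tiling of the earlier sketch:

```python
def covered_tiles(cx: float, cy: float, view_w: int, view_h: int,
                  tile_w: int = 800, tile_h: int = 900) -> list:
    """Return the 1-based (row, column) identifiers of every image block
    overlapped by a view-angle image of the second preset size whose
    center point lies at (cx, cy) in the panorama."""
    left, top = cx - view_w / 2, cy - view_h / 2
    right, bottom = cx + view_w / 2, cy + view_h / 2
    first_col, last_col = int(left // tile_w), int((right - 1) // tile_w)
    first_row, last_row = int(top // tile_h), int((bottom - 1) // tile_h)
    return [(row + 1, col + 1)
            for row in range(max(first_row, 0), last_row + 1)
            for col in range(max(first_col, 0), last_col + 1)]
```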
In one embodiment, the cloud may store the first index relationship. In the first index relationship, the image at each view angle may be characterized by the coordinate position of its center point in the panoramic image, and the image blocks may be characterized by their numbers; that is, the first index relationship may include the mapping between the coordinate position of the center point of the image at each view angle in the panoramic image and the numbers of the image blocks.
Illustratively, referring to fig. 6, the panoramic image corresponds to images at 4 view angles, whose center points have the coordinate positions (longitude 1, latitude 1), (longitude 2, latitude 2), (longitude 3, latitude 3), and (longitude 4, latitude 4) in the panoramic image. The image blocks corresponding to the (longitude 1, latitude 1) image are (row 1, column 1) and (row 1, column 2); those corresponding to the (longitude 2, latitude 2) image are (row 1, column 3) and (row 1, column 4); those corresponding to the (longitude 3, latitude 3) image are (row 2, column 1) and (row 2, column 2); and those corresponding to the (longitude 4, latitude 4) image are (row 2, column 3) and (row 2, column 4). Accordingly, the first index relationship stored in the cloud may be as shown in Table 1:
Table 1

Coordinate position of the center point in the panoramic image | Numbers of the corresponding image blocks
(longitude 1, latitude 1) | (row 1, column 1), (row 1, column 2)
(longitude 2, latitude 2) | (row 1, column 3), (row 1, column 4)
(longitude 3, latitude 3) | (row 2, column 1), (row 2, column 2)
(longitude 4, latitude 4) | (row 2, column 3), (row 2, column 4)
Here, S404 corresponds to S4 in fig. 5.
S405, the cloud obtains the features of the image at each view angle under each viewpoint, to establish a second index relationship between the features of the image at each view angle and the coordinate position of its center point in the panoramic image.
For the image at each view angle under each viewpoint, the cloud can obtain its features. In one embodiment, the features of the image at each view angle take the form of a feature vector, for example a 2048-dimensional feature vector; that is, the cloud may obtain a feature vector for the image at each view angle under each viewpoint.
In one embodiment, the cloud may use a neural network model to extract the features of the image at each view angle. Illustratively, the neural network model may include, but is not limited to: convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks.
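For example, a 2048-dimensional feature vector can be obtained from a ResNet-50 trunk with its classification head removed; the backbone choice and the torchvision usage below are illustrative assumptions, not the patented method.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Replace the classifier with the identity so the model emits the
# 2048-dimensional pooled feature instead of class scores.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(pil_image) -> torch.Tensor:
    """Return a 2048-d feature vector for one view-angle image (PIL)."""
    with torch.no_grad():
        return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0)
```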
In the embodiment of the application, the cloud can obtain the features of the image at each view angle and then establish, from those features, the second index relationship between the features of the image at each view angle and the coordinate position of its center point in the panoramic image. In one embodiment, the cloud may store the second index relationship, in which the image at each view angle is characterized by the coordinate position of its center point in the panoramic image and by its feature vector; that is, the second index relationship may include the mapping between the coordinate position of the center point of each view-angle image in the panoramic image and the features of that image.
In an embodiment, the cloud may further derive a third index relationship from the first and second index relationships, the third index relationship being the mapping between the features of the image at each view angle and the numbers of the image blocks. That is, the cloud may merge the first and second index relationships on the coordinate position of the center point of each view-angle image in the panoramic image, mapping to each other the features and the image-block numbers that share a center point with the same coordinates, to obtain the third index relationship; a sketch of such a merge is given below.
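A toy sketch of the merge, with the index structures represented as dictionaries keyed on the center-point coordinate; the data shapes and values are assumptions:

```python
# first_index:  center coordinate -> list of image-block identifiers
# second_index: center coordinate -> feature vector of the view-angle image
first_index = {("lon1", "lat1"): [(1, 1), (1, 2)]}
second_index = {("lon1", "lat1"): (0.12, 0.87, 0.05)}  # toy 3-d feature

# third_index: feature (as a hashable tuple) -> image-block identifiers
third_index = {
    feature: first_index[center]
    for center, feature in second_index.items()
    if center in first_index
}
```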
It should be understood that there is no chronological order between S404 and S405; both may be performed simultaneously.
Here, S405 corresponds to S5 in fig. 5.
In summary, in an embodiment, the cloud may store the image blocks under each viewpoint together with the first index relationship and the second index relationship; alternatively, it may store the image blocks under each viewpoint together with the third index relationship. Either way, compared with the prior-art approach of storing high-definition images at different view angles under all viewpoints in the cloud, the storage overhead is reduced.
Based on the above introduction to the content stored in the cloud, the image processing method provided by the embodiment of the present application is described below with reference to specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in every embodiment. Fig. 7 is a flowchart of an embodiment of an image processing method according to an embodiment of the present application. It should be understood that fig. 7 uses the interaction between the terminal and the cloud as an example.
Referring to fig. 7, an image processing method provided in an embodiment of the present application may include:
S701, the terminal responds to a photographing instruction and photographs to obtain a first image.
In an embodiment, the photographing instruction may be triggered by the user operating the photographing interface displayed by the terminal; for example, the photographing interface includes a photographing control, and operating this control triggers the photographing instruction to be input to the terminal. In one embodiment, the photographing instruction may be triggered by the user's voice, for example by saying "take a picture". In an embodiment, the user can also trigger the photographing instruction in a user-defined way or by operating other shortcut keys; the embodiment of the present application does not limit how the photographing instruction is triggered.
The terminal responds to the photographing instruction and can photograph to obtain a first image.
Fig. 8 is a schematic diagram of changes in the photographing interface according to the embodiment of the present application. Shown in a of fig. 8 is a photographing interface, which includes a preview box 81, a photographing control 82, and a magnification adjustment bar 83. The user adjusts the magnification adjustment bar 83 to change the photographing magnification of the terminal; for example, the user adjusts the photographing magnification to 30. When the user clicks the photographing control 82, the terminal responds to the photographing instruction and captures the first image at a magnification of 30.
S702, the terminal sends the first image to the cloud.
S703, the cloud acquires the characteristics of the first image.
The manner in which the cloud obtains the features of the first image may follow the description in S405 of how the cloud obtains the features of the image at each view angle under each viewpoint.
S704, the cloud acquires the similarity between the features of the image of each view angle under each view point and the features of the first image.
In an embodiment, the cloud may compute the cosine angle or the Euclidean distance between the features of the image at each view angle under each viewpoint and the features of the first image, thereby obtaining the similarity between them. The smaller the cosine angle or the Euclidean distance, the greater the similarity. Both metrics are sketched below.
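Both metrics admit a direct NumPy sketch; the vectors are assumed to come from the feature extractor described in S405:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Larger value means a smaller cosine angle, i.e., more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Smaller value means more similar."""
    return float(np.linalg.norm(a - b))
```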
In an embodiment, to reduce the amount of similarity computation in the cloud, the terminal may upload its own position when sending the first image to the cloud; that is, S702 may be replaced with: the terminal sends the first image and the position of the terminal to the cloud. Accordingly, S704 may be replaced with: the cloud obtains the similarity between the features of the image at each view angle under each viewpoint within a preset range of the position of the terminal and the features of the first image. In one embodiment, a viewpoint within the preset range of the position of the terminal may be referred to as a target viewpoint.
In this embodiment, because the cloud stores the features of the images at different view angles under different viewpoints, the cloud may first determine, based on the position of the terminal, the viewpoints within the preset distance range of that position (i.e., the target viewpoints), and then obtain the similarity between the features of the image at each view angle under each target viewpoint and the features of the first image. This avoids computing feature similarities for all viewpoints and improves the computational efficiency of the cloud; a pre-filtering sketch follows.
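A sketch of the pre-filtering, assuming viewpoints are keyed by (latitude, longitude) pairs and using the haversine distance; the 500 m preset range is an invented value:

```python
import math

def target_viewpoints(viewpoints: dict, terminal_pos: tuple,
                      preset_range_m: float = 500.0) -> list:
    """Return ids of viewpoints within `preset_range_m` of the terminal.
    `viewpoints` maps an id to a (lat, lon) pair in degrees."""
    lat0, lon0 = map(math.radians, terminal_pos)
    selected = []
    for vid, (lat, lon) in viewpoints.items():
        lat1, lon1 = math.radians(lat), math.radians(lon)
        a = (math.sin((lat1 - lat0) / 2) ** 2
             + math.cos(lat0) * math.cos(lat1) * math.sin((lon1 - lon0) / 2) ** 2)
        if 6371000.0 * 2.0 * math.asin(math.sqrt(a)) <= preset_range_m:
            selected.append(vid)
    return selected
```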
In one embodiment, the terminal may perform S702-S708 regardless of the photographing magnification it uses. In one embodiment, the terminal may send the first magnification together with the first image and the position of the terminal to the cloud. Accordingly, the cloud performs S703-S708 in response to the first magnification being greater than or equal to a preset magnification; in response to the first magnification being less than the preset magnification, the terminal can itself capture a high-definition first image, so the cloud may skip S703-S708 to save its computing resources.
In an embodiment, because the terminal can obtain a high-definition image at low magnification without interacting with the cloud, the terminal may perform S702 only in scenarios where it photographs at high magnification. In this embodiment, S701 may be replaced with: in response to the photographing instruction, the terminal photographs at a first magnification to obtain the first image, the first magnification being greater than the preset magnification. Accordingly, S702 may be replaced with: in response to the first magnification being greater than the preset magnification, the terminal sends the first image and the position of the terminal to the cloud.
S705, the cloud determines, according to the features corresponding to the maximum similarity and the second index relationship, the position coordinates in the panoramic image of the center point mapped by those features.
The second index relationship is the mapping between the coordinate position of the center point of the image at each view angle in the panoramic image and the features of that image. After obtaining the similarity between the features of the image at each view angle under each viewpoint and the features of the first image (or only for the view angles under viewpoints within the preset distance range of the position of the terminal), the cloud may determine the maximum similarity, and thereby the features corresponding to it.
The cloud can then obtain, from the stored second index relationship and the features corresponding to the maximum similarity, the position coordinates in the panoramic image of the center point mapped by those features.
S706, the cloud determines the identifier of the image block mapped by the center point, according to the position coordinates in the panoramic image of the center point mapped by the features corresponding to the maximum similarity, and the first index relationship.
The first index relationship is the mapping between the coordinate position of the center point of the image at each view angle in the panoramic image and the image blocks. Therefore, after the cloud obtains the position coordinates in the panoramic image of the center point corresponding to the features with the maximum similarity, it can obtain, from those position coordinates and the first index relationship, the identifiers of the image blocks mapped by the center point, i.e., the identifiers of the image blocks mapped by the features with the maximum similarity. In one embodiment, the identifier of an image block mapped by the features corresponding to the maximum similarity may be referred to as a target identifier. The whole lookup is sketched below.
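Putting S704-S706 together, a minimal sketch of the lookup chain over the toy index structures from the earlier sketch might be:

```python
import numpy as np

def retrieve_target_identifiers(query_feature, second_index: dict,
                                first_index: dict) -> list:
    """Find the stored view-angle feature most similar to the query
    (cosine similarity), map it to its center point via the second index
    relationship, then map the center point to the image-block
    identifiers via the first index relationship."""
    q = np.asarray(query_feature, dtype=float)
    best_center, best_sim = None, -2.0
    for center, feature in second_index.items():
        f = np.asarray(feature, dtype=float)
        sim = float(q @ f / (np.linalg.norm(q) * np.linalg.norm(f)))
        if sim > best_sim:
            best_center, best_sim = center, sim
    return first_index[best_center]  # the target identifiers
```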
For example, if the position coordinates in the panoramic image of the center point mapped by the features with the maximum similarity are (longitude 1, latitude 1), then according to Table 1 (the first index relationship), the numbers of the image blocks mapped by (longitude 1, latitude 1) are (row 1, column 1) and (row 1, column 2).
In one embodiment, if the cloud stores the third index relationship, i.e., the mapping between the features of the image and the numbers of the image blocks, then after obtaining the features corresponding to the maximum similarity, the cloud may obtain the identifiers of the image blocks mapped by those features directly from the third index relationship. Accordingly, in this embodiment, S705 and S706 may be replaced with: the cloud determines the identifiers of the image blocks mapped by the features corresponding to the maximum similarity, according to those features and the third index relationship.
S707, the cloud obtains a second image from the image blocks mapped by the center point, using back-projection transformation.
The image blocks mapped by the center point are the image blocks corresponding to the target identifiers. The cloud stores the image blocks corresponding to the panoramic image under each viewpoint; after determining the target identifiers of the image blocks mapped by the center point, it can stitch the image blocks corresponding to the target identifiers and then apply the back-projection transformation to obtain the second image. The definition of the second image is higher than that of the first image; in one embodiment, the definition of the second image is greater than a preset definition.
Specifically, the cloud can project the stitched image blocks mapped by the center point onto the first view plane according to the transformation relationship between the first view plane and the second view plane, obtaining the second image, i.e., an image in the view plane photographed by the terminal.
In an embodiment, the cloud may stitch the image blocks mapped by the center point according to their numbers. For example, if the numbers of the image blocks mapped by (longitude 1, latitude 1) are (row 1, column 1) and (row 1, column 2), the cloud may stitch the image blocks numbered (row 1, column 1) and (row 1, column 2) in row/column order to obtain the second image; see the sketch after the next paragraph.
In an embodiment, when the image blocks have overlapping areas, the cloud may align the overlapping area in the (row 1, column 1) block with the overlapping area in the (row 1, column 2) block to stitch the two blocks. The cloud may determine the overlapping area according to the similarity of the pixels in the two blocks, for example treating an area with 100% similarity as the overlapping area.
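A sketch of stitching non-overlapping image blocks back together in row/column order, assuming the `tiles` dictionary from the S402 sketch; overlap-aware alignment, as in the paragraph above, is omitted:

```python
import numpy as np

def stitch_tiles(tiles: dict) -> np.ndarray:
    """Assemble {(row, col): block} into a single image. Assumes the
    blocks in one row share a height and those in one column share a
    width, which holds for the uniform tiling of S402."""
    rows = sorted({r for r, _ in tiles})
    cols = sorted({c for _, c in tiles})
    bands = [np.hstack([tiles[(r, c)] for c in cols]) for r in rows]
    return np.vstack(bands)
```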
And S708, the cloud sends the second image to the terminal.
Correspondingly, the terminal receives a second image from the cloud.
S709, the terminal displays the second image in response to the image display instruction.
After the terminal receives the second image from the cloud, it can display the second image based on the user's operation, or display it directly upon receipt.
In one embodiment, the image display instruction may be triggered by the user operating the photographing interface; for example, if the photographing interface includes an image display control, operating that control triggers the image display instruction to be input to the terminal. In an embodiment, the image display instruction may also be triggered by the user's voice; the embodiment of the present application does not limit how the terminal receives the image display instruction.
For example, referring to a in fig. 8, after the user clicks the photographing control 82, the terminal and the cloud interact to perform S701-S708, and after receiving the second image from the cloud, the terminal may store it in a local image database (e.g., an album). As shown in b of fig. 8, the photographing interface includes an image display control 84; when the user clicks it, the terminal can display the high-definition second image, as shown in c of fig. 8. Unlike b in fig. 1, when the terminal photographs at the first (high) magnification, the captured image has high definition; for example, the user can clearly see the characters on the computer screen in the second image.
In one embodiment, steps S701-S709 shown in fig. 7 may be simplified as shown in fig. 9.
In the embodiment of the application, when the terminal photographs at high magnification, it can send the captured first image to the cloud. The cloud determines the image corresponding to the maximum similarity from the features of the first image and their similarities with the stored images at multiple view angles under the viewpoints, obtains the image blocks corresponding to the first image based on the first and second index relationships, and applies the back-projection transformation to the stitched image blocks to obtain the high-definition second image. The terminal can therefore display a high-definition second image, achieving the goal of obtaining a high-definition image even when photographing at high magnification. On the other hand, the cloud stores the image blocks under each viewpoint together with the first and second index relationships, or together with the third index relationship; compared with the prior-art approach of storing high-definition images at different view angles under all viewpoints, this reduces the storage overhead of the cloud.
In the embodiment shown in fig. 7, the cloud stores the image blocks corresponding to the panoramic image under each viewpoint together with the first index relationship and the second index relationship, or stores the image blocks together with the third index relationship. In one embodiment, the cloud may instead store the panoramic images under the viewpoints themselves, for example as shown in fig. 10; it should be understood that in fig. 10, the panoramic images under different viewpoints are distinguished by the different shapes (such as black rectangles and black triangles) they contain.
In this embodiment, following the descriptions in S701-S706, the cloud can determine the image blocks corresponding to the first image (i.e., the numbers of the image blocks mapped by the features with the maximum similarity). Since what the cloud stores is the panoramic image under each viewpoint, the cloud first loads the panoramic image under the viewpoint, then cuts out of it the image blocks with the numbers corresponding to the first image, and applies the back-projection transformation to obtain the second image.
For example, if the numbers of the image blocks corresponding to the first image are (row 1, column 1) and (row 1, column 2), the cloud may cut the image blocks numbered (row 1, column 1) and (row 1, column 2) out of the panoramic image, according to the numbers and the first preset size of the image blocks, and apply the back-projection transformation to obtain the second image.
Compared with this embodiment, in the embodiment shown in fig. 7, because the cloud stores the image blocks corresponding to the panoramic image, once it determines the numbers of the image blocks corresponding to the first image it can directly load those blocks for stitching, back-projection transformation, and so on. Loading image blocks is much faster than loading the whole panoramic image, so the loading efficiency in the embodiment shown in fig. 7 is higher, and the second image can be fed back to the terminal more quickly.
In the image processing method provided by this embodiment of the application, the cloud may store the panoramic image under each viewpoint; after obtaining the numbers of the image blocks corresponding to the first image, the cloud can cut the corresponding image blocks out of the panoramic image to which the numbers belong and apply the back-projection transformation to obtain the second image. Compared with loading individual image blocks for back-projection transformation as in the embodiment shown in fig. 7, this embodiment requires loading the whole panoramic image before cutting the image blocks, so the loading time is longer, the loading efficiency is lower, and the second image is fed back to the terminal relatively more slowly.
In an embodiment, the terminal itself may store the image blocks under each viewpoint together with the first and second index relationships, or the image blocks together with the third index relationship, or the panoramic image under each viewpoint. When the terminal obtains the first image by photographing at high magnification, it may perform S703-S707 itself to obtain the high-definition second image and then display the second image in response to the image display instruction.
The above embodiments use the interaction between the terminal and the cloud as examples, illustrating both a scenario in which the cloud processes an image from the terminal and a scenario in which the terminal processes a captured image itself. More generally, for an electronic device (which may be a cloud, a terminal, or another device with processing capability), referring to fig. 11, the image processing method provided by the embodiment of the present application may further include:
S1101, acquiring a first image to be processed.
When the electronic device is a cloud, the cloud may acquire the first image to be processed as follows: the terminal captures the first image and sends it to the cloud; see the related description in S701-S702. In an embodiment, the first image to be processed may also be uploaded to the cloud by a user, or be an image stored locally in the cloud.
When the electronic device is a terminal, the terminal may capture the first image, or the first image may be an image stored locally in the terminal.
When the electronic device is other device with processing capability, the device may capture the first image, or upload the first image to the device by a user, or the first image may be an image stored locally in the device, or the first image may be an image transmitted from other electronic device.
In the embodiment of the present application, a manner of acquiring the to-be-processed first image by the electronic device is not limited.
And S1102, acquiring the similarity between the characteristics of the image of each view angle under each viewpoint and the characteristics of the first image.
The electronic device performs the step of S1102, and may refer to the relevant description in S703-S704.
S1103, determining the target identifier of the feature mapping of the image of the visual angle corresponding to the maximum similarity according to the features of the image of the visual angle corresponding to the maximum similarity and the mapping relation.
In one embodiment, the mapping relationship may be the third index relationship, i.e., the mapping between the features of the image and the numbers of the image blocks. In this embodiment, after obtaining the features corresponding to the maximum similarity, the electronic device may obtain the identifiers of the image blocks mapped by those features according to the third index relationship.
In one embodiment, the mapping relationship may include a first index relationship and a second index relationship. The first index relationship is: the coordinate position of the central point of the image of each visual angle in the panoramic image and the mapping relation of the image blocks, and the second index relation is as follows: and the coordinate position of the central point of the image of each view angle in the panoramic image and the mapping relation of the characteristics of the image. In this embodiment, after acquiring the similarity between the feature of the image of each view angle at each viewpoint and the feature of the first image (or the similarity between the feature of the image of each view angle at the viewpoint within the preset distance range of the position of the terminal and the feature of the first image), the electronic device may determine the maximum similarity, and further determine the feature corresponding to the maximum similarity. Further, the electronic device may obtain, according to the stored second index relationship and the feature corresponding to the maximum similarity, the position coordinate of the central point of the feature mapping corresponding to the maximum similarity in the panoramic image, and further obtain, according to the position coordinate of the central point in the panoramic image and the first index relationship, the identifier of the image block mapped by the position coordinate of the central point in the panoramic image.
And the identifier of the image block mapped by the position coordinates of the central point in the panoramic image is the target identifier.
And S1104, acquiring a second image according to the image block corresponding to the target identifier, wherein the definition of the second image is greater than that of the first image.
In an embodiment, the electronic device may splice image blocks corresponding to the target identifier to obtain the second image.
Alternatively, in an embodiment, the electronic device may process the image block corresponding to the target identifier in the manner in S707 to obtain the second image.
After the electronic device obtains the second image, because the second image is obtained based on the corresponding image block under the viewpoint, the definition of the second image is higher than that of the first image, so that the electronic device can process the first image and obtain an image with higher definition.
In an embodiment, after the electronic device obtains the second image, the electronic device may store the second image, or transmit the second image to other electronic devices, and this embodiment of the application does not limit post-processing of the second image. For the scene of interaction between the cloud and the terminal, the cloud can send the second image to the terminal for display and storage.
In the embodiment of the application, the electronic device stores the image blocks corresponding to the panoramic image under each viewpoint, and the mapping relationship between the features of the image at each view angle under each viewpoint and the identifiers of those image blocks; the image at each view angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint are obtained based on the panoramic image under each viewpoint, and the panoramic image under each viewpoint is a high-definition image.
Fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus may be a cloud, a terminal, or an electronic device in the above embodiments, or a chip in the cloud, or a chip in the terminal, or a chip in the electronic device, and is used to implement the image processing method provided in the embodiment of the present application. The electronic equipment stores image blocks corresponding to the panoramic image under each viewpoint, mapping relations between the characteristics of the image of each viewpoint and the identification of the image block corresponding to the panoramic image under each viewpoint, and the image of each viewpoint and the image block corresponding to the panoramic image under each viewpoint are obtained based on the panoramic image under each viewpoint.
Referring to fig. 12, the image processing apparatus 1200 includes: a processing module 1201, a storage module 1202, and a transceiver module 1203.
The processing module 1201 is configured to: acquire a first image to be processed; acquire the similarity between the features of the image at each view angle under each viewpoint and the features of the first image; determine, according to the features of the image at the view angle corresponding to the maximum similarity and the mapping relationship, the target identifier mapped by those features; and acquire a second image according to the image blocks corresponding to the target identifier, the definition of the second image being higher than that of the first image.
In a possible implementation manner, the processing module 1201 is specifically configured to acquire the first image and the position at which the first image was captured, determine a target viewpoint within a preset range of that position, and acquire the similarity between the features of the image at each view angle under the target viewpoint and the features of the first image.
In a possible implementation manner, the mapping relationship includes a first index relationship and a second index relationship. The second index relationship is the mapping between the features of the image at each view angle under each viewpoint and the center point of the image at each view angle under each viewpoint; the first index relationship is the mapping between the center point of the image at each view angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic image under each viewpoint.
The processing module 1201 is specifically configured to determine, according to the feature of the image of the view angle corresponding to the maximum similarity and the second index relationship, a center point of the feature mapping of the image of the view angle corresponding to the maximum similarity, and determine the target identifier according to the center point of the feature mapping of the image of the view angle corresponding to the maximum similarity and the first index relationship.
In a possible implementation manner, the processing module 1201 is further configured to obtain, according to the panoramic image at each viewpoint, a feature of an image of each view angle at each viewpoint, the first index relationship, and the second index relationship.
A storage module 1202, configured to store the feature of the image of each view under each viewpoint, the first index relationship, and the second index relationship.
In a possible implementation manner, the processing module 1201 is specifically configured to perform back projection transformation on the panoramic image at each viewpoint to obtain images of multiple viewing angles at each viewpoint and a coordinate position of a central point of the image at each viewing angle at the corresponding panoramic image at each viewpoint, where an overlap ratio between the images at adjacent viewing angles at each viewpoint is greater than a preset overlap ratio; and extracting the characteristics of the image of each view angle under each viewpoint.
In a possible implementation manner, the processing module 1201 is specifically configured to slide in the panoramic image by using a sliding window with a second preset size in the panoramic image at each viewpoint, and sequentially obtain, by using back projection transformation, a coordinate position of an image of a view angle corresponding to a part of the panoramic image in the sliding window and a coordinate position of a center point of an image of a view angle corresponding to the part of the panoramic image in the corresponding panoramic image, where an image of each view angle at each viewpoint has the second preset size.
In a possible implementation manner, the processing module 1201 is specifically configured to construct the second index relationship according to the coordinate position of the central point of the image of each view angle at each viewpoint in the corresponding panoramic image and the feature of the image of each view angle at each viewpoint.
In a possible implementation manner, the processing module 1201 is specifically configured to cut the panoramic image at each viewpoint to obtain an image block corresponding to the panoramic image at each viewpoint, and construct the first index relationship according to the coordinate position of the central point of the image at each viewpoint in the corresponding panoramic image and the image block corresponding to the panoramic image at each viewpoint.
In a possible implementation manner, image blocks corresponding to the panoramic image under each view point have a first preset size.
In a possible implementation manner, the processing module 1201 is further configured to acquire the panoramic image at each viewpoint according to the pre-acquired images of the multiple viewpoints at each viewpoint by using a panoramic image stitching technique, where an overlap ratio between the pre-acquired images at adjacent viewpoints at each viewpoint is smaller than the preset overlap ratio.
In a possible implementation manner, the processing module 1201 is specifically configured to project a pre-acquired image of each view angle at each view point in a first view plane to a second view plane to which a panoramic image belongs, so as to obtain the panoramic image at each view point and a transformation relationship between the first view plane and the second view plane.
In a possible implementation manner, the processing module 1201 is specifically configured to project, according to the transformation relationship, the image block corresponding to the target identifier to the first view plane by using back projection transformation, so as to obtain the second image.
In a possible implementation manner, the electronic device is a cloud, and the transceiver module 1203 is configured to receive a first image from a terminal and a position of the terminal when the terminal captures the first image, and send the second image to the terminal.
The image processing apparatus provided by the embodiment of the application is used for executing the image processing method in the above embodiment, and has the same implementation principle and technical effect as the above embodiment.
In an embodiment, an embodiment of the present application further provides an electronic device, referring to fig. 13, where the electronic device may be the cloud terminal in the foregoing embodiment or the electronic device in fig. 11, and the electronic device may include: a processor (e.g., CPU)1301, and a memory 1302. The memory 1302 may include a random-access memory (RAM) and may further include a non-volatile memory (NVM), such as at least one disk memory, and the memory 1302 may store various instructions for performing various processing functions and implementing the method steps of the present application.
In one embodiment, the electronic device may include a screen 1303 for displaying an interface, an image, and the like of the electronic device.
Optionally, the electronic device related to the present application may further include: a power supply 1304, a communication bus 1305, and a communication port 1306. The communication port 1306 is used for enabling connection communication between the electronic device and other peripherals. In an embodiment of the present application, the memory 1302 is used for storing computer executable program code, which includes instructions; when the processor executes the instructions, the instructions cause the processor of the electronic device to execute the actions in the above method embodiments, which have similar implementation principles and technical effects, and are not described herein again.
It should be noted that the modules or components described in the above embodiments may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), or one or more microprocessors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, when one of the above modules is implemented in the form of a processing element scheduler code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or other processor that can call program code, such as a controller. As another example, these modules may be integrated together, implemented in the form of a system-on-a-chip (SOC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The term "plurality" herein means two or more. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division". It is to be understood that the terms "first," "second," and the like, in the description of the present application, are used for distinguishing between descriptions and not necessarily for describing a sequential or chronological order, or for indicating or implying a relative importance.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application.
It should be understood that, in the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.

Claims (17)

1. An image processing method is applied to an electronic device, wherein image blocks corresponding to a panoramic image of each viewpoint, and a mapping relationship between features of an image of each view angle of each viewpoint and an identifier of the image block corresponding to the panoramic image of each viewpoint are stored in the electronic device, and the image of each view angle of each viewpoint and the image block corresponding to the panoramic image of each viewpoint are obtained based on the panoramic image of each viewpoint, and the method includes:
acquiring a first image to be processed;
acquiring the similarity between the characteristics of the image of each view angle under each viewpoint and the characteristics of the first image;
determining a target identifier of feature mapping of the image of the visual angle corresponding to the maximum similarity according to the feature of the image of the visual angle corresponding to the maximum similarity and the mapping relation;
and acquiring a second image according to the image block corresponding to the target identifier, wherein the definition of the second image is higher than that of the first image.
2. The method of claim 1, wherein the obtaining a first image to be processed comprises:
acquiring the first image and shooting the position of the first image;
the obtaining of the similarity between the feature of the image of each view angle at each viewpoint and the feature of the first image includes:
determining a target viewpoint within a preset range from the position;
and acquiring the similarity between the characteristics of the image of each view angle under the target viewpoint and the characteristics of the first image.
3. The method of claim 1 or 2, wherein the mapping relationship comprises a first index relationship and a second index relationship, wherein the second index relationship is: the mapping relationship between the feature of the image of each view angle under each viewpoint and the central point of the image of each view angle under each viewpoint; and the first index relationship is: the mapping relationship between the central point of the image of each view angle under each viewpoint and the identifier of the image block corresponding to the panoramic image under each viewpoint;
the determining, according to the feature of the image of the perspective corresponding to the maximum similarity and the mapping relationship, the target identifier of the feature mapping of the image of the perspective corresponding to the maximum similarity includes:
determining a central point of feature mapping of the image of the visual angle corresponding to the maximum similarity according to the feature of the image of the visual angle corresponding to the maximum similarity and the second index relationship;
and determining the target identification according to the central point of the feature mapping of the image of the visual angle corresponding to the maximum similarity and the first index relation.
4. The method of claim 3, wherein prior to acquiring the first image to be processed, further comprising:
acquiring the characteristics of the image of each view angle under each viewpoint, the first index relationship and the second index relationship according to the panoramic image under each viewpoint;
and storing the characteristics of the images of each view angle under each viewpoint, the first index relationship and the second index relationship.
5. The method according to claim 4, wherein the obtaining the feature of the image of each view angle at each view point according to the panoramic image at each view point comprises:
performing back projection transformation on the panoramic image under each viewpoint to obtain images of a plurality of view angles under each viewpoint and coordinate positions of the central points of the images of each view angle under each viewpoint on the corresponding panoramic image, wherein the overlapping rate of the images of the adjacent view angles under each viewpoint is greater than a preset overlapping rate;
and extracting the characteristics of the image of each view angle under each viewpoint.
6. The method of claim 5, wherein the applying a back projection transformation to the panoramic image from each viewpoint to obtain images from a plurality of viewpoints, and the coordinate position of the center point of each image from each viewpoint in the corresponding panoramic image comprises:
and in the panoramic image under each viewpoint, sliding a sliding window with a second preset size in the panoramic image, and sequentially obtaining the images of the visual angles corresponding to part of the panoramic image in the sliding window and the coordinate positions of the central points of the images of the visual angles corresponding to the part of the panoramic image in the corresponding panoramic image by adopting back projection transformation, wherein the images of each visual angle under each viewpoint have the second preset size.
7. The method of claim 5 or 6, wherein obtaining the second index relationship comprises:
and constructing the second index relationship according to the coordinate position of the central point of the image of each view angle under each viewpoint in the corresponding panoramic image and the characteristics of the image of each view angle under each viewpoint.
8. The method according to any one of claims 5-7, wherein obtaining the first index relationship comprises:
cutting the panoramic image under each viewpoint to obtain image blocks corresponding to the panoramic image under each viewpoint;
and constructing the first index relationship according to the coordinate position of the central point of the image of each view angle under each view point in the corresponding panoramic image and the image block corresponding to the panoramic image under each view point.
9. The method of claim 8, wherein the image blocks corresponding to the panoramic image under each view point have a first preset size.
10. The method according to any one of claims 5 to 9, wherein before obtaining the features of the image of each view angle at each view point, the first index relationship, and the second index relationship according to the panoramic image at each view point, the method further comprises:
and acquiring the panoramic image under each viewpoint according to the pre-acquired images of the plurality of viewpoints under each viewpoint by adopting a panoramic image splicing technology, wherein the overlapping rate of the pre-acquired images of the adjacent viewpoints under each viewpoint is less than the preset overlapping rate.
11. The method of claim 10, wherein the obtaining the panoramic image from each viewpoint according to the pre-collected images from the plurality of viewing angles from each viewpoint by using the panoramic image stitching technique comprises:
and projecting the pre-collected image of each view angle under each viewpoint in the first view plane to a second view plane to which the panoramic image belongs to obtain the panoramic image under each viewpoint and the transformation relation between the first view plane and the second view plane.
12. The method according to claim 11, wherein the obtaining a second image according to the image block corresponding to the target identifier comprises:
and projecting the image block corresponding to the target identifier to the first view plane by adopting back projection transformation according to the transformation relation to obtain the second image.
13. The method of claim 1, wherein the electronic device is a cloud, and the acquiring the first image and the position at which the first image was captured comprises:
receiving a first image from a terminal and the position of the terminal when the terminal shoots the first image;
after the acquiring the second image, the method further comprises:
and sending the second image to the terminal.
14. An image processing apparatus, wherein an image block corresponding to a panoramic image from each viewpoint, and a mapping relationship between a feature of an image at each view angle from each viewpoint and an identifier of an image block corresponding to the panoramic image from each viewpoint are stored in an electronic device, and the image at each view angle from each viewpoint and the image block corresponding to the panoramic image from each viewpoint are obtained based on the panoramic image from each viewpoint, the apparatus comprising:
a processing module to:
acquiring a first image to be processed;
acquiring the similarity between the characteristics of the image of each view angle under each viewpoint and the characteristics of the first image;
determining a target identifier of feature mapping of the image of the visual angle corresponding to the maximum similarity according to the feature of the image of the visual angle corresponding to the maximum similarity and the mapping relation;
and acquiring a second image according to the image block corresponding to the target identifier, wherein the definition of the second image is higher than that of the first image.
15. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executing the computer-executable instructions stored by the memory causes the processor to perform the method of any of claims 1-13.
16. A computer-readable storage medium, in which a computer program or instructions are stored which, when executed, implement the method of any one of claims 1-13.
17. A computer program product comprising a computer program or instructions which, when executed by a processor, performs the method of any one of claims 1 to 13.
CN202210109463.6A 2022-01-28 2022-01-28 Image processing method and device and electronic equipment Pending CN114627000A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210109463.6A CN114627000A (en) 2022-01-28 2022-01-28 Image processing method and device and electronic equipment
PCT/CN2022/138573 WO2023142732A1 (en) 2022-01-28 2022-12-13 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210109463.6A CN114627000A (en) 2022-01-28 2022-01-28 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114627000A 2022-06-14

Family

ID=81899073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210109463.6A Pending CN114627000A (en) 2022-01-28 2022-01-28 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN114627000A (en)
WO (1) WO2023142732A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142732A1 (en) * 2022-01-28 2023-08-03 华为技术有限公司 Image processing method and apparatus, and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566724B (en) * 2017-09-13 2020-07-07 Vivo Mobile Communication Co., Ltd. Panoramic image shooting method and mobile terminal
KR102287133B1 (en) * 2018-11-30 2021-08-09 Korea Electronics Technology Institute Method and apparatus for providing free viewpoint video
CN112989092A (en) * 2019-12-13 2021-06-18 Huawei Technologies Co Ltd Image processing method and related device
CN111815752B (en) * 2020-07-16 2022-11-29 Spreadtrum Communications (Shanghai) Co., Ltd. Image processing method and device and electronic equipment
CN114627000A (en) * 2022-01-28 2022-06-14 Huawei Technologies Co Ltd Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
WO2023142732A1 (en) 2023-08-03

Similar Documents

Publication Title
US10915998B2 (en) Image processing method and device
CN109474780B (en) Method and device for image processing
EP2328125A1 (en) Image splicing method and device
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
US9697581B2 (en) Image processing apparatus and image processing method
US11044398B2 (en) Panoramic light field capture, processing, and display
CN112862897B (en) Phase-shift encoding circle-based rapid calibration method for camera in out-of-focus state
CN109495733B (en) Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN108805799B (en) Panoramic image synthesis apparatus, panoramic image synthesis method, and computer-readable storage medium
KR100934211B1 (en) How to create a panoramic image on a mobile device
CN109661815A (en) There are the robust disparity estimations in the case where the significant Strength Changes of camera array
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN110266926B (en) Image processing method, image processing device, mobile terminal and storage medium
WO2023142732A1 (en) Image processing method and apparatus, and electronic device
CN113298187B (en) Image processing method and device and computer readable storage medium
CN113793392A (en) Camera parameter calibration method and device
CN115705651A (en) Video motion estimation method, device, equipment and computer readable storage medium
CN112598571B (en) Image scaling method, device, terminal and storage medium
WO2023221969A1 (en) Method for capturing 3d picture, and 3d photographic system
CN110177216B (en) Image processing method, image processing device, mobile terminal and storage medium
US11240477B2 (en) Method and device for image rectification
CN115174878B (en) Projection picture correction method, apparatus and storage medium
CN113436247B (en) Image processing method and device, electronic equipment and storage medium
US10832425B2 (en) Image registration method and apparatus for terminal, and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination