WO2023142732A1 - Image processing method and apparatus, and electronic device

Image processing method and apparatus, and electronic device

Info

Publication number
WO2023142732A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
viewpoint
under
panoramic
images
Prior art date
Application number
PCT/CN2022/138573
Other languages
French (fr)
Chinese (zh)
Inventor
李政
陈刚
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023142732A1 publication Critical patent/WO2023142732A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image

Definitions

  • the embodiments of the present application relate to the field of image processing, and in particular, to an image processing method, device, and electronic equipment.
  • terminals can support high-magnification photography at present, such as 30 times and 50 times.
  • however, the captured images have low definition and blurred details.
  • a large number of high-definition images of different shooting positions and different shooting angles at the same shooting position may be stored in advance. After taking pictures with a high magnification, the terminal can obtain the high-definition image with the greatest similarity to the image taken by the terminal among the pre-stored high-definition images, so that the terminal can display the high-definition image.
  • this method needs to store a large number of high-definition images, and the storage overhead is large.
  • Embodiments of the present application provide an image processing method, device, and electronic device, which can reduce storage overhead.
  • the embodiment of the present application provides an image processing method. The execution subject of the method may be an electronic device or a chip in the electronic device, and the following description takes the electronic device being a cloud as an example.
  • the electronic device stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint. The images of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic images under each viewpoint are all obtained based on the panoramic images under each viewpoint, and the panoramic images under each viewpoint are high-definition images, for example, obtained from images captured by a device capable of capturing high-definition images, such as a single-lens reflex camera.
  • the electronic device can obtain the first image to be processed and extract features from the first image, and then obtain the similarity between the features of the image of each viewing angle under each viewpoint and the features of the first image, where the image with the maximum similarity is the image whose features are most similar to those of the first image.
  • the electronic device may determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the feature of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship.
  • further, the electronic device can acquire a second image according to the image block corresponding to the target identifier, where the definition of the second image is higher than the definition of the first image.
  • the second image is obtained according to the image block in the panoramic image to which the image block corresponding to the target identifier belongs.
  • because the definition of the panoramic images under each viewpoint is greater than or equal to the preset definition, that is, the definition of the panoramic images under each viewpoint is higher than that of the first image, and the image blocks corresponding to the panoramic images under each viewpoint are obtained based on those panoramic images, the definition of the second image obtained based on the image block corresponding to the target identifier is also greater than or equal to the preset definition.
  • in this way, the terminal can obtain a high-definition second image; and because the electronic device stores the image blocks corresponding to the panoramic images under each viewpoint, as well as the mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint, the storage overhead can be reduced compared with the prior-art approach of storing high-definition images of different viewing angles under each viewpoint.
  • in a possible implementation, when the electronic device acquires the first image, it may also acquire the position (that is, the viewpoint) of the device that captured the first image. In this way, the electronic device may determine a target viewpoint within a preset range from that position, and then acquire the similarity between the features of the image of each viewing angle under the target viewpoint and the features of the first image. This reduces the amount of similarity computation: the electronic device only needs to compare the first image with the images of each viewing angle under the target viewpoints within the preset range of the terminal's position, instead of with the images of each viewing angle under every viewpoint. A minimal sketch of such a position-filtered search is given below.
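  • The following sketch is for illustration only and is not part of the claimed method; the cosine-similarity metric, the planar position coordinates, the dictionary layout of the stored entries, and all names are assumptions.

```python
import numpy as np

def find_best_view(first_image_feature, view_entries, terminal_position, preset_range):
    """Return the stored view entry whose feature is most similar to the feature of
    the first image, considering only viewpoints within the preset range of the
    terminal's position (a hypothetical realization of the filtering described above)."""
    best_entry, best_score = None, -np.inf
    for entry in view_entries:
        # entry: dict with 'viewpoint_xy', 'feature', 'center_point' (see the second index relationship)
        if np.linalg.norm(np.array(entry["viewpoint_xy"]) - np.array(terminal_position)) > preset_range:
            continue  # skip viewpoints outside the preset range to reduce computation
        a, b = first_image_feature, entry["feature"]
        score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        if score > best_score:
            best_entry, best_score = entry, score
    return best_entry, best_score
```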
  • the mapping relationship includes a first index relationship and a second index relationship
  • the second index relationship is: the mapping relationship between the features of the image of each viewing angle under each viewpoint and the center point of the image of that viewing angle;
  • the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
  • the electronic device may determine, according to the features of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, the center point of the feature map of that image, and then determine the target identifier according to that center point and the first index relationship, as sketched below.
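  • The two-step lookup could, for example, be modelled with two dictionaries; the key formats, the view identifier, and the (row, column) block identifiers are illustrative assumptions.

```python
# Hypothetical index structures (keys and identifiers are illustrative only).
second_index = {
    "view_0001": {"feature": None, "center_point": ("longitude 1", "latitude 1")},
}
first_index = {
    ("longitude 1", "latitude 1"): [("row 1", "column 1"), ("row 1", "column 2")],
}

def lookup_target_identifiers(best_view_id):
    """Second index: best-matching view -> center point of that view;
    first index: center point -> identifiers of the image blocks it covers."""
    center_point = second_index[best_view_id]["center_point"]
    return first_index[center_point]
```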
  • the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship stored in the electronic device may be preset in the electronic device by a person, or acquired by the electronic device itself.
  • for example, the electronic device may acquire, according to the panoramic images under each viewpoint, the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship, and then store the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
  • the electronic device may apply back-projection transformation to the panoramic images under each viewpoint to obtain images of multiple viewing angles (highly overlapping images) under each viewpoint and the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle, where the overlap rate between images of adjacent viewing angles under each viewpoint is greater than the preset overlap rate; the features of the images of each viewing angle under each viewpoint are then extracted.
  • the electronic device may directly perform back-projection transformation on the panoramic images at each viewpoint.
  • the electronic device may acquire panoramic images at various viewpoints according to the low-overlapping images, and then perform back-projection transformation on the panoramic images at various viewpoints.
  • the electronic device may use panoramic image stitching technology to obtain the panoramic images under each viewpoint according to pre-collected images of multiple viewing angles under each viewpoint, where the overlap rate between pre-collected images of adjacent viewing angles under each viewpoint is less than the preset overlap rate, that is, they are low-overlap images.
  • the electronic device may slide a sliding window with a second preset size in the panoramic image under each viewpoint, and use back-projection transformation to sequentially obtain the image of the viewing angle corresponding to the partial panoramic image in the sliding window and the coordinate position, in the corresponding panoramic image, of the center point of that image; the image of each viewing angle under each viewpoint has the second preset size.
  • the electronic device may construct the second index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the features of the image of each viewing angle under each viewpoint. In addition, the electronic device may cut the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint, and construct the first index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the image blocks corresponding to the panoramic images under each viewpoint.
  • the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
  • when the electronic device uses panoramic image stitching technology to obtain the panoramic images under each viewpoint according to the pre-collected images of multiple viewing angles under each viewpoint, it projects the pre-collected image of each viewing angle, which lies in the first viewing plane, onto the second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint.
  • the electronic device may also obtain the transformation relationship between the first viewing plane and the second viewing plane.
  • the electronic device may use back-projection transformation to project the image block corresponding to the target identifier onto the first viewing plane according to the transformation relationship, so as to obtain the second image.
  • in this way, the electronic device can obtain a second image that lies on the same viewing plane as the image captured by the terminal, so there is no difference in viewing plane when the user views it; the conversion from the first image to the second image is not perceived by the user, which can improve user experience. A rough sketch of this final step is given below.
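  • The sketch below is only illustrative and assumes that the image blocks of one view lie side by side in a single row of the panorama grid and that the first-plane-to-second-plane homography H obtained during stitching is available.

```python
import cv2
import numpy as np

def build_second_image(blocks, target_identifiers, H, out_size):
    # Assemble the high-definition image blocks identified by the target identifier;
    # here they are assumed to lie side by side in one row of the panorama grid.
    mosaic = np.hstack([blocks[i] for i in target_identifiers])
    # Back-project from the second viewing plane (panorama) to the first viewing
    # plane, so the result lies on the same plane as the terminal's captured image.
    return cv2.warpPerspective(mosaic, np.linalg.inv(H), out_size)
```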
  • in a possible scenario, the electronic device is the cloud. The terminal can capture the first image, but the definition of the first image is not high; therefore, in this scenario, the terminal can send the first image, together with the location of the terminal when capturing the first image, to the cloud.
  • the cloud can obtain the first image and the location where the first image was shot, and then use the method described in the above possible implementation manners to obtain the second image.
  • after the cloud obtains the second image, it can send the second image to the terminal, so that the terminal can display and store the second image; the user can then see the high-definition second image on the terminal, which can improve user experience. A hypothetical terminal-side exchange is sketched below.
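  • The endpoint URL and field names in this sketch are invented for illustration and are not specified by the embodiment.

```python
import requests

def request_second_image(first_image_path, longitude, latitude,
                         url="https://cloud.example.com/enhance"):  # placeholder endpoint
    # Upload the first image together with the location where it was captured,
    # and receive the high-definition second image fed back by the cloud.
    with open(first_image_path, "rb") as f:
        resp = requests.post(url,
                             files={"first_image": f},
                             data={"longitude": longitude, "latitude": latitude},
                             timeout=30)
    resp.raise_for_status()
    return resp.content  # bytes of the second image, to be displayed and stored by the terminal
```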
  • the embodiment of the present application provides an image processing method. The execution subject of the method may be a terminal or a chip in the terminal, and the following uses a terminal as an example for description.
  • the terminal captures the first image with a first magnification, where the first magnification is greater than or equal to the preset magnification, and the terminal sends the first image to the electronic device.
  • the terminal receives the second image from the electronic device, and may display the second image in response to the image display instruction, and the definition of the second image is higher than that of the first image.
  • in a possible implementation, the terminal stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint; the images of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic images under each viewpoint are obtained based on the panoramic images under each viewpoint, and the definition of the panoramic images under each viewpoint is greater than or equal to the preset definition.
  • the terminal may acquire the similarity between the features of the image of each viewing angle under each viewpoint and the features of the first image, and determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the features of that image and the mapping relationship.
  • the terminal acquires a second image according to the image block corresponding to the target identifier, and the definition of the second image is higher than that of the first image, and then can display the second image in response to an image display instruction.
  • the acquiring the similarity between the features of the image of each viewing angle under each viewpoint and the features of the first image includes: determining a target viewpoint within a preset range from the terminal, and acquiring the similarity between the features of the image of each viewing angle under the target viewpoint and the features of the first image.
  • the mapping relationship includes a first index relationship and a second index relationship
  • the second index relationship is: the mapping relationship between the features of the image of each viewing angle under each viewpoint and the center point of the image of that viewing angle;
  • the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
  • the determining the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity includes: determining, according to the features of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, the center point of the feature map of that image; and determining the target identifier according to that center point and the first index relationship.
  • the method further includes: acquiring, according to the panoramic images under each viewpoint, the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship; and storing the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
  • the acquiring the features of the image of each viewing angle under each viewpoint according to the panoramic image under each viewpoint includes: applying back-projection transformation to the panoramic image under each viewpoint to obtain images of multiple viewing angles under each viewpoint and the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle, where the overlap rate between images of adjacent viewing angles under each viewpoint is greater than the preset overlap rate; and extracting the features of the image of each viewing angle under each viewpoint.
  • the applying back-projection transformation to the panoramic images under each viewpoint to obtain images of multiple viewing angles under each viewpoint and the coordinate position of the center point of the image of each viewing angle in the corresponding panoramic image includes: sliding, in the panoramic image under each viewpoint, a sliding window with a second preset size, and using back-projection transformation to sequentially obtain the image of the viewing angle corresponding to the partial panoramic image in the sliding window and the coordinate position of the center point of that image in the corresponding panoramic image, where the image of each viewing angle under each viewpoint has the second preset size.
  • the obtaining the second index relationship includes: constructing the second index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the features of the image of each viewing angle under each viewpoint.
  • the obtaining the first index relationship includes: cutting the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint; and constructing the first index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the image blocks corresponding to the panoramic image under each viewpoint.
  • the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
  • in a possible implementation, before the acquiring, according to the panoramic images under each viewpoint, the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship, the method further includes: using panoramic image stitching technology to obtain the panoramic image under each viewpoint according to pre-collected images of multiple viewing angles under each viewpoint, where the overlap rate between pre-collected images of adjacent viewing angles under each viewpoint is less than the preset overlap rate.
  • the using panoramic image stitching technology to acquire the panoramic images under each viewpoint according to pre-collected images of multiple viewing angles under each viewpoint includes: projecting the pre-collected image of each viewing angle under each viewpoint, which lies in the first viewing plane, onto the second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint and the transformation relationship between the first viewing plane and the second viewing plane.
  • the acquiring the second image according to the image block corresponding to the target identifier includes: projecting, according to the transformation relationship, the image block corresponding to the target identifier onto the first viewing plane to obtain the second image.
  • the embodiment of the present application provides an image processing apparatus, and the image processing apparatus may be an electronic device or a chip in the electronic device.
  • the image processing device includes:
  • a processing module, configured to obtain the similarity between the features of the image of each viewing angle under each viewpoint and the features of the first image, determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity, and acquire, according to the image block corresponding to the target identifier, a second image, where the definition of the second image is higher than the definition of the first image.
  • the processing module is specifically configured to acquire the first image and the location where the first image was taken, determine a target viewpoint within a preset range from that location, and acquire the similarity between the features of the image of each viewing angle under the target viewpoint and the features of the first image.
  • the mapping relationship includes a first index relationship and a second index relationship
  • the second index relationship is: the mapping relationship between the features of the image of each viewing angle under each viewpoint and the center point of the image of that viewing angle;
  • the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
  • the processing module is specifically configured to determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the features of that image and the second index relationship, and to determine the target identifier according to that center point and the first index relationship.
  • the processing module is further configured to acquire, according to the panoramic images under each viewpoint, the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
  • a storage module configured to store features of images of each view angle under each view point, the first index relationship, and the second index relationship.
  • the processing module is specifically configured to perform back-projection transformation on the panoramic images under each viewpoint to obtain images of multiple viewing angles under each viewpoint and the coordinate position of the center point of the image of each viewing angle in the corresponding panoramic image, where the overlap rate between images of adjacent viewing angles under each viewpoint is greater than the preset overlap rate; and to extract the features of the image of each viewing angle under each viewpoint.
  • the processing module is specifically configured to slide, in the panoramic image under each viewpoint, a sliding window with a second preset size, and use back-projection transformation to sequentially obtain the image of the viewing angle corresponding to the partial panoramic image in the sliding window and the coordinate position of the center point of that image in the corresponding panoramic image, where the image of each viewing angle under each viewpoint has the second preset size.
  • the processing module is specifically configured to construct the second index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the features of the image of each viewing angle under each viewpoint.
  • the processing module is specifically configured to cut the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint, and to construct the first index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the image blocks corresponding to the panoramic image under each viewpoint.
  • the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
  • the processing module is further configured to use panoramic image stitching technology to acquire the panoramic images under each viewpoint according to pre-collected images of multiple viewing angles under each viewpoint, where the overlap rate between pre-collected images of adjacent viewing angles under each viewpoint is smaller than the preset overlap rate.
  • the processing module is specifically configured to project the pre-collected image of each viewing angle under each viewpoint, which lies in the first viewing plane, onto the second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint and the transformation relationship between the first viewing plane and the second viewing plane.
  • the processing module is specifically configured to, according to the transformation relationship, project the image block corresponding to the target identifier to the first viewing plane by using back-projection transformation to obtain the second image.
  • the transceiver module is configured to receive the first image from the terminal and the location of the terminal when the terminal captures the first image, and send the second image to the terminal.
  • the embodiment of the present application provides an image processing device, and the image processing device may be a terminal or a chip in the terminal.
  • the image processing device includes:
  • the terminal stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint; the images of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic images under each viewpoint are all obtained based on the panoramic images under each viewpoint.
  • the photographing module is used to capture the first image with a first magnification, and the first magnification is greater than or equal to the preset magnification.
  • a processing module, configured to obtain the similarity between the features of the image of each viewing angle under each viewpoint and the features of the first image, determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the features of that image and the mapping relationship, and acquire a second image according to the image block corresponding to the target identifier.
  • the definition of the second image is higher than that of the first image.
  • the display module is configured to display the second image in response to the image display instruction.
  • the processing module is specifically configured to determine a target viewpoint within a preset range from the terminal, and acquire the similarity between the features of the image of each viewing angle under the target viewpoint and the features of the first image.
  • the mapping relationship includes a first index relationship and a second index relationship
  • the second index relationship is: the mapping relationship between the features of the image of each viewing angle under each viewpoint and the center point of the image of that viewing angle;
  • the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
  • the processing module is specifically configured to determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the features of that image and the second index relationship, and to determine the target identifier according to that center point and the first index relationship.
  • the processing module is further configured to acquire, according to the panoramic images under each viewpoint, the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
  • a storage module configured to store features of images of each view angle under each view point, the first index relationship, and the second index relationship.
  • the processing module is specifically configured to perform back-projection transformation on the panoramic images under each viewpoint to obtain images of multiple viewing angles under each viewpoint and the coordinate position of the center point of the image of each viewing angle in the corresponding panoramic image, where the overlap rate between images of adjacent viewing angles under each viewpoint is greater than the preset overlap rate; and to extract the features of the image of each viewing angle under each viewpoint.
  • the processing module is specifically configured to slide, in the panoramic image under each viewpoint, a sliding window with a second preset size, and use back-projection transformation to sequentially obtain the image of the viewing angle corresponding to the partial panoramic image in the sliding window and the coordinate position of the center point of that image in the corresponding panoramic image, where the image of each viewing angle under each viewpoint has the second preset size.
  • the processing module is specifically configured to construct the second index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the features of the image of each viewing angle under each viewpoint.
  • the processing module is specifically configured to cut the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint, and to construct the first index relationship according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image and the image blocks corresponding to the panoramic image under each viewpoint.
  • the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
  • the processing module is further configured to use panoramic image stitching technology to acquire the panoramic images under each viewpoint according to pre-collected images of multiple viewing angles under each viewpoint, where the overlap rate between pre-collected images of adjacent viewing angles under each viewpoint is smaller than the preset overlap rate.
  • the processing module is specifically configured to project the pre-collected image of each viewing angle under each viewpoint, which lies in the first viewing plane, onto the second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint and the transformation relationship between the first viewing plane and the second viewing plane.
  • the processing module is specifically configured to, according to the transformation relationship, project the image block corresponding to the target identifier to the first viewing plane by using back-projection transformation to obtain the second image.
  • the embodiment of the present application provides an electronic device, and the electronic device may be the above-mentioned cloud or terminal.
  • the electronic device may include: a processor and a memory.
  • the memory is used to store computer-executable program codes, and the program codes include instructions; when the processor executes the instructions, the instructions cause the electronic device to execute the methods in the first aspect and the second aspect.
  • the embodiments of the present application provide a computer program product including instructions, which, when run on a computer, cause the computer to execute the methods in the first aspect and the second aspect above.
  • the embodiment of the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer executes the methods in the above first aspect and second aspect.
  • FIG. 1 is a schematic diagram of a scene applicable to an embodiment of the present application
  • FIG. 2 is a schematic diagram of high-definition images at different shooting angles at a shooting position stored in the cloud in the prior art
  • Fig. 3 is a schematic diagram of an image processing method in the prior art
  • Fig. 4 is a schematic diagram of image blocks and index relationships stored in the cloud according to an embodiment of the present application.
  • Fig. 5 is another schematic diagram of image blocks and index relationships stored in the cloud according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of images of different viewing angles under a viewpoint acquired by the cloud according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of an embodiment of an image processing method provided in the embodiment of the present application.
  • FIG. 8 is a schematic diagram of a variation of the camera interface provided by the embodiment of the present application.
  • FIG. 9 is a schematic flowchart of an embodiment of an image processing method provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a panoramic image stored in the cloud provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of another embodiment of the image processing method provided by the embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an image processing device provided in an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Viewpoint: can be understood as the position of the camera device (such as a mobile phone) when taking a picture; one viewpoint corresponds to one position.
  • Panoramic image stitching technology: stitching multiple images into one large-scale image. Exemplarily, in this embodiment of the application, multiple high-definition images are stitched into one panoramic image.
  • Panoramic image stitching may include, but is not limited to, four steps: detecting and extracting image features and key points, matching the key points of two images, estimating the homography matrix using the random sample consensus (RANSAC) algorithm, and stitching the images.
  • the specific implementation of panoramic image stitching technology may include: using the scale-invariant feature transform (SIFT) local descriptor to detect key points and features (feature descriptors, or SIFT features) in each image, and matching feature descriptors between two images, that is, using the features to match the key points of the two images; then using the RANSAC algorithm to estimate the homography matrix (homography estimation) from the key points matched on the two images, that is, aligning one of the images with the other image through the correspondence.
  • after the homography matrix is obtained, perspective transformation can be used to stitch the images. For example, the homography matrix, the image to be warped, and the shape of the output image can be provided, where the shape of the output image can be obtained by taking the height of the image and the sum of the widths of the two images. A minimal sketch of this kind of two-image stitching is given below.
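  • The following sketch follows the four steps above using OpenCV's SIFT detector, brute-force matching with a ratio test, RANSAC homography estimation, and a perspective warp; the ratio-test threshold 0.75 and the RANSAC reprojection threshold 5.0 are assumed values, not parameters of the embodiment.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    # 1. Detect key points and extract SIFT descriptors in both images.
    sift = cv2.SIFT_create()
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(gray_l, None)
    kp2, des2 = sift.detectAndCompute(gray_r, None)

    # 2. Match descriptors between the two images and keep good matches (ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des2, des1, k=2) if m.distance < 0.75 * n.distance]

    # 3. Estimate the homography from the right image to the left image's plane with RANSAC.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Warp the right image onto the left image's plane; the output keeps the left
    # image's height and a width equal to the sum of the two widths.
    h, w = img_left.shape[:2]
    panorama = cv2.warpPerspective(img_right, H, (w + img_right.shape[1], h))
    panorama[0:h, 0:w] = img_left
    # H here plays the role of a first-plane -> second-plane transformation and can
    # be retained for later back-projection (see the description of S403 below).
    return panorama, H
```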
  • Perspective transformation can be understood as: projecting an image to a new viewing plane. Perspective transformation is also called projective mapping or projection transformation.
  • Projection transformation: refer to the description of perspective transformation; it refers to the process of projecting an image onto a new viewing plane.
  • Back-projection transformation: projecting the image on the new viewing plane back to the original viewing plane of the image. It should be understood that during projection transformation, the transformation relationship (such as a transformation matrix) between the original viewing plane and the new viewing plane can be obtained, and back-projection transformation uses this transformation relationship to project the image on the new viewing plane back to the original viewing plane of the image.
  • the back-projection transformation may also be called back-projection mapping or inverse perspective transformation.
  • High-definition image: for example, an image captured by a single-lens reflex (SLR) camera; the definition of a high-definition image is greater than the preset definition.
  • for example, a preset resolution can be used to represent the preset definition.
  • the parameter used to represent definition is not limited in the embodiments of the present application.
  • Photographing magnification: refers to the zoom magnification.
  • High magnification: the zoom magnification used when taking a picture is greater than the preset magnification.
  • the preset magnification depends on the camera capability of the terminal, and the preset magnifications of different terminals may be the same or different. In one embodiment, the preset magnification may be 5, for example.
  • FIG. 1 is a schematic diagram of a scene applicable to an embodiment of the present application.
  • FIG. 1 takes the terminal being a mobile phone, and both the mobile phone and an SLR camera photographing a computer screen, as an example.
  • when the user uses an SLR camera to take a picture at a magnification of 30, a high-definition image can be obtained; for example, the user can clearly see the text "one two three four" on the computer screen in the image.
  • the user takes a photo with a high magnification, which may be understood as: the user takes a photo with a magnification greater than a preset magnification.
  • for the preset magnification, refer to the related description in the above explanation of terms.
  • High-definition images include: high-definition images taken at different shooting locations, and high-definition images taken at the same shooting location at different shooting angles.
  • FIG. 2 is a schematic diagram of high-definition images taken at different shooting angles at shooting position A and stored in the cloud in the prior art. It should be understood that the black rectangles in FIG. 2 are used as an example for illustration.
  • the overlap rate of the frames of the high-definition images stored in the cloud is greater than or equal to the first overlap rate, such as 80%.
  • the shooting location may be called a viewpoint
  • the shooting angle may be called a viewing angle.
  • high-definition images taken from different viewpoints and high-definition images taken from different viewpoints at the same viewpoint are stored in the cloud.
  • the captured image can be sent to the cloud, and the cloud obtains the similarity between each high-definition image stored in the cloud and the image from the terminal, and then feeds back the high-definition image with the highest similarity to the terminal.
  • after the terminal receives the high-definition image from the cloud, it can display the high-definition image, and the user can see the high-definition image obtained by taking a picture at a high magnification.
  • for example, the terminal sends the captured low-definition image (shown as b in the figure) to the cloud, and the cloud can feed back a high-definition image, such as an image in which the text "one two three four" is legible, to the terminal.
  • in a possible implementation, the number of high-definition images stored in the cloud may be reduced, for example, by storing high-definition images whose overlap rate is less than a second overlap rate, such as 20%.
  • in the embodiment of the present application, panoramic images (or the image blocks into which the panoramic images are divided) under different single viewpoints (or viewpoints) can be stored in the cloud. That is, the multiple high-definition images stored in the cloud are replaced with one panoramic image (or multiple image blocks corresponding to one panoramic image) per viewpoint, which can reduce storage overhead on the cloud.
  • the terminal in the embodiment of the present application may be called a user equipment; the terminal has a camera function and supports high-magnification photographing.
  • when the terminal in the embodiment of the present application uses a high magnification to take a picture, the definition of the captured image is low.
  • a high magnification can be understood as a photographing magnification greater than a preset magnification.
  • the terminal can be a mobile phone, a tablet computer (portable android device, PAD), a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device, or a wearable device.
  • the cloud may be a server, or a server cluster.
  • the server may be a server corresponding to a photographing application program, or a server corresponding to an application program carrying a photographing function, and the embodiment of the present application does not specifically limit the form of the cloud.
  • multiple image blocks corresponding to panoramic images under different viewpoints are stored in the cloud, wherein there is no overlap between the multiple image blocks corresponding to panoramic images under the same viewpoint or the overlapping ratio is less than the third overlapping ratio.
  • the third overlapping ratio may be a smaller value such as 20%, 10%, or the like. It should be understood that the panoramic image is a high-definition image, and multiple image blocks corresponding to the panoramic image are also high-definition image blocks.
  • panoramic images from different viewpoints are stored in the cloud.
  • for a viewpoint, because the cloud does not store multiple high-definition images of different viewing angles under the viewpoint, but stores one panoramic image under the viewpoint, or multiple image blocks corresponding to that panoramic image, the storage overhead in the cloud can be reduced.
  • a viewpoint is taken as an example to introduce the image processing method provided in the embodiments of the present application.
  • the panoramic images under different viewpoints in the embodiment of the present application may be obtained by shooting with a panoramic camera, or stitched from low-overlap high-definition images under different viewpoints.
  • the overlap ratio between the low-overlap high-definition images is smaller than the second overlap ratio.
  • the panoramic images under different viewpoints obtained by splicing low-overlapping high-definition images under different viewpoints are taken as an example for illustration.
  • the process of storing content in the cloud may include the following steps:
  • the cloud uses panoramic image stitching technology to stitch low-overlap high-definition images under the same viewpoint to obtain panoramic images under different viewpoints.
  • low-overlap high-definition images under the same viewpoint can be captured by a camera device capable of capturing high-definition images, such as a single-lens reflex camera.
  • a single-lens reflex camera may be used in advance to capture high-definition images of different viewing angles at the same viewpoint, and then obtain high-definition images of different viewing angles at different viewpoints.
  • the high-definition images with low overlap at each viewpoint may be referred to as pre-collected images of multiple viewing angles at each viewpoint.
  • the overlap rate between high-definition images of adjacent viewing angles collected by the single-lens reflex camera at the same viewpoint may be less than the second overlap rate; alternatively, high-definition images can be selected from those collected by the single-lens reflex camera at the same viewpoint so that the overlap rate between high-definition images of adjacent viewing angles is less than the second overlap rate, so as to obtain the low-overlap high-definition images under the same viewpoint.
  • the purpose of the overlap rate of the low-overlap high-definition image being smaller than the second overlap rate is to reduce the calculation amount of panoramic image stitching in the cloud and improve stitching efficiency.
  • the overlap rate between the high-definition images of adjacent viewing angles is less than a preset overlap rate, and the preset overlap rate is greater than or equal to the second overlap rate and less than the first overlap rate.
  • the cloud can use panoramic image stitching technology to obtain panoramic images at this viewpoint.
  • the cloud can obtain panoramic images from different viewpoints.
  • the panoramic image stitching technology please refer to the relevant description in the definition of terms.
  • S401 is shown as S1 in FIG. 5
  • FIG. 5 is a simplified flow diagram of FIG. 4 .
  • FIG. 6 is a schematic diagram of the cloud acquiring images of different viewing angles under the same viewpoint provided by the embodiment of the present application. Referring to a in FIG. 6 , it is illustrated by taking two low-overlapping high-definition images at a viewpoint as an example. The cloud executes S401 to obtain a panoramic image at the viewpoint, as shown in b in FIG. 6 .
  • the cloud cuts the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint.
  • the cloud can cut the panoramic image under the viewpoint into image blocks with a preset size to obtain multiple image blocks corresponding to the viewpoint.
  • In one embodiment, the size of each image block is the same, such as 800px*900px; that is, each image block has a first preset size, and the first preset size can be understood as having a first preset width and a first preset height. Here, 1px represents one pixel. In another embodiment, the sizes of the image blocks can be different.
  • the overlapping ratio between two adjacent image blocks corresponding to the same viewpoint may be smaller than the third overlapping ratio.
  • each image block may be numbered. Exemplarily, it can be numbered according to the row and column of the cut image block in the panoramic image. For example, if an image block is located in the first row and first column of the panoramic image, the image block can be numbered as row 1 and column 1 .
  • alternatively, the image blocks can be numbered in order from 1 to N; for example, the number of the image block in the first row and first column is 1, the number of the image block in the first row and second column is 2, and N is an integer greater than 1.
  • the numbering of image blocks in rows and columns is used as an example for illustration.
  • the row, column number or "1-N" number of the image block may be referred to as the identifier of the image block.
  • the purpose of cutting the panoramic image under each viewpoint into image blocks is to make it easier for the cloud to load image blocks instead of loading the entire panoramic image, because the loading time of an image block is shorter than that of the entire panoramic image; therefore, the loading speed of the cloud can be improved, and the speed at which the cloud feeds back the high-definition image to the terminal can be improved.
  • for details, refer to the relevant description in FIG. 7.
  • S402 is shown as S2 in FIG. 5 .
  • the cloud executes S402 to cut the panoramic image into 8 image blocks, as shown in c in FIG. 6 .
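  • An illustrative way to cut a panoramic image into blocks of the first preset size and number them by (row, column), consistent with S402, is sketched below; the 800x900 px size is only the example mentioned above, and edge blocks are simply allowed to be smaller.

```python
def cut_into_blocks(panorama, block_w=800, block_h=900):
    """Cut the panoramic image into image blocks of the first preset size,
    keyed by their (row, column) identifier."""
    blocks = {}
    h, w = panorama.shape[:2]
    for row, y in enumerate(range(0, h, block_h), start=1):
        for col, x in enumerate(range(0, w, block_w), start=1):
            blocks[(row, col)] = panorama[y:y + block_h, x:x + block_w]
    return blocks  # identifier (row, column) -> high-definition image block
```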
  • the cloud adopts back-projection transformation to obtain images of multiple viewing angles corresponding to the panoramic image at each viewing point.
  • when the cloud adopts panoramic image stitching technology to obtain the panoramic images under different viewpoints, it can obtain, for each viewpoint, the transformation relationship between the first viewing plane where the low-overlap high-definition images are located and the second viewing plane where the panoramic image is located; then, for each viewpoint, the cloud can project each part of the panoramic image onto the first viewing plane by using the transformation relationship, so as to obtain images of different viewing angles.
  • the cloud can, in order from left to right and from top to bottom in the panoramic image, sequentially project the partial panoramic images in a sliding window of the second preset size onto the first viewing plane, so as to obtain images of multiple viewing angles.
  • the image of each viewing angle is a high-definition image
  • the size of the image of each viewing angle is the same, that is, the second preset size, for example, the image of each viewing angle has a second preset width and a second preset height.
  • the overlap rate between images of adjacent viewing angles corresponding to the panoramic image under the same viewpoint is greater than the first overlap rate; that is to say, each time the cloud slides the sliding window, the overlap rate between the current window position and the previous window position is kept greater than the first overlap rate, so as to obtain images of adjacent viewing angles corresponding to the panoramic image under each viewpoint.
  • it should be understood that when the partial panoramic image in the sliding window is projected onto the first viewing plane, a one-to-one mapping relationship between each pixel of the partial panoramic image and each pixel of the image of the corresponding viewing angle can be obtained, and in this process the cloud can obtain the coordinate position of the center point of the image of the corresponding viewing angle in the panoramic image.
  • the coordinate position of the center point in the panoramic image may be a latitude and longitude coordinate.
  • center point of the image can be understood as: the physical center point of the image.
  • S403 is shown as S3 in FIG. 5 .
  • the cloud executes S403 to perform back-projection transformation on the panoramic image to obtain images of four viewing angles corresponding to the viewpoint, as shown in d in FIG. 6 .
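  • A rough sketch of S403 under stated assumptions follows: a window of the second preset size slides over the panoramic image, each partial panorama is back-projected with the inverse of the stitching homography H (composed with a translation for the window offset), and the window center is recorded as the coordinate position in the panorama. The stride derived from the first overlap rate and the simplified placement of the output are assumptions, not requirements of the embodiment.

```python
import cv2
import numpy as np

def views_from_panorama(panorama, H, view_w, view_h, first_overlap_rate=0.8):
    # Stride chosen so adjacent windows overlap by roughly the first overlap rate (e.g. 80%).
    stride_x = max(int(view_w * (1.0 - first_overlap_rate)), 1)
    stride_y = max(int(view_h * (1.0 - first_overlap_rate)), 1)
    H_inv = np.linalg.inv(H)  # second viewing plane (panorama) -> first viewing plane
    views = []
    h, w = panorama.shape[:2]
    for y in range(0, h - view_h + 1, stride_y):
        for x in range(0, w - view_w + 1, stride_x):
            partial = panorama[y:y + view_h, x:x + view_w]
            # Translation mapping window coordinates back to panorama coordinates,
            # composed with H_inv to land on the first viewing plane.
            T = np.array([[1, 0, x], [0, 1, y], [0, 0, 1]], dtype=np.float64)
            M = H_inv @ T
            # Output-window placement is simplified; only the mapping itself matters here.
            view = cv2.warpPerspective(partial, M, (view_w, view_h))
            center = (x + view_w / 2.0, y + view_h / 2.0)  # center point in panorama coordinates
            views.append({"image": view, "center_point": center})
    return views
```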
  • the cloud establishes a first index relationship between the center point of the image of each angle of view and the image block according to the coordinate position of the center point of the image of each angle of view under each viewpoint in the panoramic image.
  • the first index relationship can be understood as: which image blocks in the panoramic image correspond to the images of each viewing angle, that is, to establish an identification mapping relationship between images of each viewing angle and image blocks.
  • it should be understood that, on the premise that the coordinate position of the center point of the image of each viewing angle in the panoramic image is known, all the image blocks corresponding to the image of each viewing angle can be acquired.
  • exemplarily, the cloud can determine the coordinate positions of the four vertices of the image of each viewing angle in the panoramic image, and then determine the image blocks corresponding to the image of that viewing angle in the panoramic image according to the coordinate positions of the four vertices and the center point of the image of that viewing angle in the panoramic image.
  • the cloud may store the first index relationship.
  • in the first index relationship, the image of each viewing angle can be represented by the coordinate position of its center point in the panoramic image, and an image block can be represented by the number of the image block; that is to say, the first index relationship can include the mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the numbers of the image blocks.
  • the coordinate positions of the center points of the images of the 4 viewing angles in the panoramic image are (longitude 1, latitude 1), (longitude 2, latitude 2), (longitude 3, latitude 3), and (longitude 4, latitude 4).
  • the image blocks corresponding to the image with center point (longitude 1, latitude 1) are (row 1, column 1) and (row 1, column 2); the image blocks corresponding to (longitude 2, latitude 2) are (row 1, column 3) and (row 1, column 4); the image blocks corresponding to (longitude 3, latitude 3) are (row 2, column 1) and (row 2, column 2); and the image blocks corresponding to (longitude 4, latitude 4) are (row 2, column 3) and (row 2, column 4). Accordingly, the first index relationship stored in the cloud can be shown in Table 1:

    Table 1
    Coordinate position of the center point of the image of each viewing angle in the panoramic image | Image blocks
    (longitude 1, latitude 1) | (row 1, column 1), (row 1, column 2)
    (longitude 2, latitude 2) | (row 1, column 3), (row 1, column 4)
    (longitude 3, latitude 3) | (row 2, column 1), (row 2, column 2)
    (longitude 4, latitude 4) | (row 2, column 3), (row 2, column 4)
  • S404 is shown as S4 in FIG. 5 .
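  • The block-coverage computation behind the first index relationship could be sketched as follows; this sketch assumes pixel coordinates for the center points (rather than the longitude/latitude coordinates of the example above) and axis-aligned view images of the second preset size.

```python
import math

def blocks_for_view(center_xy, view_w, view_h, block_w=800, block_h=900):
    """Return the (row, column) identifiers of the image blocks covered by a view
    image whose center point and size in the panorama are known."""
    cx, cy = center_xy
    x0, y0 = cx - view_w / 2.0, cy - view_h / 2.0   # top-left corner of the view in the panorama
    x1, y1 = cx + view_w / 2.0, cy + view_h / 2.0   # bottom-right corner
    first_row, last_row = max(int(y0 // block_h) + 1, 1), max(int(math.ceil(y1 / block_h)), 1)
    first_col, last_col = max(int(x0 // block_w) + 1, 1), max(int(math.ceil(x1 / block_w)), 1)
    return [(r, c) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# Building the first index relationship: center point of each view image -> image blocks it covers.
# first_index = {center: blocks_for_view(center, view_w, view_h) for center in view_centers}
```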
  • the cloud acquires the feature of the image of each viewing angle under each viewing point, so as to establish a second index relationship between the feature of the image of each viewing angle and the coordinate position of the center point of the image of each viewing angle in the panoramic image.
  • the cloud can obtain the features of the images of each viewing angle.
  • the feature of the image of each viewing angle is embodied as a feature vector, for example, the feature vector may be a 2048-dimensional feature vector. That is to say, the cloud can obtain feature vectors of images of each view angle under each view point.
  • the cloud may use a neural network model to extract features of images of each viewing angle.
  • the neural network model may include but not limited to: convolutional neural networks (convolutional neural networks, CNN), recurrent neural networks (recurrent neural network, RNN) and long short-term memory (long short-term memory, LSTM).
  • that is, the cloud can acquire the features of the image of each viewing angle, and then establish, according to those features, the second index relationship between the features of the image of each viewing angle and the coordinate position of the center point of the image of each viewing angle in the panoramic image. The cloud can store the second index relationship; in the second index relationship, the center point of the image of each viewing angle can be represented by its coordinate position in the panoramic image, and the features of the image of each viewing angle can be represented by the feature vector of that image. That is, the second index relationship may include the mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the features of the image of each viewing angle.
  • the cloud can also obtain a third index relationship according to the first index relationship and the second index relationship.
  • the third index relationship is: the mapping relationship between the feature of the image of each viewing angle and the numbers of the image blocks. That is to say, the cloud can merge the first index relationship and the second index relationship based on the coordinate position of the center point of each viewing angle in the panoramic image, mapping the features and the image block numbers that correspond to the same center point coordinate position, to obtain the third index relationship.
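The merge of the two index relationships can be pictured as joining two tables on the center-point coordinate. The sketch below assumes both relationships are plain Python dictionaries keyed by that coordinate; feature vectors are kept in a parallel list because they are not hashable.

```python
# Sketch: merge the second index (center -> feature) and the first index
# (center -> tile numbers) into the third index (feature -> tile numbers).

def build_third_index(second_index, first_index):
    """second_index: {center: feature_vector}; first_index: {center: [tile ids]}."""
    features, tile_ids = [], []
    for center, feature in second_index.items():
        features.append(feature)
        tile_ids.append(first_index[center])
    return features, tile_ids   # position i links feature i to its tile numbers
```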
  • S405 is shown as S5 in FIG. 5 .
  • multiple image blocks under each viewpoint, the first index relationship and the second index relationship may be stored in the cloud.
  • multiple image blocks and the third index relationship under each viewpoint may be stored in the cloud. In this way, compared with the prior art in which high-definition images of different viewing angles are stored in the cloud, storage overhead can be reduced.
  • FIG. 7 is a schematic flow chart of an embodiment of an image processing method provided in an embodiment of the present application. It should be understood that in FIG. 7, the perspective of interaction between the terminal and the cloud is taken as an example for illustration.
  • the image processing method provided by the embodiment of the present application may include:
  • the terminal responds to a photographing instruction to obtain a first image by photographing.
  • the photographing instruction may be an instruction triggered by the user operating the photographing interface displayed on the terminal.
  • the photographing interface includes a photographing control
  • the user's operation of the photographing control may trigger the input of the photographing instruction to the terminal.
  • the photographing instruction can be triggered by the user's voice, for example, the user says "photographing", which can trigger the input of the photographing instruction to the terminal.
  • the user can also trigger the input of a photographing instruction to the terminal in a customized manner or by operating other shortcut keys, and the embodiment of the present application does not limit the manner in which the user triggers the photographing instruction.
  • the terminal may take a photo to obtain the first image in response to the photo-taking instruction.
  • FIG. 8 is a schematic diagram of changes of the photographing interface provided by the embodiment of the present application. Shown in a in FIG. 8 is the photographing interface, which includes a preview frame 81, a photographing control 82 and a magnification adjustment bar 83.
  • the user adjusts the magnification adjustment bar 83 to change the photographing magnification of the terminal.
  • the user adjusts the photographing magnification to 30 as an example for illustration.
  • the terminal sends the first image to the cloud.
  • the cloud acquires features of the first image.
  • the cloud obtains the similarity between the feature of the image of each viewing angle under each viewing point and the feature of the first image.
  • the cloud can obtain the cosine angle or Euclidean distance between the features of the image of each viewing angle under each viewpoint and the features of the first image, so as to obtain the similarity between the features of the image of each viewing angle under each viewpoint and the features of the first image.
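For concreteness, a small sketch of both similarity measures mentioned above (cosine of the angle between feature vectors, and Euclidean distance) is given here; which measure is used is an implementation choice and not fixed by the embodiment.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    """Euclidean distance; a smaller distance means a higher similarity."""
    return float(np.linalg.norm(a - b))

def most_similar(view_features, query_feature):
    """Return the index of the stored view feature most similar to the query."""
    sims = [cosine_similarity(f, query_feature) for f in view_features]
    return int(np.argmax(sims)), max(sims)
```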
  • when the terminal sends the first image to the cloud, it can also upload the location of the terminal; that is, the above S702 can be replaced by: the terminal sends the first image and the location of the terminal to the cloud.
  • S704 may be replaced by: the cloud obtains the similarity between the feature of the image of each viewing angle and the feature of the first image at a viewpoint within a preset range from the position of the terminal.
  • a viewpoint within a preset range from the location of the terminal may be referred to as a target viewpoint.
  • based on the location of the terminal, the cloud can first determine the viewpoints within the preset distance range of the location of the terminal, and these viewpoints can be understood as target viewpoints. Furthermore, the cloud can obtain the similarity between the feature of the image of each viewing angle under the target viewpoints and the feature of the first image, which avoids calculating the similarity of features under all viewpoints and can improve the computing efficiency of the cloud.
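A sketch of selecting target viewpoints within a preset distance of the terminal is shown below. It assumes viewpoints are identified by (latitude, longitude) pairs and uses the haversine formula; both the representation and the distance measure are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def target_viewpoints(viewpoints, terminal_pos, preset_range_m):
    """viewpoints: {viewpoint_id: (lat, lon)}; keep those within the preset range."""
    lat0, lon0 = terminal_pos
    return [vid for vid, (lat, lon) in viewpoints.items()
            if haversine_m(lat0, lon0, lat, lon) <= preset_range_m]
```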
  • the terminal can execute S702-S708.
  • when the terminal sends the first image and the location of the terminal to the cloud, it may also send the first magnification.
  • the cloud executes S703-S708 in response to the first magnification being greater than or equal to the preset magnification; in response to the first magnification being smaller than the preset magnification, the terminal itself can obtain a high-definition first image, so the cloud does not need to execute S703-S708, which saves computing resources in the cloud.
  • since the terminal can obtain high-definition images at a low magnification, there is no need for the terminal to interact with the cloud to obtain high-definition images in that case. Therefore, it is in the scene where the terminal takes pictures at a high magnification that the terminal performs S702.
  • S701 may be replaced by: in response to the photographing instruction, photographing at a first magnification to obtain a first image, where the first magnification is greater than a preset magnification.
  • S702 may be replaced by: the terminal sends the first image and the location of the terminal to the cloud in response to the first magnification being greater than the preset magnification.
  • the cloud determines the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image according to the feature corresponding to the maximum similarity and the second index relationship.
  • the second index relationship is: a mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the feature of the image.
  • the cloud can obtain the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image according to the stored second index relationship and the feature corresponding to the maximum similarity.
  • the cloud determines the identity of the image block mapped by the center point according to the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image and the first index relationship.
  • the first index relationship is: a mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the image block. Therefore, after the cloud obtains the position coordinates, in the panoramic image, of the center point corresponding to the feature with the maximum similarity, it can use those position coordinates and the first index relationship to obtain the identifier of the image block mapped to those position coordinates, that is, the identifier of the image block of the feature map corresponding to the maximum similarity.
  • the identifier of the image block of the feature map corresponding to the maximum similarity may be referred to as the target identifier.
  • for example, for (longitude 1, latitude 1), the numbers of the mapped image blocks that can be obtained are (row 1, column 1) and (row 1, column 2).
  • the third index relationship is: a mapping relationship between image features and image block numbers.
  • the cloud can obtain the identifier of the image block of the feature map corresponding to the maximum similarity according to the third index relationship.
  • S705 and S706 may be replaced by: the cloud determines the identity of the image block of the feature map corresponding to the maximum similarity according to the feature corresponding to the maximum similarity and the third index relationship.
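Combining the similarity search with the third index relationship, the block numbers of the best match can be looked up as in the following sketch, which reuses the parallel-list representation assumed in the earlier merge sketch.

```python
import numpy as np

def lookup_target_tiles(features, tile_ids, query_feature):
    """features / tile_ids: third index as parallel lists; return tiles of best match."""
    feats = np.stack(features)                        # (N, 2048)
    q = query_feature / np.linalg.norm(query_feature)
    sims = feats @ q / np.linalg.norm(feats, axis=1)  # cosine similarities
    best = int(np.argmax(sims))
    return tile_ids[best]                             # e.g. [(1, 1), (1, 2)]
```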
  • the cloud adopts back-projection transformation, and obtains the second image according to the image block mapped by the center point.
  • the image block mapped by the center point is the image block corresponding to the target identifier.
  • the cloud stores multiple image blocks corresponding to the panorama image at each viewpoint. After the target identifier of the image block mapped by the center point is determined in the cloud, the image blocks corresponding to the target identifier can be spliced, and then back-projection transformation is used to obtain the second image.
  • the definition of the second image is higher than that of the first image. In an embodiment, the resolution of the second image is greater than the preset resolution.
  • the cloud can map the spliced image blocks mapped by the center point to the first viewing plane according to the transformation relationship between the first viewing plane and the second viewing plane, so as to obtain the second image; the first viewing plane is the viewing plane of the image captured by the terminal.
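One way to realize the mapping from the panorama's viewing plane back to the terminal's viewing plane is a perspective (homography) warp, sketched below with OpenCV. The homography and the output size are assumed to be known from the earlier projection step; this is an illustrative realization, not the prescribed implementation.

```python
import cv2
import numpy as np

def back_project(stitched_tiles, h_pano_to_view, view_size):
    """Warp the stitched tile region from the panorama plane (second viewing plane)
    to the terminal's viewing plane (first viewing plane).
    h_pano_to_view: 3x3 transform; view_size: (width, height) of the second image."""
    return cv2.warpPerspective(stitched_tiles, h_pano_to_view, view_size)

# Hypothetical usage, assuming h_view_to_pano was recorded during stitching:
# second_image = back_project(stitched, np.linalg.inv(h_view_to_pano), (1920, 1080))
```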
  • the cloud may stitch the image blocks mapped with the center point according to the numbers of the image blocks mapped with the center point.
  • the cloud can splice in the order of rows and columns, splicing the image blocks numbered (row 1, column 1) and (row 1, column 2) to obtain the second image.
  • the cloud may cover the overlapping area in the image block (row 1, column 1) with the overlapping area in the image block (row 1, column 2), so that the image blocks numbered (row 1, column 1) and (row 1, column 2) are spliced.
  • the cloud can determine the overlapping area according to the similarity of the pixels in the image block of (row 1, column 1) and the image block of (row 1, column 2); for example, an area with a pixel similarity of 100% is used as the overlapping area.
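The row-and-column splicing with overlap covering described above can be sketched as follows. The overlap width is assumed to be known (for example, detected as the region where the pixels match exactly), and later tiles simply overwrite the overlapping strip of earlier ones.

```python
import numpy as np

def splice_row(tiles, overlap_w):
    """tiles: list of HxWx3 arrays ordered by column; later tiles cover the overlap."""
    h, w = tiles[0].shape[:2]
    step = w - overlap_w
    canvas = np.zeros((h, step * (len(tiles) - 1) + w, 3), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        canvas[:, i * step : i * step + w] = tile   # overwrite = "cover" the overlap
    return canvas
```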
  • the cloud sends the second image to the terminal.
  • the terminal receives the second image from the cloud.
  • the terminal displays the second image in response to the image display instruction.
  • the terminal may display the second image based on the user's operation. Alternatively, the terminal may display the second image after receiving the second image from the cloud.
  • the image display instruction may be an instruction triggered by the user operating the camera interface.
  • the camera interface includes an image display control, and the user's operation of the image display control can trigger the input of an image display instruction to the terminal.
  • the image display instruction may also be triggered by the user's voice, and the embodiment of the present application does not limit the way the terminal receives the image display instruction.
  • the terminal and the cloud interact to execute S701-S708, and after the terminal receives the second image from the cloud, the second image can be stored in the local image database (such as a photo album).
  • the camera interface includes an image display control 84; when the user clicks the image display control 84, the terminal can display the high-definition second image, as shown in c in FIG. 8.
  • the difference from b in Figure 1 above is that when the terminal uses the first magnification (the first magnification is a high magnification) to take pictures, the image obtained has high definition; for example, the user can clearly see the text on the computer screen in the second image.
  • steps S701-S709 shown in FIG. 7 can be simplified as shown in FIG. 9 .
  • when the terminal uses a high magnification to take pictures, it can send the captured first image to the cloud; the cloud determines the image corresponding to the maximum similarity based on the features of the first image and the stored features of the images of multiple viewing angles under each viewpoint, then obtains the image blocks corresponding to the first image based on the first index relationship and the second index relationship, and then splices and back-projects the image blocks to obtain the second image with high definition, so that the terminal can display the high-definition second image. This achieves the purpose that the terminal can obtain a high-definition image when taking pictures at a high magnification.
  • the cloud stores multiple image blocks under each viewpoint, the first index relationship and the second index relationship, or the cloud stores multiple image blocks under each viewpoint and the third index relationship.
  • compared with the manner in which the cloud stores high-definition images of different viewing angles under various viewpoints, the storage cost of the cloud can be reduced.
  • panoramic images at various viewpoints may be stored in the cloud.
  • the panoramic images at various viewpoints stored in the cloud may be shown in FIG. 10 . It should be understood that in FIG. 10 , the panoramic images include different shapes (such as black rectangles, black triangles, etc.) to represent the panoramic images at different viewpoints.
  • the cloud can determine the image block corresponding to the first image (that is, the number of the image block of the feature map corresponding to the maximum similarity). Then, according to the number of the image block corresponding to the first image, the cloud can cut out the image block corresponding to the number from the panoramic image under the viewpoint, and perform back projection transformation to obtain the second image. For example, the cloud may first load the panoramic image under the viewpoint, and then cut the image block corresponding to the number of the image block corresponding to the first image in the panoramic image, and perform projection transformation to obtain the second image.
  • according to the image block numbers and the first preset size of the image blocks, the cloud can cut out the image blocks numbered (row 1, column 1) and (row 1, column 2) from the panoramic image, and perform back-projection transformation to obtain the second image.
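When only the panorama is stored, the numbered blocks can be cut out of it on demand, as in this sketch. It assumes the blocks form a regular grid of the first preset size and that block numbers start at (row 1, column 1); both are illustrative assumptions.

```python
def cut_block(panorama, row, col, block_h, block_w):
    """Cut the block numbered (row, col) (1-based) out of the panorama array."""
    top, left = (row - 1) * block_h, (col - 1) * block_w
    return panorama[top : top + block_h, left : left + block_w]

# Hypothetical usage for blocks (row 1, column 1) and (row 1, column 2):
# blocks = [cut_block(pano, 1, 1, 512, 512), cut_block(pano, 1, 2, 512, 512)]
```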
  • in the embodiment shown in FIG. 7, because the image blocks corresponding to the panoramic image are stored in the cloud, when the cloud determines the numbers of the image blocks corresponding to the first image, the corresponding image blocks can be directly loaded, spliced, back-projected, and so on. In this embodiment, because the cloud stores the panoramic image itself, after the cloud determines the number of the image block corresponding to the first image, it needs to first load the panoramic image to which that number belongs, and then cut out the corresponding image block from the panoramic image.
  • the speed of loading image blocks in the cloud is much faster than the speed of loading the entire panoramic image. Therefore, in the embodiment shown in FIG. 7 , the loading efficiency of the cloud is high, and the second image can be fed back to the terminal faster.
  • the panoramic images under each viewpoint can be stored in the cloud, and after the number of the image block corresponding to the first image is obtained in the cloud, the corresponding image block can be cut in the panoramic image to which the number belongs, and The projective transformation results in the second image.
  • the image processing method provided by the embodiment of the present application can also achieve the purpose of obtaining high-definition images when the terminal uses high-magnification photography.
  • however, because the cloud still needs to load the entire panoramic image and then cut the image blocks from it, the loading time is long, the loading efficiency is low, and the efficiency of feeding back the second image to the terminal is relatively low.
  • the terminal may store multiple image blocks under each viewpoint, the first index relationship and the second index relationship; or the terminal may store multiple image blocks under each viewpoint and the third index relationship; or the terminal may store panoramic images under each viewpoint. When the terminal uses high-magnification photographing to obtain the first image, the terminal can execute S703-S707 to obtain the high-definition second image, and then the terminal can display the second image in response to the image display instruction.
  • the interaction between the terminal and the cloud is taken as an example to illustrate the scene where the cloud processes the image from the terminal, and the scene where the terminal processes the captured image.
  • the electronic device may be a cloud, a terminal, or other devices with processing capabilities.
  • the image processing method provided by the embodiment of the present application may also include:
  • the way for the cloud to acquire the first image to be processed may be: the terminal sends the first image to the cloud after taking the first image, and reference may be made to related descriptions in S701-S702.
  • the user may also upload the first image to be processed to the cloud, or the first image is an image locally stored in the cloud.
  • the terminal may capture the first image, or the first image may be used as an image locally stored in the terminal.
  • the device may capture the first image, or the user may upload the first image to the device, or the first image may be an image stored locally on the device, or the first image may be an image transmitted from another electronic device.
  • the mapping relationship may be a third index relationship.
  • the third index relationship is: a mapping relationship between image features and image block numbers.
  • after the cloud obtains the feature corresponding to the maximum similarity, it can obtain the identifier of the image block of the feature map corresponding to the maximum similarity according to the third index relationship.
  • the mapping relationship may include a first index relationship and a second index relationship.
  • the first index relationship is: the mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the image block; the second index relationship is: the mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the image features. After the electronic device obtains the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image (or the similarity between the feature of the image of each viewing angle under the viewpoints within the preset distance range of the position of the terminal and the feature of the first image), the maximum similarity can be determined, and then the feature corresponding to the maximum similarity can be determined.
  • the electronic device can obtain the position coordinates, in the panoramic image, of the center point of the feature map corresponding to the maximum similarity according to the stored second index relationship and the feature corresponding to the maximum similarity, and then obtain, according to those position coordinates and the first index relationship, the identifier of the image block mapped to the position coordinates of the center point in the panoramic image.
  • the identifier of the image block mapped to the position coordinates of the central point in the panoramic image is the target identifier.
  • the electronic device may stitch the image blocks corresponding to the target identifier to acquire the second image.
  • the electronic device may process the image block corresponding to the target identifier in the manner in S707, so as to acquire the second image.
  • after the electronic device obtains the second image, because the second image is acquired based on the corresponding image blocks under the viewpoint, the definition of the second image is higher than that of the first image, so the electronic device realizes processing of the first image to obtain an image with higher definition.
  • the electronic device may store the second image, or transmit the second image to other electronic devices.
  • This embodiment of the present application does not limit the post-processing of the second image.
  • the cloud can send the second image to the terminal for display and storage.
  • the electronic device stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint; the images of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic images under each viewpoint are obtained based on the panoramic images under each viewpoint, and the panoramic images under each viewpoint are high-definition images. In this way, the storage overhead can be reduced.
  • the electronic device can also process the first image with low definition to obtain the second image with higher definition.
  • FIG. 12 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • the image processing apparatus may be the cloud, terminal, or electronic device in the above embodiments, or a chip in the cloud, or a chip in the terminal, or a chip in the electronic device, and is used to implement the image processing method provided in the embodiment of the present application.
  • the electronic device stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint; the images of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic images under each viewpoint are obtained based on the panoramic images under each viewpoint.
  • an image processing device 1200 includes: a processing module 1201, a storage module 1202, and a transceiver module 1203.
  • the processing module 1201 is configured to obtain the first image to be processed, obtain the similarity between the features of the image of each viewing angle under the various viewpoints and the features of the first image, determine, according to the features of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship, the target identifier of the feature map of the image corresponding to the maximum similarity, and obtain a second image according to the image block corresponding to the target identifier, where the definition of the second image is higher than the definition of the first image.
  • the processing module 1201 is specifically configured to acquire the first image and a location where the first image is taken, determine a target viewpoint within a preset range from the location, and acquire the similarity between the feature of the image of each viewing angle under the target viewpoint and the feature of the first image.
  • the mapping relationship includes a first index relationship and a second index relationship; the second index relationship is: the mapping relationship between the feature of the image of each viewing angle under each viewpoint and the center point of the image of each viewing angle; the first index relationship is: the mapping relationship between the center point of the image of each viewing angle under each viewpoint and the identifier of the image block corresponding to the panoramic image under each viewpoint.
  • the processing module 1201 is specifically configured to determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the feature of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, and to determine the target identifier according to the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity and the first index relationship.
  • the processing module 1201 is further configured to acquire, according to the panoramic images under each viewpoint, the features of the image of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
  • the storage module 1202 is configured to store the feature of the image of each viewing angle under each viewing point, the first index relationship, and the second index relationship.
  • the processing module 1201 is specifically configured to apply back-projection transformation to the panoramic images under each viewpoint to obtain images of multiple viewing angles under each viewpoint and the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint, where the overlapping ratio between images of adjacent viewing angles under each viewpoint is greater than a preset overlapping ratio; and to extract the features of the image of each viewing angle under each viewpoint.
  • the processing module 1201 is specifically configured to use a sliding window with a second preset size to slide in the panoramic image under each viewpoint, and use back-projection transformation to sequentially obtain the image of the viewing angle corresponding to the partial panoramic image in the sliding window and the coordinate position, in the corresponding panoramic image, of the center point of the image of that viewing angle, where the image of each viewing angle under each viewpoint has the second preset size.
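The sliding-window extraction of view images from a panorama might look like the sketch below. The window size, the stride that produces the required overlap, and the omission of the per-window back-projection are all assumptions made for illustration.

```python
def sliding_views(panorama, win_h, win_w, stride):
    """Yield (center_xy, window) pairs for each sliding-window position.
    In a full implementation each window would additionally be back-projected
    from the panorama plane to a perspective view (omitted in this sketch)."""
    pano_h, pano_w = panorama.shape[:2]
    for top in range(0, pano_h - win_h + 1, stride):
        for left in range(0, pano_w - win_w + 1, stride):
            center = (left + win_w // 2, top + win_h // 2)
            yield center, panorama[top : top + win_h, left : left + win_w]
```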
  • the processing module 1201 is specifically configured to construct the second index relationship according to the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint, and the features of the image of each viewing angle under each viewpoint.
  • the processing module 1201 is specifically configured to cut the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint, and to construct the first index relationship according to the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint.
  • the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
  • the processing module 1201 is further configured to use panoramic image stitching technology to obtain the panoramic images under each viewpoint according to the pre-collected images of multiple viewing angles under each viewpoint, where the overlapping ratio between the pre-collected images of adjacent viewing angles under each viewpoint is smaller than the preset overlapping ratio.
  • the processing module 1201 is specifically configured to project the pre-acquired image of each viewing angle under each viewpoint in the first viewing plane to the second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint, and the transformation relationship between the first viewing plane and the second viewing plane.
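A common way to obtain both the projection onto the panorama plane and the transformation between the two viewing planes is to estimate a homography from matched local features. The OpenCV-based sketch below is one possible realization under that assumption, not the method prescribed by the embodiment.

```python
import cv2
import numpy as np

def view_to_panorama_transform(view_img, pano_img):
    """Estimate the 3x3 homography mapping the view's plane (first viewing plane)
    onto the panorama's plane (second viewing plane) from ORB feature matches."""
    g1 = cv2.cvtColor(view_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(pano_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    h, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return h   # its inverse can later back-project panorama blocks to the view plane
```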
  • the processing module 1201 is specifically configured to, according to the transformation relationship, use back projection transformation to project the image block corresponding to the target identifier to the first viewing plane to obtain the second image .
  • the electronic device is a cloud
  • the transceiver module 1203 is configured to receive the first image from the terminal and the location of the terminal when the terminal captures the first image, and to send the second image to the terminal.
  • the image processing device provided in the embodiment of the present application is used to execute the image processing method in the above embodiment, and has the same implementation principle and technical effect as the above embodiment.
  • the embodiment of the present application also provides an electronic device.
  • the electronic device may be the cloud, the terminal or the electronic device described in FIG. , and includes: a processor (e.g. CPU) 1301 and a memory 1302.
  • the memory 1302 may include a high-speed random-access memory (random-access memory, RAM), and may also include a non-volatile memory (non-volatile memory, NVM), such as at least one disk memory, and various instructions may be stored in the memory 1302 , so as to complete various processing functions and realize the method steps of the present application.
  • the electronic device may include a screen 1303 for displaying an interface and images of the electronic device.
  • the electronic device involved in this application may further include: a power supply 1304 , a communication bus 1305 and a communication port 1306 .
  • the communication port 1306 is used to realize connection and communication between the electronic device and other peripheral devices.
  • the memory 1302 is used to store computer-executable program codes, and the program codes include instructions; when the processor executes the instructions, the instructions cause the processor of the electronic device to perform the actions in the above-mentioned method embodiments, and its implementation principles and The technical effects are similar, and will not be repeated here.
  • modules or components described in the above embodiments may be one or more integrated circuits configured to implement the above method, for example: one or more application specific integrated circuits (ASIC), or one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA), etc.
  • the processing element can be a general-purpose processor, such as a central processing unit (CPU), or another processor that can call program code, such as a controller.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • a computer program product includes one or more computer instructions.
  • computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from one website, computer, server or data center to another website, computer, server or data center by wired (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g. infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • Available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)).
  • the sequence numbers of the above-mentioned processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Provided are an image processing method and apparatus, and an electronic device. In the method, an electronic device stores image blocks corresponding to a panoramic image at various viewpoints, and a mapping relationship between features of an image of each angle of view at the viewpoints and identifiers of the image blocks corresponding to the panoramic image at the viewpoints. The image of each angle of view at the viewpoints and the image blocks corresponding to the panoramic image at the viewpoints are all obtained on the basis of the panoramic image at the viewpoints, wherein the panoramic image at the viewpoints is a high-definition image. Compared with the method for storing high-definition images of different angles of view at various viewpoints in the prior art, the method in the present application can reduce storage overheads. On this basis, image processing can also be realized, so as to obtain an image with higher definition.

Description

图像处理方法、装置和电子设备Image processing method, device and electronic equipment
本申请要求于2022年01月28日提交中国专利局、申请号为202210109463.6、申请名称为“图像处理方法、装置和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application with the application number 202210109463.6 and the application title "Image Processing Method, Device and Electronic Equipment" filed with the China Patent Office on January 28, 2022, the entire contents of which are incorporated by reference in this application middle.
技术领域technical field
本申请实施例涉及图像处理领域,尤其涉及一种图像处理方法、装置和电子设备。The embodiments of the present application relate to the field of image processing, and in particular, to an image processing method, device, and electronic equipment.
背景技术Background technique
随着终端的发展,目前很多终端可以支持高倍率拍照,倍率如30倍、50倍等。但是因为一些终端本身结构的限制,如手机需要机身比较薄,因此终端采用高倍率拍照时,拍摄的图像的清晰度低、细节模糊。With the development of terminals, many terminals can support high-magnification photography at present, such as 30 times and 50 times. However, due to the limitations of the structure of some terminals, such as mobile phones, which require a relatively thin body, when the terminal uses a high magnification to take pictures, the captured images have low definition and fuzzy details.
为了提高终端使用高倍率拍照的图像的清晰度,可以预先存储大量的不同拍摄位置,以及同一拍摄位置不同拍摄视角的高清图像。终端在使用高倍率拍照后,可以在预先存储的高清图像中,获取与终端拍摄的图像相似度最大的高清图像,进而使得终端可以显示高清图像。目前这种方式需要存储大量的高清图像,存储开销大。In order to improve the clarity of images taken by the terminal at a high magnification, a large number of high-definition images of different shooting positions and different shooting angles at the same shooting position may be stored in advance. After taking pictures with a high magnification, the terminal can obtain the high-definition image with the greatest similarity to the image taken by the terminal among the pre-stored high-definition images, so that the terminal can display the high-definition image. At present, this method needs to store a large number of high-definition images, and the storage overhead is large.
发明内容Contents of the invention
本申请实施例提供一种图像处理方法、装置和电子设备,可以降低存储开销。Embodiments of the present application provide an image processing method, device, and electronic device, which can reduce storage overhead.
第一方面,本申请实施例提供一种图像处理方法,执行该方法的执行主体可以为电子设备或电子设备中的芯片,下述以云电子设备为例进行说明。电子设备中存储有各视点下全景图像对应的图像块,以及所述各视点下每个视角的图像的特征和所述各视点下全景图像对应的图像块的标识的映射关系,所述各视点下每个视角的图像和所述各视点下全景图像对应的图像块均基于所述各视点下全景图像得到,所述各视点下全景图像为高清图像,如是基于单反相机等可以获取清晰度高的图像的设备拍摄的图像得到的。In the first aspect, the embodiment of the present application provides an image processing method, and the execution subject of the method may be an electronic device or a chip in the electronic device, and the cloud electronic device is taken as an example for description below. The electronic device stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewpoint under the viewpoints and the identifiers of the image blocks corresponding to the panoramic images under the viewpoints, and the viewpoints The images of each angle of view and the image blocks corresponding to the panoramic images at each viewpoint are all obtained based on the panoramic images at each viewpoint, and the panoramic images at each viewpoint are high-definition images. If it is based on a single-lens reflex camera, etc., it can obtain high-definition images. The image is obtained from the image captured by the device.
该方法中,电子设备可以获取待处理的第一图像,且可以提取第一图像中的特征,进而获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度,其中,相似度最大的视角的图像与第一图像的特征的相似度最高。电子设备可以根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识。In this method, the electronic device can obtain the first image to be processed, and can extract the features in the first image, and then obtain the similarity between the features of the image of each viewing angle under each viewpoint and the feature of the first image , where the image with the largest similarity has the highest similarity with the feature of the first image. The electronic device may determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the feature of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship.
因为各视点下全景图像对应的图像块是基于所述各视点下全景图像得到,因此在得到最大相似度对应的视角的图像的特征映射的目标标识后,可以根据所述目标标识对应的图像块,获取第二图像,所述第二图像的清晰度高于所述第一图像的清晰度。第二图像依据目标标识对应的图像块所属的全景图像中的图像块得到。Because the image blocks corresponding to the panoramic images under each viewpoint are obtained based on the panoramic images under each viewpoint, after obtaining the target identification of the feature map of the image of the viewing angle corresponding to the maximum similarity, the corresponding image block can be identified according to the target , acquiring a second image, where the definition of the second image is higher than the definition of the first image. The second image is obtained according to the image block in the panoramic image to which the image block corresponding to the target identifier belongs.
其中,因为所述各视点下全景图像的清晰度大于或等于预设清晰度,即所述各视点下全景图像的清晰度高于所述第一图像的清晰度,各视点下全景图像对应的图像块基于所述各视点下全景图像得到,因此基于目标标识对应的图像块获取的第二图像的清晰度大于或等于预 设清晰度。因此,采用该方法终端可以得到高清晰度的第二图像,且因为电子设备中存储的为各视点下全景图像对应的图像块,以及各视点下每个视角的图像的特征和各视点下全景图像对应的图像块的标识的映射关系,相较于现有技术中存储各视点下不同视角的高清图像的方式,可以减小存储开销。Wherein, because the resolution of the panoramic images at each viewpoint is greater than or equal to the preset resolution, that is, the resolution of the panoramic images at each viewpoint is higher than that of the first image, the corresponding panoramic images at each viewpoint The image blocks are obtained based on the panoramic images under the various viewpoints, so the definition of the second image obtained based on the image blocks corresponding to the target identifier is greater than or equal to the preset definition. Therefore, using this method, the terminal can obtain a high-definition second image, and because the electronic device stores the image blocks corresponding to the panoramic images at each viewpoint, as well as the features of the images at each viewpoint at each viewpoint and the panoramic images at each viewpoint The mapping relationship of the identifiers of the image blocks corresponding to the image can reduce the storage overhead compared with the way of storing high-definition images of different viewing angles under each viewpoint in the prior art.
在一种可能的实现方式中,为了减小电子设备相似度的计算量,电子设备在获取第一图像时,还可以获取拍摄第一图像的设备的位置(即视点)。如此,电子设备可以确定距离该位置预设范围内的目标视点,进而获取所述目标视点下每个视角的图像的特征和所述第一图像的特征的相似度。这样可以减少终端计算相似度的计算量,只需计算距离所述终端的位置预设范围内的目标视点下每个视角的图像的特征和所述第一图像的特征的相似度,而无需获取各视点下每个视角的图像的特征和所述第一图像的特征的相似度。In a possible implementation manner, in order to reduce the amount of calculation of the similarity of the electronic device, when the electronic device acquires the first image, it may also acquire the position (ie, viewpoint) of the device that captures the first image. In this way, the electronic device may determine a target viewpoint within a preset range from the location, and then acquire the similarity between the feature of the image at each viewing angle under the target viewpoint and the feature of the first image. This can reduce the calculation amount of the terminal to calculate the similarity, and only need to calculate the similarity between the feature of the image of each viewing angle and the feature of the first image at the target viewpoint within the preset range from the position of the terminal, without obtaining A degree of similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image.
在一种可能的实现方式中,所述映射关系包括第一索引关系和第二索引关系,所述第二索引关系为:所述各视点下每个视角的图像的特征和所述各视点下每个视角的图像的中心点的映射关系,所述第一索引关系为:所述各视点下每个视角的图像的中心点与所述各视点下全景图像对应的图像块的标识的映射关系。其中,电子设备在获取最大相似度对应的视角的图像的特征后,可以根据所述最大相似度对应的视角的图像的特征,以及所述第二索引关系,确定所述最大相似度对应的视角的图像的特征映射的中心点,进而根据所述最大相似度对应的视角的图像的特征映射的中心点,以及所述第一索引关系,确定所述目标标识。In a possible implementation manner, the mapping relationship includes a first index relationship and a second index relationship, and the second index relationship is: the feature of the image of each viewing angle under each viewpoint and the The mapping relationship of the central point of the image of each viewing angle, the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point . Wherein, after the electronic device acquires the features of the image of the viewing angle corresponding to the maximum similarity, it may determine the viewing angle corresponding to the maximum similarity according to the features of the image of the viewing angle corresponding to the maximum similarity and the second index relationship. The center point of the feature map of the image of the image, and then according to the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity, and the first index relationship, the target identifier is determined.
在一种实施例中,电子设备中存储的各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系可以为工作人员预先设置在电子设备中的,或者电子设备自己获取的。In one embodiment, the features of the images of each viewpoint under each viewpoint, the first index relationship, and the second index relationship stored in the electronic device may be preset in the electronic device by the staff, or acquired by the electronic device itself.
其中,在一种实施例中,电子设备可以根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系,进而存储所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。Wherein, in one embodiment, the electronic device may acquire, according to the panoramic images under the various viewpoints, the features of the images of each viewpoint under the viewpoints, the first index relationship, and the second index relationship, Furthermore, image features of each viewing angle under each viewing point, the first index relationship, and the second index relationship are stored.
电子设备可以对所述各视点下全景图像采用反投影变换,得到所述各视点下多个视角的图像(高重叠图像),以及所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下相邻视角的图像之间的重叠率大于预设重叠率,进而提取所述各视点下每个视角的图像的特征。The electronic device may use back-projection transformation on the panoramic images under each viewpoint to obtain images of multiple viewing angles (highly overlapping images) under each viewpoint, and the center point of the image of each viewing angle under each viewpoint is in the corresponding The coordinate position of the panoramic image, the overlap ratio between the images of the adjacent viewing angles under each viewpoint is greater than the preset overlapping ratio, and then the features of the images of each viewing angle under the various viewpoints are extracted.
其中,电子设备可以直接对所述各视点下全景图像采用反投影变换。Wherein, the electronic device may directly perform back-projection transformation on the panoramic images at each viewpoint.
或者,在一种实施例中,电子设备可以根据低重叠图像获取各视点下全景图像,进而对所述各视点下全景图像采用反投影变换。在该实施例中,电子设备可以采用全景图像拼接技术,根据所述各视点下多个视角的预先采集的图像,获取所述各视点下全景图像,所述各视点下相邻视角的预先采集的图像之间的重叠率小于所述预设重叠率,即低重叠图像。Alternatively, in an embodiment, the electronic device may acquire panoramic images at various viewpoints according to the low-overlapping images, and then perform back-projection transformation on the panoramic images at various viewpoints. In this embodiment, the electronic device may use the panorama image stitching technology to obtain the panoramic images under each viewpoint according to the pre-collected images of multiple viewpoints under each viewpoint, and the pre-collected images of adjacent viewpoints under each viewpoint The overlapping ratio between the images is less than the preset overlapping ratio, that is, low overlapping images.
具体的,电子设备可以在所述各视点下全景图像中,采用具有第二预设尺寸的滑动窗口在全景图像中进行滑动,且采用反投影变换,依次得到所述滑动窗口内的部分全景图像对应的视角的图像和所述部分全景图像对应的视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下每个视角的图像具有所述第二预设尺寸。Specifically, the electronic device may use a sliding window with a second preset size to slide in the panoramic image in the panoramic image at each viewpoint, and use back-projection transformation to sequentially obtain partial panoramic images in the sliding window The center point of the image of the corresponding viewing angle and the image of the corresponding viewing angle of the partial panoramic image is at the coordinate position of the corresponding panoramic image, and the image of each viewing angle under each viewing point has the second preset size.
其中,电子设备可以根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下每个视角的图像的特征,构建所述第二索引关系。以及,Wherein, the electronic device may construct the second index relationship according to the coordinate position of the center point of the image of each angle of view under each viewpoint in the corresponding panoramic image, and the characteristics of the image of each angle of view under each viewpoint. as well as,
电子设备可以将所述各视点下全景图像进行切割,得到所述各视点下全景图像对应的图像块,且根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下全景图像对应的图像块,构建所述第一索引关系。在一种可能的实现方式中,所 述各视点下全景图像对应的图像块具有第一预设尺寸。The electronic device may cut the panoramic image under each viewpoint to obtain image blocks corresponding to the panoramic image under each viewpoint, and according to the coordinate position of the center point of the image of each angle of view under each viewpoint is in the corresponding panoramic image , and the image blocks corresponding to the panoramic images under the various viewpoints, construct the first index relationship. In a possible implementation manner, the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
在上述示例中,电子设备采用全景图像拼接技术,根据所述各视点下多个视角的预先采集的图像,获取所述各视点下全景图像,即将处于第一视平面中的所述各视点下每个视角的预先采集的图像投影至全景图像所属的第二视平面,以得到所述各视点下全景图像。在该投影过程中,电子设备还可以得到所述第一视平面和所述第二视平面的变换关系。In the above example, the electronic device adopts the panorama image stitching technology, and according to the pre-collected images of multiple viewing angles under each viewpoint, obtains the panoramic images under each viewpoint, that is, the panoramic images under each viewpoint in the first viewing plane. The pre-collected image of each viewing angle is projected onto the second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image at each viewing point. During the projection process, the electronic device may also obtain the transformation relationship between the first viewing plane and the second viewing plane.
在一种可能的实现方式中,电子设备在得到目标标识后,可以根据所述变换关系,采用反投影变换将所述目标标识对应的图像块投影至所述第一视平面,得到所述第二图像。In a possible implementation manner, after obtaining the target identifier, the electronic device may use back projection transformation to project the image block corresponding to the target identifier to the first viewing plane according to the transformation relationship, to obtain the second Two images.
如此,电子设备可以得到与终端拍摄的图像处于同一视平面的第二图像,用户观看时没有视平面的差别,对于用户来说第一图像至第二图像的转换用户无感知,可以提高用户体验。In this way, the electronic device can obtain the second image on the same viewing plane as the image captured by the terminal, and there is no difference in the viewing plane when the user watches it. For the user, the conversion from the first image to the second image is not perceived by the user, which can improve user experience .
在一种可能的场景中,电子设备可以为云端,终端可以拍摄得到第一图像,但第一图像的清晰度不高,因此在该场景中,终端可以向云端发送第一图像和拍摄第一图像时终端的位置,如此,云端可以获取第一图像以及拍摄第一图像的位置,进而采用如上可能的实现方式中所述的方法,获取第二图像。云端在得到第二图像后,可以向所述终端发送所述第二图像,以使所述终端可以显示和存储第二图像,这样用户可以在终端上看到高清晰度的第二图像,可以提高用户体验。In a possible scenario, the electronic device can be the cloud, and the terminal can capture the first image, but the definition of the first image is not high. Therefore, in this scenario, the terminal can send the first image to the cloud and capture the first image. The image is the location of the terminal. In this way, the cloud can obtain the first image and the location where the first image was shot, and then use the method described in the above possible implementation manners to obtain the second image. After the cloud obtains the second image, it can send the second image to the terminal, so that the terminal can display and store the second image, so that the user can see the high-definition second image on the terminal, and can Improve user experience.
第二方面,本申请实施例提供一种图像处理方法,执行该方法的执行主体可以为终端或终端中的芯片,下述以终端为例进行说明。In a second aspect, the embodiment of the present application provides an image processing method, and the executing subject for executing the method may be a terminal or a chip in the terminal, and the following uses a terminal as an example for description.
终端使用第一倍率拍摄的第一图像,所述第一倍率大于或等于预设倍率,且终端向电子设备发送第一图像。终端接收来自电子设备的第二图像,且响应于图像显示指令,可以显示第二图像,所述第二图像的清晰度高于所述第一图像的清晰度。The terminal uses the first image captured by the first magnification, the first magnification is greater than or equal to the preset magnification, and the terminal sends the first image to the electronic device. The terminal receives the second image from the electronic device, and may display the second image in response to the image display instruction, and the definition of the second image is higher than that of the first image.
在一种可能的实现方式中,终端中存储有各视点下全景图像对应的图像块,以及所述各视点下每个视角的图像的特征和所述各视点下全景图像对应的图像块的标识的映射关系,所述各视点下每个视角的图像和所述各视点下全景图像对应的图像块均基于所述各视点下全景图像得到,所述各视点下全景图像的清晰度大于或等于预设清晰度。In a possible implementation manner, the terminal stores the image blocks corresponding to the panoramic images under each viewpoint, and the features of the images of each viewpoint under the viewpoints and the identification of the image blocks corresponding to the panoramic images under each viewpoint The mapping relationship of each angle of view under each viewpoint and the image block corresponding to the panoramic image under each viewpoint are obtained based on the panoramic image under each viewpoint, and the definition of the panoramic image under each viewpoint is greater than or equal to Preset sharpness.
终端响应于使用第一倍率拍摄的第一图像,可以获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度,以及根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识。终端根据所述目标标识对应的图像块,获取第二图像,所述第二图像的清晰度高于所述第一图像的清晰度,进而响应于图像显示指令,可以显示第二图像。In response to the first image taken with the first magnification, the terminal may acquire the similarity between the features of the image of each viewing angle under the various viewpoints and the feature of the first image, and the The feature, and the mapping relationship, determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity. The terminal acquires a second image according to the image block corresponding to the target identifier, and the definition of the second image is higher than that of the first image, and then can display the second image in response to an image display instruction.
在一种可能的实现方式中,所述获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度,包括:确定距离所述终端的位置预设范围内的目标视点,以及获取所述目标视点下每个视角的图像的特征和所述第一图像的特征的相似度。In a possible implementation manner, the acquiring the similarity between the feature of the image of each viewing angle under each viewing point and the feature of the first image includes: determining a distance within a preset range from the terminal A target viewpoint, and obtaining a similarity between the features of the image of each viewing angle under the target viewpoint and the features of the first image.
在一种可能的实现方式中,所述映射关系包括第一索引关系和第二索引关系,所述第二索引关系为:所述各视点下每个视角的图像的特征和所述各视点下每个视角的图像的中心点的映射关系,所述第一索引关系为:所述各视点下每个视角的图像的中心点与所述各视点下全景图像对应的图像块的标识的映射关系。In a possible implementation manner, the mapping relationship includes a first index relationship and a second index relationship, and the second index relationship is: the feature of the image of each viewing angle under each viewpoint and the The mapping relationship of the central point of the image of each viewing angle, the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
所述根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识,包括:根据所述最大相似度对应的视角的图像的特征,以及所述第二索引关系,确定所述最大相似度对应的视角的图像的特征映射的中心点;根据所述最大相似度对应的视角的图像的特征映射的中心点,以及所述第一索引关系,确定所述目标标识。According to the feature of the image of the angle of view corresponding to the maximum similarity and the mapping relationship, determining the target identifier of the feature map of the image of the angle of view corresponding to the maximum similarity includes: according to the image of the angle of view corresponding to the maximum similarity The feature of the image, and the second index relationship, determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity; according to the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity, and the The first index relationship is used to determine the target identifier.
在一种可能的实现方式中,所述方法还包括:根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系;存储所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。In a possible implementation manner, the method further includes: according to the panoramic images under each viewpoint, acquiring the features of images of each viewpoint under each viewpoint, the first index relationship, and the second Index relationship: store the feature of the image of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
在一种可能的实现方式中,所述根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征,包括:对所述各视点下全景图像采用反投影变换,得到所述各视点下多个视角的图像,以及所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下相邻视角的图像之间的重叠率大于预设重叠率;提取所述各视点下每个视角的图像的特征。In a possible implementation manner, the acquiring the feature of the image of each view angle under each view point according to the panoramic image under each view point includes: applying back projection transformation to the panoramic image under each view point to obtain The images of multiple viewing angles under each viewing point, and the center point of the image of each viewing angle under each viewing point are at the coordinate position of the corresponding panoramic image, and the overlapping ratio between images of adjacent viewing angles under each viewing point is greater than Presetting the overlapping ratio; extracting the feature of the image of each view angle under each view point.
在一种可能的实现方式中,所述对所述各视点下全景图像采用反投影变换,得到所述各视点下多个视角的图像,以及所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,包括:在所述各视点下全景图像中,采用具有第二预设尺寸的滑动窗口在全景图像中进行滑动,且采用反投影变换,依次得到所述滑动窗口内的部分全景图像对应的视角的图像和所述部分全景图像对应的视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下每个视角的图像具有所述第二预设尺寸。In a possible implementation manner, the back-projection transformation is used for the panoramic images under each viewpoint to obtain images of multiple viewing angles under each viewpoint, and the center point of the image of each viewing angle under each viewpoint The coordinate position of the corresponding panoramic image includes: in the panoramic image under each viewpoint, using a sliding window with a second preset size to slide in the panoramic image, and using back-projection transformation to sequentially obtain the sliding window The center point of the image of the angle of view corresponding to the partial panoramic image and the image of the angle of view corresponding to the partial panoramic image is at the coordinate position of the corresponding panoramic image, and the image of each angle of view under each viewpoint has the second preset size.
在一种可能的实现方式中,获取所述第二索引关系,包括:根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下每个视角的图像的特征,构建所述第二索引关系。In a possible implementation manner, obtaining the second index relationship includes: constructing the second index relationship according to the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint, and the feature of the image of each viewing angle under each viewpoint.
在一种可能的实现方式中,获取所述第一索引关系,包括:将所述各视点下全景图像进行切割,得到所述各视点下全景图像对应的图像块;根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下全景图像对应的图像块,构建所述第一索引关系。In a possible implementation manner, obtaining the first index relationship includes: cutting the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint; and constructing the first index relationship according to the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint, and the image blocks corresponding to the panoramic image under each viewpoint.
在一种可能的实现方式中,所述各视点下全景图像对应的图像块具有第一预设尺寸。In a possible implementation manner, the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
在一种可能的实现方式中,所述根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系之前,还包括:采用全景图像拼接技术,根据所述各视点下多个视角的预先采集的图像,获取所述各视点下全景图像,所述各视点下相邻视角的预先采集的图像之间的重叠率小于所述预设重叠率。In a possible implementation manner, before acquiring the features of the images of each viewing angle under each viewpoint, the first index relationship, and the second index relationship according to the panoramic images under each viewpoint, the method further includes: acquiring the panoramic images under each viewpoint from pre-collected images of multiple viewing angles under each viewpoint by using a panoramic image stitching technique, where the overlap ratio between pre-collected images of adjacent viewing angles under each viewpoint is smaller than the preset overlap ratio.
在一种可能的实现方式中,所述采用全景图像拼接技术,根据所述各视点下多个视角的预先采集的图像,获取所述各视点下全景图像,包括:将处于第一视平面中的所述各视点下每个视角的预先采集的图像投影至全景图像所属的第二视平面,以得到所述各视点下全景图像,以及所述第一视平面和所述第二视平面的变换关系。In a possible implementation manner, acquiring the panoramic images under each viewpoint from the pre-collected images of multiple viewing angles under each viewpoint by using the panoramic image stitching technique includes: projecting the pre-collected image of each viewing angle under each viewpoint, which lies in a first viewing plane, onto a second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint and the transformation relationship between the first viewing plane and the second viewing plane.
在一种可能的实现方式中,所述根据所述目标标识对应的图像块,获取第二图像,包括:根据所述变换关系,采用反投影变换将所述目标标识对应的图像块投影至所述第一视平面,得到所述第二图像。In a possible implementation manner, acquiring the second image according to the image block corresponding to the target identifier includes: projecting, according to the transformation relationship and by means of back-projection transformation, the image block corresponding to the target identifier onto the first viewing plane to obtain the second image.
第三方面,本申请实施例提供一种图像处理装置,该图像处理装置可以为电子设备或电子设备中的芯片。该图像处理装置包括:In a third aspect, the embodiment of the present application provides an image processing apparatus, and the image processing apparatus may be an electronic device or a chip in the electronic device. The image processing device includes:
处理模块,用于获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度,根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识,以及根据所述目标标识对应的图像块,获取第二图像,所述第二图像的清晰度高于所述第一图像的清晰度。A processing module, configured to obtain the similarity between the features of the image of each viewing angle under the various viewpoints and the features of the first image, and determine the The target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity, and according to the image block corresponding to the target identifier, acquire a second image, the definition of the second image is higher than the definition of the first image .
在一种可能的实现方式中,处理模块,具体用于获取所述第一图像,以及拍摄所述第一图像的位置,以及确定距离所述位置预设范围内的目标视点,以及获取所述目标视点下每个 视角的图像的特征和所述第一图像的特征的相似度。In a possible implementation manner, the processing module is specifically configured to acquire the first image, a location where the first image is taken, determine a target viewpoint within a preset range from the location, and acquire the A degree of similarity between the feature of the image of each viewing angle under the target viewpoint and the feature of the first image.
在一种可能的实现方式中,所述映射关系包括第一索引关系和第二索引关系,所述第二索引关系为:所述各视点下每个视角的图像的特征和所述各视点下每个视角的图像的中心点的映射关系,所述第一索引关系为:所述各视点下每个视角的图像的中心点与所述各视点下全景图像对应的图像块的标识的映射关系。In a possible implementation manner, the mapping relationship includes a first index relationship and a second index relationship, and the second index relationship is: the feature of the image of each viewing angle under each viewpoint and the The mapping relationship of the central point of the image of each viewing angle, the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
处理模块,具体用于根据所述最大相似度对应的视角的图像的特征,以及所述第二索引关系,确定所述最大相似度对应的视角的图像的特征映射的中心点,以及根据所述最大相似度对应的视角的图像的特征映射的中心点,以及所述第一索引关系,确定所述目标标识。The processing module is specifically configured to determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the features of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, and according to the The center point of the image feature map of the viewing angle corresponding to the maximum similarity, and the first index relationship determine the target identifier.
在一种可能的实现方式中,处理模块,还用于根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。In a possible implementation manner, the processing module is further configured to acquire, according to the panoramic images under each viewpoint, the features of images of each viewpoint under each viewpoint, the first index relationship, and the second index relationship.
存储模块,用于存储所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。A storage module, configured to store features of images of each view angle under each view point, the first index relationship, and the second index relationship.
在一种可能的实现方式中,处理模块,具体用于对所述各视点下全景图像采用反投影变换,得到所述各视点下多个视角的图像,以及所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下相邻视角的图像之间的重叠率大于预设重叠率;提取所述各视点下每个视角的图像的特征。In a possible implementation manner, the processing module is specifically configured to perform back-projection transformation on the panoramic images under each viewpoint to obtain images of multiple angles of view under each viewpoint, and images of each angle of view under each viewpoint. The center point of the image is at the coordinate position of the corresponding panoramic image, and the overlapping ratio between images of adjacent viewing angles under each viewing point is greater than a preset overlapping ratio; extracting features of images of each viewing angle under each viewing point.
在一种可能的实现方式中,处理模块,具体用于在所述各视点下全景图像中,采用具有第二预设尺寸的滑动窗口在全景图像中进行滑动,且采用反投影变换,依次得到所述滑动窗口内的部分全景图像对应的视角的图像和所述部分全景图像对应的视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下每个视角的图像具有所述第二预设尺寸。In a possible implementation manner, the processing module is specifically configured to slide a sliding window with a second preset size over the panoramic image under each viewpoint, and to apply back-projection transformation to sequentially obtain the image of the viewing angle corresponding to the partial panoramic image within the sliding window and the coordinate position, in the corresponding panoramic image, of the center point of that image, where the image of each viewing angle under each viewpoint has the second preset size.
在一种可能的实现方式中,处理模块,具体用于根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下每个视角的图像的特征,构建所述第二索引关系。In a possible implementation manner, the processing module is specifically configured to, according to the coordinate position of the center point of the image of each angle of view under each viewpoint in the corresponding panoramic image, and the coordinate position of the image of each angle of view under each viewpoint features, constructing the second index relationship.
在一种可能的实现方式中,处理模块,具体用于将所述各视点下全景图像进行切割,得到所述各视点下全景图像对应的图像块,以及根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下全景图像对应的图像块,构建所述第一索引关系。In a possible implementation manner, the processing module is specifically configured to cut the panoramic image under each viewpoint to obtain image blocks corresponding to the panoramic image under each viewpoint, and The center point of the image is at the coordinate position of the corresponding panoramic image, and the image blocks corresponding to the panoramic image under each viewpoint are used to construct the first index relationship.
在一种可能的实现方式中,所述各视点下全景图像对应的图像块具有第一预设尺寸。In a possible implementation manner, the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
在一种可能的实现方式中,处理模块,还用于采用全景图像拼接技术,根据所述各视点下多个视角的预先采集的图像,获取所述各视点下全景图像,所述各视点下相邻视角的预先采集的图像之间的重叠率小于所述预设重叠率。In a possible implementation manner, the processing module is further configured to acquire the panoramic images under each viewpoint from pre-collected images of multiple viewing angles under each viewpoint by using a panoramic image stitching technique, where the overlap ratio between pre-collected images of adjacent viewing angles under each viewpoint is smaller than the preset overlap ratio.
在一种可能的实现方式中,处理模块,具体用于将处于第一视平面中的所述各视点下每个视角的预先采集的图像投影至全景图像所属的第二视平面,以得到所述各视点下全景图像,以及所述第一视平面和所述第二视平面的变换关系。In a possible implementation manner, the processing module is specifically configured to project the pre-collected image of each viewing angle under each viewpoint in the first viewing plane to the second viewing plane to which the panoramic image belongs, so as to obtain the The panoramic image under each viewpoint, and the transformation relationship between the first viewing plane and the second viewing plane.
在一种可能的实现方式中,处理模块,具体用于根据所述变换关系,采用反投影变换将所述目标标识对应的图像块投影至所述第一视平面,得到所述第二图像。In a possible implementation manner, the processing module is specifically configured to, according to the transformation relationship, project the image block corresponding to the target identifier to the first viewing plane by using back-projection transformation to obtain the second image.
在一种可能的实现方式中,收发模块,用于接收来自终端的第一图像和所述终端拍摄所述第一图像时所述终端的位置,以及向所述终端发送所述第二图像。In a possible implementation manner, the transceiver module is configured to receive the first image from the terminal and the location of the terminal when the terminal captures the first image, and send the second image to the terminal.
第四方面,本申请实施例提供一种图像处理装置,该图像处理装置可以为终端或终端中的芯片。该图像处理装置包括:In a fourth aspect, the embodiment of the present application provides an image processing device, and the image processing device may be a terminal or a chip in the terminal. The image processing device includes:
在一种可能的实现方式中,终端中存储有各视点下全景图像对应的图像块,以及所述各视点下每个视角的图像的特征和所述各视点下全景图像对应的图像块的标识的映射关系,所述各视点下每个视角的图像和所述各视点下全景图像对应的图像块均基于所述各视点下全景图像得到。In a possible implementation manner, the terminal stores the image blocks corresponding to the panoramic images under each viewpoint, and a mapping relationship between the features of the images of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint, where the images of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic images under each viewpoint are both obtained on the basis of the panoramic images under each viewpoint.
拍摄模块,用于使用第一倍率拍摄第一图像,第一倍率大于预设倍率。A photographing module, configured to capture a first image at a first magnification, where the first magnification is greater than a preset magnification.
处理模块,用于获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度,以及根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识,以及根据所述目标标识对应的图像块,获取第二图像。所述第二图像的清晰度高于所述第一图像的清晰度。A processing module, configured to: acquire the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image; determine, according to the feature of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship, the target identifier to which the feature of the image of the viewing angle corresponding to the maximum similarity is mapped; and acquire a second image according to the image block corresponding to the target identifier. The definition of the second image is higher than that of the first image.
显示模块,用于响应于图像显示指令,可以显示第二图像。The display module is configured to display the second image in response to the image display instruction.
在一种可能的实现方式中,处理模块,具体用于确定距离所述终端的位置预设范围内的目标视点,以及获取所述目标视点下每个视角的图像的特征和所述第一图像的特征的相似度。In a possible implementation manner, the processing module is specifically configured to determine a target viewpoint within a preset range from the position of the terminal, and to acquire the similarity between the feature of the image of each viewing angle under the target viewpoint and the feature of the first image.
在一种可能的实现方式中,所述映射关系包括第一索引关系和第二索引关系,所述第二索引关系为:所述各视点下每个视角的图像的特征和所述各视点下每个视角的图像的中心点的映射关系,所述第一索引关系为:所述各视点下每个视角的图像的中心点与所述各视点下全景图像对应的图像块的标识的映射关系。In a possible implementation manner, the mapping relationship includes a first index relationship and a second index relationship, and the second index relationship is: the feature of the image of each viewing angle under each viewpoint and the The mapping relationship of the central point of the image of each viewing angle, the first index relationship is: the mapping relationship between the central point of the image of each viewing angle under each viewing point and the identifier of the image block corresponding to the panoramic image under each viewing point .
处理模块,具体用于根据所述最大相似度对应的视角的图像的特征,以及所述第二索引关系,确定所述最大相似度对应的视角的图像的特征映射的中心点,以及根据所述最大相似度对应的视角的图像的特征映射的中心点,以及所述第一索引关系,确定所述目标标识。The processing module is specifically configured to determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the features of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, and according to the The center point of the image feature map of the viewing angle corresponding to the maximum similarity, and the first index relationship determine the target identifier.
在一种可能的实现方式中,处理模块,还用于根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。In a possible implementation manner, the processing module is further configured to acquire, according to the panoramic images under each viewpoint, the features of images of each viewpoint under each viewpoint, the first index relationship, and the second index relationship.
存储模块,用于存储所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。A storage module, configured to store features of images of each view angle under each view point, the first index relationship, and the second index relationship.
在一种可能的实现方式中,处理模块,具体用于对所述各视点下全景图像采用反投影变换,得到所述各视点下多个视角的图像,以及所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下相邻视角的图像之间的重叠率大于预设重叠率;提取所述各视点下每个视角的图像的特征。In a possible implementation manner, the processing module is specifically configured to perform back-projection transformation on the panoramic images under each viewpoint to obtain images of multiple angles of view under each viewpoint, and images of each angle of view under each viewpoint. The center point of the image is at the coordinate position of the corresponding panoramic image, and the overlapping ratio between images of adjacent viewing angles under each viewing point is greater than a preset overlapping ratio; extracting features of images of each viewing angle under each viewing point.
在一种可能的实现方式中,处理模块,具体用于在所述各视点下全景图像中,采用具有第二预设尺寸的滑动窗口在全景图像中进行滑动,且采用反投影变换,依次得到所述滑动窗口内的部分全景图像对应的视角的图像和所述部分全景图像对应的视角的图像的中心点在对应的全景图像的坐标位置,所述各视点下每个视角的图像具有所述第二预设尺寸。In a possible implementation manner, the processing module is specifically configured to use a sliding window with a second preset size to slide in the panoramic image in the panoramic image at each viewpoint, and use back-projection transformation to sequentially obtain The image of the angle of view corresponding to the partial panoramic image in the sliding window and the center point of the image of the angle of view corresponding to the partial panoramic image are at the coordinate position of the corresponding panoramic image, and the image of each angle of view under each viewpoint has the Second default size.
在一种可能的实现方式中,处理模块,具体用于根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下每个视角的图像的特征,构建所述第二索引关系。In a possible implementation manner, the processing module is specifically configured to, according to the coordinate position of the center point of the image of each angle of view under each viewpoint in the corresponding panoramic image, and the coordinate position of the image of each angle of view under each viewpoint features, constructing the second index relationship.
在一种可能的实现方式中,处理模块,具体用于将所述各视点下全景图像进行切割,得到所述各视点下全景图像对应的图像块,以及根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下全景图像对应的图像块,构建所述第一索引关系。In a possible implementation manner, the processing module is specifically configured to cut the panoramic image under each viewpoint to obtain image blocks corresponding to the panoramic image under each viewpoint, and The center point of the image is at the coordinate position of the corresponding panoramic image, and the image blocks corresponding to the panoramic image under each viewpoint are used to construct the first index relationship.
在一种可能的实现方式中,所述各视点下全景图像对应的图像块具有第一预设尺寸。In a possible implementation manner, the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
在一种可能的实现方式中,处理模块,还用于采用全景图像拼接技术,根据所述各视点下多个视角的预先采集的图像,获取所述各视点下全景图像,所述各视点下相邻视角的预先采集的图像之间的重叠率小于所述预设重叠率。In a possible implementation manner, the processing module is further configured to use the panoramic image stitching technology to acquire the panoramic images under each viewpoint according to the pre-collected images of multiple viewpoints under each viewpoint, and the panoramic images under each viewpoint are The overlap ratio between pre-collected images of adjacent viewing angles is smaller than the preset overlap ratio.
在一种可能的实现方式中,处理模块,具体用于将处于第一视平面中的所述各视点下每个视角的预先采集的图像投影至全景图像所属的第二视平面,以得到所述各视点下全景图像,以及所述第一视平面和所述第二视平面的变换关系。In a possible implementation manner, the processing module is specifically configured to project the pre-collected image of each viewing angle under each viewpoint in the first viewing plane to the second viewing plane to which the panoramic image belongs, so as to obtain the The panoramic image under each viewpoint, and the transformation relationship between the first viewing plane and the second viewing plane.
在一种可能的实现方式中,处理模块,具体用于根据所述变换关系,采用反投影变换将所述目标标识对应的图像块投影至所述第一视平面,得到所述第二图像。In a possible implementation manner, the processing module is specifically configured to, according to the transformation relationship, project the image block corresponding to the target identifier to the first viewing plane by using back-projection transformation to obtain the second image.
第五方面,本申请实施例提供一种电子设备,该电子设备可以为上述的云端、终端。该电子设备中可以包括:处理器、存储器。存储器用于存储计算机可执行程序代码,程序代码包括指令;当处理器执行指令时,指令使所述电子设备执行如第一方面、第二方面中的方法。In a fifth aspect, the embodiment of the present application provides an electronic device, and the electronic device may be the above-mentioned cloud or terminal. The electronic device may include: a processor and a memory. The memory is used to store computer-executable program codes, and the program codes include instructions; when the processor executes the instructions, the instructions cause the electronic device to execute the methods in the first aspect and the second aspect.
第六方面,本申请实施例提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面、第二方面中的方法。In a sixth aspect, the embodiments of the present application provide a computer program product including instructions, which, when run on a computer, cause the computer to execute the methods in the first aspect and the second aspect above.
第七方面,本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述第一方面、第二方面中的方法。In a seventh aspect, the embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores instructions which, when run on a computer, cause the computer to execute the methods in the first aspect and the second aspect above.
上述第二方面至第七方面的各可能的实现方式,其有益效果可以参见上述第一方面所带来的有益效果,在此不加赘述。For the beneficial effects of the possible implementation manners of the above-mentioned second aspect to the seventh aspect, reference may be made to the beneficial effects brought by the above-mentioned first aspect, which will not be repeated here.
附图说明Description of drawings
图1为本申请实施例适用的一种场景示意图;FIG. 1 is a schematic diagram of a scene applicable to an embodiment of the present application;
图2为现有技术中云端存储的一拍摄位置处不同拍摄角度的高清图像的示意图;FIG. 2 is a schematic diagram of high-definition images at different shooting angles at one shooting position stored in the cloud in the prior art;
图3为现有技术中图像处理方法的一种示意图;Fig. 3 is a schematic diagram of an image processing method in the prior art;
图4为本申请实施例提供的云端存储图像块和索引关系的一种示意图;Fig. 4 is a schematic diagram of cloud storage image block and index relationship provided by the embodiment of the present application;
图5为本申请实施例提供的云端存储图像块和索引关系的另一种示意图;Fig. 5 is another schematic diagram of cloud storage image block and index relationship provided by the embodiment of the present application;
图6为本申请实施例提供的云端获取视点下不同视角的图像的示意图;FIG. 6 is a schematic diagram of images of different viewing angles acquired by the cloud under the viewpoint provided by the embodiment of the present application;
图7为本申请实施例提供的图像处理方法的一种实施例的流程示意图;FIG. 7 is a schematic flowchart of an embodiment of an image processing method provided in the embodiment of the present application;
图8为本申请实施例提供的拍照界面的一种变化示意图;FIG. 8 is a schematic diagram of a variation of the camera interface provided by the embodiment of the present application;
图9为本申请实施例提供的图像处理方法的一种实施例的流程示意图;FIG. 9 is a schematic flowchart of an embodiment of an image processing method provided in an embodiment of the present application;
图10为本申请实施例提供云端存储的全景图像的示意图;FIG. 10 is a schematic diagram of a panoramic image stored in the cloud provided by an embodiment of the present application;
图11为本申请实施例提供的图像处理方法的另一种实施例的流程示意图;FIG. 11 is a schematic flowchart of another embodiment of the image processing method provided by the embodiment of the present application;
图12为本申请实施例提供的图像处理装置的一种结构示意图;FIG. 12 is a schematic structural diagram of an image processing device provided in an embodiment of the present application;
图13为本申请实施例提供的电子设备的一种结构示意图。FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
具体实施方式Detailed description of embodiments
本申请实施例涉及的术语释义:Definitions of terms involved in the embodiments of this application:
单视点:视点可以理解为拍照时拍照设备(如手机)的位置,单视点即一个位置。Single viewpoint: a viewpoint can be understood as the position of the photographing device (such as a mobile phone) when a photo is taken; a single viewpoint is one such position.
全景图像拼接技术:是将多张图像拼接成一张大尺度图像。示例性的,本申请实施例中是将多张高清图像拼接成一张全景图像。这里简述全景图像拼接技术的原理,全景图像拼接可以包括但不限于4个步骤,该4个步骤分别为:检测并提取图像的特征和关键点、匹配两个图像的关键点、使用随机抽样一致算法(random sample consensus,RANSAC)估计单应矩阵,以及拼接图像。Panoramic image stitching technology: stitching multiple images into one large-scale image. Exemplarily, in the embodiments of the present application, multiple high-definition images are stitched into one panoramic image. The principle of panoramic image stitching is briefly described here. Panoramic image stitching may include, but is not limited to, four steps: detecting and extracting features and key points of the images, matching the key points of two images, estimating a homography matrix using the random sample consensus (RANSAC) algorithm, and stitching the images.
在一种实施例中,全景图像拼接技术具体实现可以包括:采用尺度不变特征变换(scale-invariant feature transform,SIFT)局部描述算子检测图像中的关键点和特征(特征描 述符或SIFT特征),以及匹配两个图像之间的特征描述符,即使用特征匹配两个图像的关键点。接着采用RANSAC算法,使用两个图像上匹配上的关键点来估计单应矩阵(homography estimation),也就是将其中一张图像通过关联性和另一张图像匹配。In one embodiment, the specific implementation of the panoramic image stitching technology may include: using a scale-invariant feature transform (scale-invariant feature transform, SIFT) local description operator to detect key points and features (feature descriptors or SIFT features) in the image ), and matching feature descriptors between two images, that is, using features to match keypoints of two images. Then, the RANSAC algorithm is used to estimate the homography matrix (homography estimation) using the key points matched on the two images, that is, to match one of the images with the other image through correlation.
估计单应矩阵后,可以采用透视变换(perspective transformation),如可以输入单应矩阵、想要扭曲的图像,还有输出图像的形状,进而通过获取两张图像的宽度之和然后使用图像的高度确定输出图像的导出形状,具体可以参照透视变换的现有技术中的相关描述。透视变换可以理解为:将图像投影至一个新的视平面(viewing plane),透视变换也称作投影映射(projective mapping)或投影变换。After the homography matrix is estimated, perspective transformation can be applied. For example, the homography matrix, the image to be warped, and the shape of the output image can be taken as inputs, and the derived shape of the output image can be determined by taking the sum of the widths of the two images and using the height of the images; for details, reference may be made to the related descriptions of perspective transformation in the prior art. Perspective transformation can be understood as projecting an image onto a new viewing plane; it is also called projective mapping or projection transformation.
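A minimal sketch of such a stitching pipeline, assuming OpenCV is available, is given below; the specific calls (cv2.SIFT_create, cv2.findHomography, cv2.warpPerspective), the Lowe ratio of 0.75, and the canvas size are illustrative choices rather than the disclosed implementation:

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    """Stitch two overlapping images: SIFT keypoints -> descriptor matching ->
    RANSAC homography estimation -> perspective warp onto a common canvas.
    Also returns the estimated homography (the viewing-plane transformation)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Match descriptors and keep matches that pass Lowe's ratio test.
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC homography

    # Warp the left image into the right image's viewing plane and paste the
    # right image; the canvas width is the sum of the two widths.
    h1, w1 = img_left.shape[:2]
    h2, w2 = img_right.shape[:2]
    canvas = cv2.warpPerspective(img_left, H, (w1 + w2, max(h1, h2)))
    canvas[0:h2, 0:w2] = img_right
    return canvas, H
```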
投影变换:可以参照透视变换的描述。Projection transformation: You can refer to the description of perspective transformation.
反投影变换:投影变换指的是将图像投影至一个新的视平面的过程,反投影变换指的是将该新的视平面的图像投影至图像原视平面。应理解的是,在投影变换的过程中,可以得到原视平面至新的视平面之间的变换关系(如变换矩阵),反投影变换的过程即采用该"原视平面至新的视平面之间的变换关系",将新的视平面的图像投影至图像原视平面。Back-projection transformation: projection transformation refers to the process of projecting an image onto a new viewing plane, and back-projection transformation refers to projecting the image in the new viewing plane back onto the original viewing plane of the image. It should be understood that, in the process of projection transformation, the transformation relationship (such as a transformation matrix) between the original viewing plane and the new viewing plane can be obtained, and the process of back-projection transformation uses this transformation relationship between the original viewing plane and the new viewing plane to project the image in the new viewing plane back onto the original viewing plane of the image.
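Continuing the assumption that OpenCV is used, back-projection with the transformation relationship obtained during projection transformation might look like the following sketch (illustrative only; the function name is hypothetical):

```python
import cv2

def back_project(image_in_new_plane, H, out_size):
    """Project an image from the new viewing plane back onto the original viewing
    plane, reusing the homography H obtained during projection transformation."""
    # Equivalent to warping with np.linalg.inv(H); WARP_INVERSE_MAP tells OpenCV
    # to interpret H as the forward (original plane -> new plane) mapping.
    return cv2.warpPerspective(image_in_new_plane, H, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```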
在一种实施例中,反投影变换可以称为反投影映射、反透视变换,或逆透视变换,反投影变换具体可以参照反投影变换的现有技术中的相关描述。In an embodiment, the back-projection transformation may be called back-projection mapping, inverse perspective transformation, or inverse perspective transformation. For details of the back-projection transformation, reference may be made to relevant descriptions in the prior art of back-projection transformation.
高清图像:如采用单反相机拍摄的具有高清晰度的图像,高清图像的清晰度大于预设清晰度。在一种实施例中,若图像的分辨率相同,则码率越高,清晰度越高,在该种场景下,可以采用预设码率表征预设清晰度。本申请实施例中不限制表征清晰度的参数。High-definition image: for example, an image with high definition captured by a single-lens reflex camera; the definition of a high-definition image is greater than a preset definition. In one embodiment, if the resolutions of the images are the same, the higher the bit rate, the higher the definition; in this scenario, a preset bit rate can be used to represent the preset definition. The parameters used to represent definition are not limited in the embodiments of the present application.
拍照倍率:指变焦倍率。Photo magnification: Refers to the zoom magnification.
高倍率:拍照时采用的变焦倍率大于预设倍率。预设倍率取决于终端的拍照能力,不同终端的预设倍率可以相同或不同,在一种实施例中,如预设倍率可以为5。High magnification: The zoom magnification used when taking pictures is greater than the preset magnification. The preset magnification depends on the camera capability of the terminal, and the preset magnifications of different terminals may be the same or different. In one embodiment, the preset magnification may be 5, for example.
图1为本申请实施例适用的一种场景示意图。图1以终端为手机、单反相机进行比较说明,以手机和单反相机均拍摄电脑屏幕为例进行说明。参照图1中的a,用户使用单反相机采用倍率30进行拍照时,可以得到高清图像,如用户可以清楚地在图像中看到图像中电脑屏幕上的文字“一二三四”。当用户使用手机采用倍率30(即图1中的b中的30x)进行拍照时,拍照得到的图像的清晰度低,用户不能清楚地看到电脑屏幕上的文字,只能看到几个阴影方块,如图1中的b所示。应理解,为了便于说明手机、单反相机拍摄得到的图像,在图1中的a的右侧和图1中的b的右侧分别示出了手机、单反相机采用倍率30拍摄得到的图像。FIG. 1 is a schematic diagram of a scene applicable to an embodiment of the present application. Figure 1 compares the terminal with a mobile phone and a SLR camera, and uses both the mobile phone and the SLR camera to capture computer screens as an example. Referring to a in Figure 1, when the user uses a SLR camera to take pictures with a magnification of 30, he can get a high-definition image. For example, the user can clearly see the text "one two three four" on the computer screen in the image. When the user uses a mobile phone to take a photo with a magnification of 30 (that is, 30x in b in Figure 1), the resolution of the image obtained by taking the photo is low, and the user cannot clearly see the text on the computer screen, and can only see a few shadows block, as shown in b in Figure 1. It should be understood that, for the convenience of illustrating the images captured by the mobile phone and the SLR camera, the images captured by the mobile phone and the SLR camera at a magnification of 30 are shown on the right side of a in FIG. 1 and the right side of b in FIG. 1 respectively.
应理解,本申请实施例中,用户使用高倍率拍照可以理解为:用户拍照时拍照倍率大于预设倍率。预设倍率可以参照上述术语解释中的相关描述。It should be understood that, in the embodiment of the present application, the user takes a photo with a high magnification, which may be understood as: the user takes a photo with a magnification greater than a preset magnification. For the preset magnification, refer to the related description in the above term explanation.
为了提高终端使用高倍率拍照得到的图像的清晰度,现有技术中可以预先在云端存储大量的高清图像。高清图像包括:不同拍摄位置拍摄的高清图像,以及同一拍摄位置不同拍摄角度拍摄的高清图像。示例性的,图2为现有技术中云端存储的A拍摄位置处不同拍摄角度拍摄的高清图像的示意图,应理解图2中以黑色矩形表征拍摄的对象,以高清图像为6张为例进行说明。在一种实施例中,现有技术中云端存储的高清图像的画面的重叠率大于或等于第一重叠率,第一重叠率如80%。在一种实施例中,拍摄位置可以称为视点,拍摄角度可以称为视角,换句话说,现有技术中云端存储有不同视点拍摄的高清图像,以及同一视点不同视角拍摄的高清图像。In order to improve the definition of images obtained when a terminal takes photos at a high magnification, in the prior art a large number of high-definition images can be stored in the cloud in advance. The high-definition images include high-definition images taken at different shooting positions and high-definition images taken at different shooting angles at the same shooting position. Exemplarily, FIG. 2 is a schematic diagram of high-definition images taken at different shooting angles at shooting position A and stored in the cloud in the prior art; it should be understood that in FIG. 2 the black rectangle represents the photographed object, and six high-definition images are taken as an example for description. In one embodiment, the overlap ratio between the frames of the high-definition images stored in the cloud in the prior art is greater than or equal to a first overlap ratio, for example 80%. In one embodiment, a shooting position may be called a viewpoint and a shooting angle may be called a viewing angle; in other words, in the prior art the cloud stores high-definition images taken at different viewpoints and high-definition images taken at different viewing angles at the same viewpoint.
参照图3,现有技术中,终端使用高倍率拍照时,可以将拍摄的图像发送至云端,云端获取云端中存储的每个高清图像与来自终端的图像的相似度,进而将相似度最大的高清图像反 馈至终端。终端接收来自云端的高清图像后,可以显示高清图像,用户可以看到终端使用高倍率拍照得到高清图像。示例性的,图3中以终端向云端发送图1中的b所示的低清晰度的图像,云端可以向终端反馈高清晰度的图像,如显示有“一二三四”文字的图像。现有技术的方法中虽然能够使得终端在高倍率拍照时拍摄得到高清图像,但云端需要存储大量的高清图像,占用大量的存储空间,云端的存储开销大。Referring to Fig. 3, in the prior art, when the terminal uses a high magnification to take pictures, the captured image can be sent to the cloud, and the cloud obtains the similarity between each high-definition image stored in the cloud and the image from the terminal, and then the highest similarity High-definition images are fed back to the terminal. After the terminal receives the high-definition image from the cloud, it can display the high-definition image, and the user can see the high-definition image obtained by the terminal taking pictures with a high magnification. Exemplarily, in FIG. 3, the terminal sends the low-resolution image shown in b in FIG. 1 to the cloud, and the cloud can feed back a high-resolution image to the terminal, such as an image displaying the words "one two three four". Although the method in the prior art can enable the terminal to obtain high-definition images when taking pictures at high magnifications, the cloud needs to store a large number of high-definition images, occupying a large amount of storage space, and the storage cost of the cloud is large.
在一种实施例中,可以减少云端存储的高清图像,如存储重叠率小于第二重叠率的高清图像,第二重叠率如20%。这样,因为云端存储的高清图像的数量少,如同一拍摄位置处图像对应的拍摄角度减少,如此基于云端中存储的每个高清图像与来自终端的图像的相似度比较的方法,向终端反馈的高清图像与实际终端拍摄的图像的拍摄角度有差别,导致用户看起来是不同拍摄角度拍摄的图像,高清图像的反馈准确率低,导致用户体验低。In one embodiment, the number of high-definition images stored in the cloud can be reduced, for example by storing high-definition images whose overlap ratio is smaller than a second overlap ratio, such as 20%. In this way, because fewer high-definition images are stored in the cloud, the shooting angles available for the same shooting position are reduced. With the method of comparing the similarity between each high-definition image stored in the cloud and the image from the terminal, the high-definition image fed back to the terminal then differs in shooting angle from the image actually taken by the terminal, so that to the user it looks like an image taken at a different shooting angle. The feedback accuracy of the high-definition images is low, resulting in poor user experience.
基于上述问题,一方面,本申请实施例中可以在云端中存储不同单视点(或视点)下全景图像(或由全景图像分割成的图像块),对于单视点来说,云端存储的高清图像由多张高清图像变为一张全景图像(或一张全景图像对应的多个图像块),可以降低云端的存储开销。另一方面,为了保证向终端反馈准确的高清图像,还需要存储同一视点下不同拍摄角度的高清图像,本申请实施例中,在减少云端存储开销的基础上,可以存储不同视点以及同一视点下不同拍摄角度的高清图像的特征,这样能够在保证反馈准确性的基础上,还减小了云端的存储开销。In view of the above problems, on the one hand, in the embodiments of the present application the cloud can store panoramic images (or image blocks obtained by segmenting the panoramic images) under different single viewpoints (or viewpoints). For a single viewpoint, what the cloud stores changes from multiple high-definition images to one panoramic image (or the multiple image blocks corresponding to one panoramic image), which can reduce the storage overhead of the cloud. On the other hand, to ensure that accurate high-definition images are fed back to the terminal, high-definition images of different shooting angles at the same viewpoint would also need to be available; in the embodiments of the present application, on the basis of reducing the cloud storage overhead, the features of the high-definition images of different viewpoints and of different shooting angles at the same viewpoint can be stored, which ensures feedback accuracy while still reducing the storage overhead of the cloud.
在一种实施例中,本申请实施例中的终端可以称为用户设备,终端具有拍照功能,且支持高倍率拍照。本申请实施例中的终端在使用高倍率进行拍照时,拍照得到的图像的清晰度低。应理解,高倍率可以理解为拍照倍率大于预设倍率。示例性的,终端可以为手机、平板电脑(portable android device,PAD)、个人数字处理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备、或可穿戴设备,虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备、智慧家庭(smart home)中的终端等,本申请实施例中对终端的形态不做具体限定。In one embodiment, the terminal in the embodiments of the present application may be called a user equipment; the terminal has a photographing function and supports high-magnification photographing. When the terminal in the embodiments of the present application takes photos at a high magnification, the definition of the obtained images is low. It should be understood that a high magnification means that the photographing magnification is greater than the preset magnification. Exemplarily, the terminal may be a mobile phone, a tablet computer (portable android device, PAD), a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device, a wearable device, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a terminal in a smart home, or the like; the form of the terminal is not specifically limited in the embodiments of the present application.
在一种实施例中,云端可以为服务器,或服务器集群。示例性的,服务器可以如拍照应用程序对应的服务器,或携带有拍照功能的应用程序对应的服务器,本申请实施例对云端的形态不做具体限定。In an embodiment, the cloud may be a server, or a server cluster. Exemplarily, the server may be a server corresponding to a photographing application program, or a server corresponding to an application program carrying a photographing function, and the embodiment of the present application does not specifically limit the form of the cloud.
在介绍本申请实施例提供的图像处理方法之前,首先对云端中存储的内容进行说明:Before introducing the image processing method provided by the embodiment of this application, the content stored in the cloud will be described first:
在一种实施例中,云端中存储有不同视点下全景图像对应的多个图像块,其中,同一视点下全景图像对应的多个图像块之间无重叠或重叠率小于第三重叠率。示例性的,第三重叠率可以为20%、10%等更小的数值。应理解,全景图像为高清图像,全景图像对应的多个图像块也为高清的图像块。In one embodiment, multiple image blocks corresponding to panoramic images under different viewpoints are stored in the cloud, wherein there is no overlap between the multiple image blocks corresponding to panoramic images under the same viewpoint or the overlapping ratio is less than the third overlapping ratio. Exemplarily, the third overlapping ratio may be a smaller value such as 20%, 10%, or the like. It should be understood that the panoramic image is a high-definition image, and multiple image blocks corresponding to the panoramic image are also high-definition image blocks.
在一种实施例中,云端中存储有不同视点下全景图像。In one embodiment, panoramic images from different viewpoints are stored in the cloud.
本申请实施例中,对于一视点来说,因为云端未存储该视点下多张视角的高清图像,而是存储该视点下的一张全景图像,或全景图像对应的多个图像块,可以减少云端的存储开销。下述实施例中以一个视点为例介绍本申请实施例提供的图像处理方法。In the embodiment of the present application, for a viewpoint, because the cloud does not store multiple high-definition images of the viewpoint under the viewpoint, but stores a panoramic image under the viewpoint, or multiple image blocks corresponding to the panoramic image, it can reduce Storage overhead in the cloud. In the following embodiments, a viewpoint is taken as an example to introduce the image processing method provided in the embodiments of the present application.
在一种实施例中,本申请实施例中不同视点下全景图像可以通过全景摄像机拍摄得到,或者由不同视点下低重叠高清图像拼接得到。其中,低重叠高清图像之间的重叠率小于第二重叠率。下述以由不同视点下低重叠高清图像拼接得到不同视点下全景图像为例进行说明。In an embodiment, the panoramic images under different viewpoints in the embodiment of the present application may be obtained by shooting with a panoramic camera, or spliced from low-overlapping high-definition images under different viewpoints. Wherein, the overlap ratio between the low-overlap high-definition images is smaller than the second overlap ratio. In the following, the panoramic images under different viewpoints obtained by splicing low-overlapping high-definition images under different viewpoints are taken as an example for illustration.
参照图4,云端存储内容的过程可以包括如下步骤:Referring to Figure 4, the process of storing content in the cloud may include the following steps:
S401,云端采用全景图像拼接技术,对同一视点下的低重叠高清图像进行拼接,以得到不同视点下的全景图像。S401, the cloud uses panoramic image stitching technology to stitch low-overlap high-definition images under the same viewpoint to obtain panoramic images under different viewpoints.
在一种实施例中,同一视点下的低重叠高清图像可以由单反相机等能够拍摄高清图像的拍照设备采集得到。示例性的,如可以预先采用单反相机在同一视点下拍摄不同视角的高清图像,进而得到不同视点下不同视角的高清图像。在一种实施例中,各视点下低重叠高清图像可以称为各视点下多个视角的预先采集的图像。In one embodiment, low-overlap high-definition images under the same viewpoint can be captured by a camera device capable of capturing high-definition images, such as a single-lens reflex camera. Exemplarily, for example, a single-lens reflex camera may be used in advance to capture high-definition images of different viewing angles at the same viewpoint, and then obtain high-definition images of different viewing angles at different viewpoints. In an embodiment, the high-definition images with low overlap at each viewpoint may be referred to as pre-collected images of multiple viewing angles at each viewpoint.
其中,单反相机采集到的同一视点下相邻视角高清图像之间的重叠率小于第二重叠率,或者,可以在单反相机采集到的同一视点下的高清图像中选择高清图像,使得相邻视角高清图像之间的重叠率小于第二重叠率,以得到同一视点下的低重叠高清图像。其中,低重叠高清图像的重叠率小于第二重叠率的目的是为了:减少云端进行全景图像拼接的计算量,提高拼接效率。在一种实施例中,还可以说相邻视角高清图像之间的重叠率小于预设重叠率,该预设重叠率大于或等于第二重叠率且小于第一重叠率。Wherein, the overlap rate between the high-definition images of adjacent viewing angles collected by the single-lens reflex camera at the same viewpoint is less than the second overlapping ratio, or the high-definition image can be selected from the high-definition images collected by the single-lens reflex camera under the same viewpoint, so that the adjacent viewing angles The overlapping rate between high-definition images is smaller than the second overlapping rate, so as to obtain low-overlapping high-definition images under the same viewpoint. Among them, the purpose of the overlap rate of the low-overlap high-definition image being smaller than the second overlap rate is to reduce the calculation amount of panoramic image stitching in the cloud and improve stitching efficiency. In one embodiment, it can also be said that the overlap rate between the high-definition images of adjacent viewing angles is less than a preset overlap rate, and the preset overlap rate is greater than or equal to the second overlap rate and less than the first overlap rate.
对于同一视点下的低重叠高清图像,云端可以采用全景图像拼接技术,得到该视点下的全景图像。依据全景图像拼接技术,云端可以得到不同视点下的全景图像。全景图像拼接技术具体可以参照术语释义中的相关描述。For low-overlap high-definition images at the same viewpoint, the cloud can use panoramic image stitching technology to obtain panoramic images at this viewpoint. According to the panoramic image stitching technology, the cloud can obtain panoramic images from different viewpoints. For the panoramic image stitching technology, please refer to the relevant description in the definition of terms.
其中,S401如图5中的S1,图5为图4的简化流程示意图。Wherein, S401 is shown as S1 in FIG. 5 , and FIG. 5 is a simplified flow diagram of FIG. 4 .
图6为本申请实施例提供的云端获取同一视点下不同视角的图像的示意图。参照图6中的a,以一视点下的低重叠高清图像包括2张为例进行说明,云端执行S401可以得到该视点下的全景图像,如图6中的b所示。FIG. 6 is a schematic diagram of the cloud acquiring images of different viewing angles under the same viewpoint provided by the embodiment of the present application. Referring to a in FIG. 6 , it is illustrated by taking two low-overlapping high-definition images at a viewpoint as an example. The cloud executes S401 to obtain a panoramic image at the viewpoint, as shown in b in FIG. 6 .
S402,云端将各视点下的全景图像进行切割,得到各视点下全景图像对应的图像块。S402. The cloud segmentes the panoramic images under each viewpoint to obtain image blocks corresponding to the panoramic images under each viewpoint.
以一个视点为例,云端可以将该视点下全景图像切割成具有预设尺寸的图像块,得到该视点对应的多个图像块。在一种实施例中,每个图像块的尺寸相同,如均为800px*900px,即每个图像块具有第一预设尺寸,第一预设尺寸可以理解为具有第一预设宽度和第一预设高度。其中,1px表征一个像素。在一种实施例中,每个图像块的尺寸可以不同。Taking one viewpoint as an example, the cloud can cut the panoramic image under this viewpoint into image blocks with a preset size to obtain multiple image blocks corresponding to this viewpoint. In one embodiment, the size of every image block is the same, for example 800px*900px; that is, each image block has a first preset size, where the first preset size can be understood as having a first preset width and a first preset height. Here, 1px represents one pixel. In another embodiment, the sizes of the image blocks may differ.
在一种实施例中,同一视点对应的相邻两个图像块之间无重叠,即不包含相同的区域。在一种实施例中,同一视点对应的相邻两个图像块之间的重叠率可以小于第三重叠率。In an embodiment, there is no overlap between two adjacent image blocks corresponding to the same viewpoint, that is, they do not contain the same area. In an embodiment, the overlapping ratio between two adjacent image blocks corresponding to the same viewpoint may be smaller than the third overlapping ratio.
在一种实施例中,云端将全景图像切割成图像块后,可以对每个图像块进行编号。示例性的,可以按照切割后的图像块在全景图像中的行、列进行编号,如一图像块位于全景图像中的第一行、第一列,可以将该图像块编号为行1、列1。示例性的,可以对图像块按照1至N的顺序进行编号,如第一行第一列的图像块编号为1,第一行第二列的图像块的编号为2,N为大于1的整数。本申请实施例中对图像块的编号的方式不做限制,下述实施例中以行、列对图像块进行编号为例进行说明。在一种实施例中,图像块的行、列编号或“1-N”的编号可以称为图像块的标识。In one embodiment, after the cloud divides the panoramic image into image blocks, each image block may be numbered. Exemplarily, it can be numbered according to the row and column of the cut image block in the panoramic image. For example, if an image block is located in the first row and first column of the panoramic image, the image block can be numbered as row 1 and column 1 . Exemplarily, the image blocks can be numbered in the order of 1 to N, for example, the number of the image block in the first row and the first column is 1, the number of the image block in the first row and the second column is 2, and N is greater than 1 integer. In the embodiment of the present application, there is no limitation on the way of numbering the image blocks. In the following embodiments, the numbering of image blocks in rows and columns is used as an example for illustration. In an embodiment, the row, column number or "1-N" number of the image block may be referred to as the identifier of the image block.
本申请实施例中,将各视点下的全景图像切割成图像块是为了:便于云端对图像块进行加载,而非直云端接加载整个全景图像,因为图像块的加载时间小于整个全景图像的加载时间,因此可以提高云端的加载速度,提高云端向终端反馈高清图像的速度,具体可以参照图7中的相关描述。In the embodiment of the present application, the purpose of cutting the panoramic images under each viewpoint into image blocks is to facilitate the cloud to load the image blocks, instead of directly loading the entire panoramic image directly from the cloud, because the loading time of the image blocks is shorter than that of the entire panoramic image. Therefore, the loading speed of the cloud can be improved, and the speed of the high-definition image feedback from the cloud to the terminal can be improved. For details, refer to the relevant description in FIG. 7 .
其中,S402如图5中的S2。Wherein, S402 is shown as S2 in FIG. 5 .
参照图6,示例性的,云端执行S402,可以将全景图像切割成8个图像块,如图6中的c所示。Referring to FIG. 6 , for example, the cloud executes S402 to cut the panoramic image into 8 image blocks, as shown in c in FIG. 6 .
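As a minimal, non-limiting sketch of this cutting-and-numbering step (assuming the panorama is held as a NumPy array, reusing the illustrative 800px*900px block size from above; the function name is hypothetical):

```python
import numpy as np

def cut_into_blocks(panorama, block_w=800, block_h=900):
    """Cut a panoramic image (H x W x C array) into image blocks of the first
    preset size and key each block by its (row, column) identifier."""
    blocks = {}
    height, width = panorama.shape[:2]
    for row, y in enumerate(range(0, height, block_h), start=1):
        for col, x in enumerate(range(0, width, block_w), start=1):
            # Blocks at the right/bottom edge may be smaller than the preset size.
            blocks[(row, col)] = panorama[y:y + block_h, x:x + block_w]
    return blocks
```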
S403,云端采用反投影变换,得到各视点下全景图像对应的多个视角的图像。S403. The cloud adopts back-projection transformation to obtain images of multiple viewing angles corresponding to the panoramic image at each viewing point.
在S401中,云端采用全景图像拼接技术获取不同视点下全景图像的过程中,可以获取各视点下低重叠高清图像所在的第一视平面至全景图像所在的第二视平面的变换关系,进而云端可以采用各视点下的变换关系,将全景图像的各部分投影至第一视平面,以得到不同视角的图像。In S401, when the cloud adopts panoramic image mosaic technology to obtain panoramic images under different viewpoints, it can obtain the transformation relationship between the first viewing plane where the low-overlap high-definition image is located at each viewpoint and the second viewing plane where the panoramic image is located, and then the cloud Each part of the panoramic image can be projected onto the first viewing plane by using the transformation relationship at each viewpoint, so as to obtain images of different viewing angles.
在一种实施例中,云端可以按照全景图像从左至右、从上至下的顺序,依次将第二预设尺寸的滑动窗口内的部分全景图像投影至第一视平面,得到多个视角的图像。其中,每个视角的图像为高清图像,每个视角的图像的尺寸相同,即第二预设尺寸,如每个视角的图像具有第二预设宽度和第二预设高度。在一种实施例中,同一视点下全景图像对应的相邻视角的图像之间的重叠率大于第一重叠率,也就是说,云端控制滑动窗口每次滑动时可以与滑动窗口上一位置之间的重叠率保持大于第一重叠率,以得到各视点下全景图像对应的相邻视角的图像。In one embodiment, the cloud may, following the panoramic image from left to right and from top to bottom, sequentially project the partial panoramic image within the sliding window of the second preset size onto the first viewing plane to obtain images of multiple viewing angles. Each image of a viewing angle is a high-definition image, and the images of all viewing angles have the same size, namely the second preset size, for example a second preset width and a second preset height. In one embodiment, the overlap ratio between the images of adjacent viewing angles corresponding to the panoramic image under the same viewpoint is greater than the first overlap ratio; that is to say, each time the sliding window slides, the cloud keeps the overlap ratio between the window and its previous position greater than the first overlap ratio, so as to obtain the images of adjacent viewing angles corresponding to the panoramic image under each viewpoint.
在S403中,在反投影变换过程中,将第二预设尺寸的滑动窗口内的部分全景图像投影至第一视平面时,可以得到部分全景图像上每个像素点至对应视角的图像上的像素点的一一映射关系,进而在该过程中云端可以得到对应视角的图像的中心点在全景图像中的坐标位置。在一种实施例中,中心点在全景图像中的坐标位置可以为经纬度坐标。In S403, during the back-projection transformation, when the partial panoramic image within the sliding window of the second preset size is projected onto the first viewing plane, a one-to-one mapping between each pixel of the partial panoramic image and the pixels of the image of the corresponding viewing angle can be obtained; in this process the cloud can therefore obtain the coordinate position, in the panoramic image, of the center point of the image of the corresponding viewing angle. In one embodiment, the coordinate position of the center point in the panoramic image may be a longitude-latitude coordinate.
应理解的是,图像的中心点可以理解为:图像的物理中心点。It should be understood that the center point of the image can be understood as: the physical center point of the image.
应理解,S402和S403之间没有先后顺序的区分,二者可以同时执行。It should be understood that there is no sequence distinction between S402 and S403, and the two can be executed simultaneously.
其中,S403如图5中的S3。Wherein, S403 is shown as S3 in FIG. 5 .
参照图6,示例性的,云端执行S403,可以将全景图像进行反投影变换,得到该视点对应的4个视角的图像,如图6中的d所示。Referring to FIG. 6 , for example, the cloud executes S403 to perform back-projection transformation on the panoramic image to obtain images of four viewing angles corresponding to the viewpoint, as shown in d in FIG. 6 .
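A rough sketch of this sliding-window back-projection is given below. For simplicity it assumes a single homography H mapping the original viewing plane to the panorama plane, whereas in the described embodiments the transformation relationship obtained during stitching in S401 would be used, and the centre coordinates could equally be expressed as longitude/latitude; the names and the 80% overlap value are illustrative only.

```python
import cv2

def extract_view_images(panorama, H, win_w, win_h, overlap=0.8):
    """Slide a window of the second preset size over the panorama from left to
    right and top to bottom, back-project each window to the original viewing
    plane, and record the window's centre point as its coordinate in the panorama."""
    step_x = max(1, int(win_w * (1 - overlap)))
    step_y = max(1, int(win_h * (1 - overlap)))
    views = []
    height, width = panorama.shape[:2]
    for y in range(0, height - win_h + 1, step_y):
        for x in range(0, width - win_w + 1, step_x):
            window = panorama[y:y + win_h, x:x + win_w]
            # Back-projection: apply H as an inverse map to return the window
            # to the original viewing plane.
            view_img = cv2.warpPerspective(window, H, (win_w, win_h),
                                           flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            centre = (x + win_w // 2, y + win_h // 2)  # centre point in the panorama
            views.append((view_img, centre))
    return views
```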
S404,云端根据各视点下每个视角的图像的中心点在全景图像中的坐标位置,建立每个视角的图像的中心点与图像块的第一索引关系。S404. The cloud establishes a first index relationship between the center point of the image of each angle of view and the image block according to the coordinate position of the center point of the image of each angle of view under each viewpoint in the panoramic image.
第一索引关系可以理解为:每个视角的图像对应全景图像中哪几个图像块,即建立每个视角的图像与图像块的标识映射关系。The first index relationship can be understood as: which image blocks in the panoramic image correspond to the images of each viewing angle, that is, to establish an identification mapping relationship between images of each viewing angle and image blocks.
在一种实施例中,因为每个视角的图像具有第二预设尺寸,在已经得知每个视角的图像的中心点在全景图像中的坐标位置的前提下,可以获取每个视角的图像对应的所有图像块。示例性的,如云端可以基于每个视角的图像的第二预设尺寸,以及每个视角的图像的中心点在全景图像中的坐标位置,确定每个视角的图像的四个顶点在全景图像中的坐标位置。进而,对于一个视角的图像来说,云端可以根据每个视角的图像四个顶点,以及中心点在全景图像中的位置坐标,确定该视角的图像在全景图像中对应的图像块。In one embodiment, because the image of each viewing angle has the second preset size, all image blocks corresponding to the image of each viewing angle can be obtained once the coordinate position of the center point of that image in the panoramic image is known. Exemplarily, based on the second preset size of the image of each viewing angle and the coordinate position of its center point in the panoramic image, the cloud can determine the coordinate positions of the four vertices of the image of each viewing angle in the panoramic image. Then, for the image of one viewing angle, the cloud can determine, from the four vertices of that image and the position coordinates of its center point in the panoramic image, the image blocks of the panoramic image to which that image corresponds.
在一种实施例中,云端可以存储第一索引关系。在第一索引关系中,可以以每个视角的图像的中心点在全景图像中的坐标位置表征每个视角的图像,以图像块的编号表征图像块,也就是说,第一索引关系中可以包括:每个视角的图像的中心点在全景图像中的坐标位置和图像块的编号的映射关系。In an embodiment, the cloud may store the first index relationship. In the first index relationship, the image of each perspective can be represented by the coordinate position of the center point of the image of each perspective in the panoramic image, and the image block can be represented by the number of the image block, that is to say, in the first index relationship can Including: the mapping relationship between the coordinate position of the center point of the image of each viewing angle in the panoramic image and the number of the image block.
示例性的,参照图6所示,如全景图像对应4个视角的图像,该4个视角的图像的中心点在全景图像中的坐标位置分别为(经1、纬1)、(经2、纬2)、(经3、纬3),以及(经4、纬4)。其中,(经1、纬1)的图像对应的图像块为:(行1、列1)、(行1、列2),(经2、纬2)的图像对应的图像块为:(行1、列3)、(行1、列4),(经3、纬3)的图像对应的图像块为:(行2、列1)、(行2、列2),(经4、纬4)的图像对应的图像块为:(行2、列3)、(行2、列4)。据此,云端存储的第一索引关系可以如表一所示:Exemplarily, referring to FIG. 6, suppose the panoramic image corresponds to images of four viewing angles whose center points are located in the panoramic image at (longitude 1, latitude 1), (longitude 2, latitude 2), (longitude 3, latitude 3), and (longitude 4, latitude 4). The image blocks corresponding to the image at (longitude 1, latitude 1) are (row 1, column 1) and (row 1, column 2); the image blocks corresponding to the image at (longitude 2, latitude 2) are (row 1, column 3) and (row 1, column 4); the image blocks corresponding to the image at (longitude 3, latitude 3) are (row 2, column 1) and (row 2, column 2); and the image blocks corresponding to the image at (longitude 4, latitude 4) are (row 2, column 3) and (row 2, column 4). Accordingly, the first index relationship stored in the cloud may be as shown in Table 1:
表一Table I
各视角的图像的中心点在全景图像中的坐标位置The coordinate position of the center point of the image of each viewing angle in the panoramic image 图像块image blocks
(经1、纬1)(longitude 1, latitude 1) (行1、列1)、(行1、列2)(row 1, column 1), (row 1, column 2)
(经2、纬2)(longitude 2, latitude 2) (行1、列3)、(行1、列4)(row 1, column 3), (row 1, column 4)
(经3、纬3)(longitude 3, latitude 3) (行2、列1)、(行2、列2)(row 2, column 1), (row 2, column 2)
(经4、纬4)(longitude 4, latitude 4) (行2、列3)、(行2、列4)(row 2, column 3), (row 2, column 4)
其中,S404如图5中的S4。Wherein, S404 is shown as S4 in FIG. 5 .
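The first index entry for one view image can be derived as in the following sketch, which assumes for simplicity that the centre point is expressed in panorama pixel coordinates (the text above uses longitude/latitude coordinates; the conversion is omitted) and reuses the illustrative 800px*900px block size; the function name is hypothetical:

```python
def blocks_for_view(centre, view_w, view_h, block_w=800, block_h=900):
    """Return the (row, column) identifiers of all panorama image blocks that a
    view image of the second preset size, centred at `centre`, overlaps."""
    cx, cy = centre
    left, top = cx - view_w / 2.0, cy - view_h / 2.0
    right, bottom = cx + view_w / 2.0, cy + view_h / 2.0
    first_col, last_col = int(left // block_w) + 1, int((right - 1) // block_w) + 1
    first_row, last_row = int(top // block_h) + 1, int((bottom - 1) // block_h) + 1
    return [(row, col)
            for row in range(first_row, last_row + 1)
            for col in range(first_col, last_col + 1)]
```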
S405,云端获取各视点下每个视角的图像的特征,以建立每个视角的图像的特征与每个视角的图像的中心点在全景图像中的坐标位置的第二索引关系。S405. The cloud acquires the feature of the image of each viewing angle under each viewing point, so as to establish a second index relationship between the feature of the image of each viewing angle and the coordinate position of the center point of the image of each viewing angle in the panoramic image.
对于每个视点下的每个视角的图像来说,云端可以获取每个视角的图像的特征。在一种实施例中,每个视角的图像的特征体现为特征向量,如特征向量可以为2048维的特征向量。也就是说,云端可以获取各视点下每个视角的图像的特征向量。For the images of each viewing angle under each viewing point, the cloud can obtain the features of the images of each viewing angle. In an embodiment, the feature of the image of each viewing angle is embodied as a feature vector, for example, the feature vector may be a 2048-dimensional feature vector. That is to say, the cloud can obtain feature vectors of images of each view angle under each view point.
在一种实施例中,云端可以采用神经网络模型,提取每个视角的图像的特征。示例性的,神经网络模型可以包括但不限于为:卷积神经网络(convolutional neural networks,CNN)、循环神经网络(recurrent neural network,RNN)和长短期记忆(long short-term memory,LSTM)。In one embodiment, the cloud may use a neural network model to extract features of images of each viewing angle. Exemplarily, the neural network model may include but not limited to: convolutional neural networks (convolutional neural networks, CNN), recurrent neural networks (recurrent neural network, RNN) and long short-term memory (long short-term memory, LSTM).
本申请实施例中,云端可以获取每个视角的图像的特征,进而可以根据每个视角的图像的特征,建立每个视角的图像的特征和每个视角的中心点在全景图像中的坐标位置的第二索引关系。在一种实施例中,云端可以存储第二索引关系,在第二索引关系中,可以以每个视角的图像的中心点在全景图像中的坐标位置表征每个视角的图像的中心点,以每个视角的图像的特征向量表征每个视角的图像的特征,也就是说,第二索引关系中可以包括:每个视角的中心点在全景图像中的坐标位置和每个视角的图像的特征的映射关系。In the embodiments of the present application, the cloud can acquire the feature of the image of each viewing angle, and then, based on these features, establish the second index relationship between the feature of the image of each viewing angle and the coordinate position of the center point of each viewing angle in the panoramic image. In one embodiment, the cloud may store the second index relationship, in which the center point of the image of each viewing angle is represented by its coordinate position in the panoramic image and the feature of the image of each viewing angle is represented by its feature vector; that is, the second index relationship may include the mapping relationship between the coordinate position of the center point of each viewing angle in the panoramic image and the feature of the image of that viewing angle.
在一种实施例中,云端还可以根据第一索引关系和第二索引关系,得到第三索引关系。如第三索引关系为:每个视角的图像的特征和图像块的编号的映射关系。也就是说,云端可以以每个视角的中心点在全景图像中的坐标位置,对第一索引关系和第二索引关系进行合并,将具有相同坐标位置的中心点的图像的特征和图像块的编号进行映射,得到第三索引关系。In one embodiment, the cloud can also obtain a third index relationship from the first index relationship and the second index relationship. For example, the third index relationship is the mapping relationship between the feature of the image of each viewing angle and the numbers of the image blocks. That is to say, the cloud can merge the first index relationship and the second index relationship by the coordinate position of the center point of each viewing angle in the panoramic image, mapping the feature of the image whose center point has a given coordinate position to the image block numbers associated with that same coordinate position, so as to obtain the third index relationship.
应理解,S404和S405之间没有先后顺序的区分,二者可以同时执行。It should be understood that there is no sequence distinction between S404 and S405, and the two can be executed simultaneously.
其中,S405如图5中的S5。Wherein, S405 is shown as S5 in FIG. 5 .
综上,在一种实施例中,云端中可以存储各视点下的多个图像块、第一索引关系和第二索引关系。或者,在一种实施例中,云端中可以存储各视点下的多个图像块和第三索引关系。如此,相较于现有技术中云端存储有各视点下不同视角的高清图像的方式,可以减少存储开销。To sum up, in one embodiment, multiple image blocks under each viewpoint, the first index relationship and the second index relationship may be stored in the cloud. Alternatively, in an embodiment, multiple image blocks and the third index relationship under each viewpoint may be stored in the cloud. In this way, compared with the prior art in which high-definition images of different viewing angles are stored in the cloud, storage overhead can be reduced.
基于云端中存储的内容的相关介绍,下面结合具体的实施例对本申请实施例提供的图像处理方法进行说明。下面这几个实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。图7为本申请实施例提供的图像处理方法的一种实施例的流程示意图。应理解,图7中以终端和云端交互的角度为例进行说明。Based on the relevant introduction of the content stored in the cloud, the image processing method provided by the embodiment of the present application will be described below in combination with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. FIG. 7 is a schematic flow chart of an embodiment of an image processing method provided in an embodiment of the present application. It should be understood that in FIG. 7, the perspective of interaction between the terminal and the cloud is taken as an example for illustration.
参照图7,本申请实施例提供的图像处理方法可以包括:Referring to FIG. 7, the image processing method provided by the embodiment of the present application may include:
S701,终端响应于拍照指令,拍照得到第一图像。S701. The terminal responds to a photographing instruction to obtain a first image by photographing.
在一种实施例中,拍照指令可以为用户操作终端显示的拍照界面触发的指令,示例性的,如拍照界面上包括拍照控件,用户操作该拍照控件可以触发向终端输入拍照指令。在一种实施例中,拍照指令可以为用户语音触发的,如用户说出“拍照”,可以触发向终端输入拍照 指令。在一种实施例中,用户还可以按照自定义的方式或者操作其他快捷按键的方式触发向终端输入拍照指令,本申请实施例对用户触发拍照指令的方式不做限制。In one embodiment, the photographing instruction may be an instruction triggered by the user operating the photographing interface displayed on the terminal. For example, if the photographing interface includes a photographing control, the user's operation of the photographing control may trigger the input of the photographing instruction to the terminal. In one embodiment, the photographing instruction can be triggered by the user's voice, for example, the user says "photographing", which can trigger the input of the photographing instruction to the terminal. In one embodiment, the user can also trigger the input of a photographing instruction to the terminal in a customized manner or by operating other shortcut keys, and the embodiment of the present application does not limit the manner in which the user triggers the photographing instruction.
终端响应于拍照指令,可以拍照得到第一图像。The terminal may take a photo to obtain the first image in response to the photo-taking instruction.
图8为本申请实施例提供的拍照界面的一种变化示意图。图8中的a所示的为拍照界面,拍照界面上包括预览框81、拍摄控件82以及倍率调整条83。用户调整倍率调整条83,可以改变终端的拍照倍率,示例性的,以用户将拍照倍率调整为30为例进行说明。用户点击拍摄控件82,相应的终端响应于拍照指令,可以以拍照倍率30拍照得到第一图像。FIG. 8 is a schematic diagram of a variation of the camera interface provided by the embodiment of the present application. Shown in a in FIG. 8 is the photographing interface, which includes a preview frame 81 , a photographing control 82 and a magnification adjustment bar 83 . The user adjusts the magnification adjustment bar 83 to change the photographing magnification of the terminal. As an example, the user adjusts the photographing magnification to 30 as an example for illustration. The user clicks on the shooting control 82, and the corresponding terminal responds to the shooting instruction and can take a picture at a shooting magnification of 30 to obtain a first image.
S702,终端向云端发送第一图像。S702. The terminal sends the first image to the cloud.
S703,云端获取第一图像的特征。S703. The cloud acquires features of the first image.
云端获取第一图像的特征的方式可以参照S405中云端获取各视点下每个视角的图像的特征的相关描述。For the manner in which the cloud acquires the features of the first image, reference may be made to the description in S405 that the cloud acquires the features of the images of each viewing angle under each viewpoint.
S704,云端获取各视点下每个视角的图像的特征和第一图像的特征的相似度。S704, the cloud obtains the similarity between the feature of the image of each viewing angle under each viewing point and the feature of the first image.
In one embodiment, the cloud can compute the cosine angle or the Euclidean distance between the feature of the image of each viewing angle under each viewpoint and the feature of the first image, so as to obtain the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image. The smaller the cosine angle or the Euclidean distance, the greater the similarity.
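A minimal sketch of the similarity measures mentioned here is given below; the cosine angle and the Euclidean distance are standard formulas, and the small epsilon added to avoid division by zero is an implementation assumption.

```python
import numpy as np

def cosine_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle (radians) between two feature vectors; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two feature vectors; smaller means more similar."""
    return float(np.linalg.norm(a - b))

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """One possible similarity score: cosine similarity in [-1, 1]; larger is more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```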
在一种实施例中,为了减少云端的相似度计算量,终端在向云端发送第一图像时,可以上传终端的位置,即上述S702可以替换为:终端向云端发送第一图像以及终端的位置。相应的,S704可以替换为:云端获取处于距离终端的位置的预设范围内的视点下每个视角的图像的特征和第一图像的特征的相似度。在一种实施例中,可以将距离终端的位置的预设范围内的视点称为目标视点。In one embodiment, in order to reduce the amount of similarity calculation in the cloud, when the terminal sends the first image to the cloud, it can upload the location of the terminal, that is, the above S702 can be replaced by: the terminal sends the first image and the location of the terminal to the cloud . Correspondingly, S704 may be replaced by: the cloud obtains the similarity between the feature of the image of each viewing angle and the feature of the first image at a viewpoint within a preset range from the position of the terminal. In an embodiment, a viewpoint within a preset range from the location of the terminal may be referred to as a target viewpoint.
In this embodiment, because the cloud stores the features of the images of the viewing angles under different viewpoints, the cloud can first determine, based on the location of the terminal, the viewpoints within a preset distance range of that location; these viewpoints can be understood as the target viewpoints. The cloud can then obtain the similarity between the feature of the image of each viewing angle under the target viewpoints and the feature of the first image, which avoids computing feature similarities over all viewpoints and improves the computing efficiency of the cloud.
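The viewpoint pre-filtering described above could be sketched as follows; the haversine distance, the 500 m preset range and the viewpoint record layout are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two GPS positions."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_target_viewpoints(terminal_pos, viewpoints, preset_range_m=500.0):
    """Keep only viewpoints within the preset range of the terminal position,
    so that similarities are computed for these viewpoints only."""
    lat0, lon0 = terminal_pos
    return [vp for vp in viewpoints
            if haversine_m(lat0, lon0, vp["lat"], vp["lon"]) <= preset_range_m]
```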
In one embodiment, S702-S708 may be performed regardless of the photographing magnification used by the terminal. In one embodiment, when the terminal sends the first image and the location of the terminal to the cloud, it may also send the first magnification. Correspondingly, the cloud executes S703-S708 in response to the first magnification being greater than or equal to the preset magnification; in response to the first magnification being smaller than the preset magnification, the cloud may skip S703-S708, because the terminal itself can already obtain a high-definition first image, thereby saving computing resources in the cloud.
在一种实施例中,因为终端采用低倍率可以得到高清图像,这时无需终端和云端交互获取高清图像,因此在终端使用高倍率拍照的场景中,终端可以执行S702。在该实施例中,S701可以替换为:响应于拍照指令,以第一倍率进行拍照得到第一图像,第一倍率大于预设倍率。相应的,S702可以替换为:终端响应于第一倍率大于预设倍率,向云端发送第一图像以及终端的位置。In one embodiment, because the terminal can obtain high-definition images with low magnification, there is no need for the terminal to interact with the cloud to obtain high-definition images. Therefore, in the scene where the terminal takes pictures at high magnification, the terminal can perform S702. In this embodiment, S701 may be replaced by: in response to the photographing instruction, photographing at a first magnification to obtain a first image, where the first magnification is greater than a preset magnification. Correspondingly, S702 may be replaced by: the terminal sends the first image and the location of the terminal to the cloud in response to the first magnification being greater than the preset magnification.
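A terminal-side sketch of this magnification check might look as follows; the preset magnification value, the callback name upload_fn and the exact comparison (greater than versus greater than or equal to) are assumptions for illustration, since the embodiments describe both variants.

```python
PRESET_MAGNIFICATION = 10.0   # assumed threshold, for illustration only

def on_photo_taken(first_image, zoom_ratio, terminal_location, upload_fn):
    """Only at high magnification does the terminal ask the cloud for enhancement;
    at low magnification the locally captured image is already sharp enough."""
    if zoom_ratio > PRESET_MAGNIFICATION:
        # Triggers S703-S708 on the cloud; the second image is returned asynchronously.
        upload_fn(first_image, terminal_location, zoom_ratio)
        return None
    return first_image
```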
S705,云端根据最大相似度对应的特征,以及第二索引关系,确定最大相似度对应的特征映射的中心点在全景图像中的位置坐标。S705. The cloud determines the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image according to the feature corresponding to the maximum similarity and the second index relationship.
The second index relationship is a mapping between the coordinate position, in the panoramic image, of the center point of the image of each viewing angle and the feature of that image. After the cloud obtains the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image, it can determine the maximum similarity and then the feature corresponding to the maximum similarity. In one example, after the cloud obtains the similarity between the feature of the image of each viewing angle under the viewpoints within the preset distance range of the location of the terminal and the feature of the first image, it can likewise determine the maximum similarity and then the feature corresponding to the maximum similarity.
如此,云端可以根据存储的第二索引关系,以及最大相似度对应的特征,得到最大相似度对应的特征映射的中心点在全景图像中的位置坐标。In this way, the cloud can obtain the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image according to the stored second index relationship and the feature corresponding to the maximum similarity.
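For illustration, the retrieval step of S705 could be sketched as below: the query feature of the first image is compared against all stored view features, the maximum-similarity view is selected, and its center-point coordinate is read from the second index relationship. Cosine similarity is used here as one possible similarity score; the array shapes are assumptions.

```python
import numpy as np

def locate_best_view(query_feature, view_features, view_center_coords):
    """Find the stored view feature most similar to the first image's feature,
    then return that view's center-point coordinate in the panorama.

    view_features:      (N, D) array of per-view feature vectors
    view_center_coords: list of N (longitude, latitude) center points (second index)
    """
    q = query_feature / (np.linalg.norm(query_feature) + 1e-12)
    v = view_features / (np.linalg.norm(view_features, axis=1, keepdims=True) + 1e-12)
    sims = v @ q                         # cosine similarity to every stored view
    best = int(np.argmax(sims))          # index of the maximum similarity
    return view_center_coords[best], float(sims[best])
```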
S706,云端根据最大相似度对应的特征映射的中心点在全景图像中的位置坐标,以及第一索引关系,确定中心点映射的图像块的标识。S706. The cloud determines the identity of the image block mapped by the center point according to the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image and the first index relationship.
The first index relationship is a mapping between the coordinate position, in the panoramic image, of the center point of the image of each viewing angle and the image blocks. Therefore, after the cloud obtains the position coordinates, in the panoramic image, of the center point corresponding to the feature with the maximum similarity, it can use those position coordinates together with the first index relationship to obtain the identifiers of the image blocks mapped from that center point, i.e., the identifiers of the image blocks mapped from the feature with the maximum similarity. In one embodiment, the identifier of the image block mapped from the feature with the maximum similarity may be referred to as the target identifier.
示例性的,如最大相似度对应的特征映射的中心点在全景图像中的位置坐标为(经1、纬1),则基于表一(第一索引关系),可以得到该(经1、纬1)映射的图像块的编号为(行1、列1)、(行1、列2)。Exemplarily, if the position coordinates of the center point of the feature map corresponding to the maximum similarity in the panoramic image are (longitude 1, latitude 1), then based on Table 1 (the first index relationship), the (longitude 1, latitude 1) can be obtained 1) The numbers of the mapped image blocks are (row 1, column 1), (row 1, column 2).
在一种实施例中,若云端中存储有第三索引关系,第三索引关系为:图像的特征和图像块的编号的映射关系。在该实施例中,云端得到最大相似度对应的特征后,可以根据第三索引关系,得到最大相似度对应的特征映射的图像块的标识。相应的,在该实施例中,S705和S706可以替换为:云端根据最大相似度对应的特征,以及第三索引关系,确定最大相似度对应的特征映射的图像块的标识。In an embodiment, if the third index relationship is stored in the cloud, the third index relationship is: a mapping relationship between image features and image block numbers. In this embodiment, after the cloud obtains the feature corresponding to the maximum similarity, it can obtain the identifier of the image block of the feature map corresponding to the maximum similarity according to the third index relationship. Correspondingly, in this embodiment, S705 and S706 may be replaced by: the cloud determines the identity of the image block of the feature map corresponding to the maximum similarity according to the feature corresponding to the maximum similarity and the third index relationship.
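A minimal sketch of the two lookup paths of S705/S706 follows: either the first index relationship is consulted with the matched center point, or, when the third index relationship is stored, the block numbers are read directly from the matched feature. The dictionary-based representations are assumptions for illustration.

```python
def blocks_from_center(center_coord, first_index):
    """First index relationship: center-point coordinate -> block numbers."""
    return first_index[center_coord]        # e.g. (1.0, 1.0) -> [(1, 1), (1, 2)]

def blocks_from_feature(best_view_id, third_index):
    """Third index relationship: go directly from the matched view to block numbers,
    skipping the intermediate center-point lookup (replacing S705 and S706)."""
    feature, blocks = third_index[best_view_id]
    return blocks
```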
S707,云端采用反投影变换,根据中心点映射的图像块,得到第二图像。S707. The cloud adopts back-projection transformation, and obtains the second image according to the image block mapped by the center point.
其中,中心点映射的图像块即目标标识对应的图像块。云端中存储有各视点下全景图像对应的多个图像块,在云端确定中心点映射的图像块的目标标识后,可以拼接目标标识对应的图像块,进而采用反投影变换,得到第二图像。其中,第二图像的清晰度高于第一图像的清晰度。在一种实施例中,第二图像的清晰度大于预设清晰度。Wherein, the image block mapped by the center point is the image block corresponding to the target identifier. The cloud stores multiple image blocks corresponding to the panorama image at each viewpoint. After the target identifier of the image block mapped by the center point is determined in the cloud, the image blocks corresponding to the target identifier can be spliced, and then back-projection transformation is used to obtain the second image. Wherein, the definition of the second image is higher than that of the first image. In an embodiment, the resolution of the second image is greater than the preset resolution.
其中,云端可以根据第一视平面和第二视平面的变换关系,将拼接后的中心点对应的图像块映射至第一视平面,以得到第二图像,即为终端拍摄的视平面的图像。Wherein, the cloud can map the image block corresponding to the spliced center point to the first viewing plane according to the transformation relationship between the first viewing plane and the second viewing plane, so as to obtain the second image, which is the image of the viewing plane captured by the terminal .
在一种实施例中,云端可以根据中心点映射的图像块的编号,对中心点映射的图像块进行拼接。示例性的,如(经1、纬1)映射的图像块的编号为(行1、列1)、(行1、列2),则云端可以按照行列顺序,拼接编号为(行1、列1)、(行1、列2)的图像块,得到第二图像。In an embodiment, the cloud may stitch the image blocks mapped with the center point according to the numbers of the image blocks mapped with the center point. Exemplarily, if the number of image blocks mapped by (longitude 1, latitude 1) is (row 1, column 1), (row 1, column 2), then the cloud can be spliced in the order of rows and columns, and the splicing number is (row 1, column 1), (row 1, column 2) image blocks to obtain the second image.
In one embodiment, when there is an overlapping region between image blocks, the cloud may let the overlapping region of the block at (row 1, column 1) cover the overlapping region of the block at (row 1, column 2) when splicing the blocks numbered (row 1, column 1) and (row 1, column 2). The cloud may determine the overlapping region of the block at (row 1, column 1) and the block at (row 1, column 2) according to the similarity of the pixels in the two blocks, for example taking a region with 100% similarity as the overlapping region.
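The splicing and back-projection of S707 could be sketched as below. The fixed-pixel overlap handling and the uniform block grid are simplifying assumptions; as described above, the embodiments instead determine the overlapping region from pixel similarity, and the homography standing in for the first-to-second viewing-plane transformation is assumed to have been stored when the panorama was built.

```python
import numpy as np
import cv2

def stitch_blocks(blocks, overlap_px=0):
    """Stitch blocks laid out as {(row, col): image} in row/column order.
    Earlier blocks drop the assumed fixed overlap so later blocks cover it."""
    rows = sorted({r for r, _ in blocks})
    cols = sorted({c for _, c in blocks})
    row_imgs = []
    for r in rows:
        imgs = [blocks[(r, c)] for c in cols]
        trimmed = [img[:, :-overlap_px] if overlap_px and i < len(imgs) - 1 else img
                   for i, img in enumerate(imgs)]
        row_imgs.append(np.hstack(trimmed))
    return np.vstack(row_imgs)

def back_project(stitched, homography, out_size):
    """Map the stitched region of the second (panorama) viewing plane back to the
    first (terminal) viewing plane using the stored plane-to-plane transform."""
    return cv2.warpPerspective(stitched, homography, out_size)
```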
S708,云端向终端发送第二图像。S708. The cloud sends the second image to the terminal.
相应的,终端接收来自云端的第二图像。Correspondingly, the terminal receives the second image from the cloud.
S709,终端响应于图像显示指令,显示第二图像。S709. The terminal displays the second image in response to the image display instruction.
终端接收到来自云端的第二图像后,可以基于用户的操作,显示第二图像。或者,终端接收到来自云端的第二图像,可以显示第二图像。After receiving the second image from the cloud, the terminal may display the second image based on the user's operation. Alternatively, the terminal may display the second image after receiving the second image from the cloud.
在一种实施例中,图像显示指令可以为用户操作拍照界面触发的指令。如拍照界面上包括图像显示控件,用户操作该图像显示控件可以触发向终端输入图像显示指令。在一种实施 例中,图像显示指令还可以为用户语音触发的,本申请实施例对终端接收拍照指令的方式不做限制。In an embodiment, the image display instruction may be an instruction triggered by the user operating the camera interface. For example, the camera interface includes an image display control, and the user's operation of the image display control can trigger the input of an image display instruction to the terminal. In one embodiment, the image display instruction may also be triggered by the user's voice, and the embodiment of the present application does not limit the way the terminal receives the photographing instruction.
示例性的,参照图8中的a,用户点击拍照控件82后,终端和云端交互执行S701-S708,在终端接收到来自云端的第二图像后,可以将第二图像存储至本地图像数据库(如相册)中。如图8中的b所示,拍照界面上包括图像显示控件84,用户点击图像显示控件84,终端可以显示高清晰度的第二图像,如图8中的c所示。与上述图1中的b不同的是,终端使用第一倍率(第一倍率为高倍率)拍照时,拍照得到的图像的清晰度高,如用户可以清楚地在第二图像中看到电脑屏幕上的文字。Exemplarily, referring to a in FIG. 8, after the user clicks the camera control 82, the terminal and the cloud interact to execute S701-S708, and after the terminal receives the second image from the cloud, the second image can be stored in the local image database ( such as a photo album). As shown in b in FIG. 8 , the camera interface includes an image display control 84 , and the user clicks on the image display control 84 , and the terminal can display a high-definition second image, as shown in c in FIG. 8 . The difference from b in Figure 1 above is that when the terminal uses the first magnification (the first magnification is a high magnification) to take pictures, the image obtained by taking pictures has high definition, for example, the user can clearly see the computer screen in the second image on the text.
在一种实施例中,图7所示的步骤S701-S709可以简化为图9所示。In an embodiment, steps S701-S709 shown in FIG. 7 can be simplified as shown in FIG. 9 .
In this embodiment of the application, when the terminal takes a picture at a high magnification, it can send the captured first image to the cloud. The cloud determines the image with the maximum similarity according to the feature of the first image and its similarity to the stored images of the multiple viewing angles under each viewpoint, obtains the image blocks corresponding to the first image based on the first index relationship and the second index relationship, and then splices those image blocks and applies back-projection transformation to obtain a high-definition second image. The terminal can thus display the high-definition second image, achieving the purpose of obtaining a high-definition image when photographing at a high magnification. On the other hand, because the cloud stores the multiple image blocks under each viewpoint together with the first index relationship and the second index relationship, or the multiple image blocks under each viewpoint together with the third index relationship, the storage overhead of the cloud can be reduced compared with the prior-art approach of storing high-definition images of different viewing angles under each viewpoint in the cloud.
In the embodiment shown in FIG. 7, the cloud stores the image blocks corresponding to the panoramic image under each viewpoint together with the first index relationship and the second index relationship, or the image blocks corresponding to the panoramic image under each viewpoint together with the third index relationship. In another embodiment, the cloud may instead store the panoramic image under each viewpoint. Exemplarily, the panoramic images under the viewpoints stored in the cloud may be as shown in FIG. 10; it should be understood that in FIG. 10 the different shapes (such as black rectangles and black triangles) represent the panoramic images under different viewpoints.
In this embodiment, following the description of S701-S706 above, the cloud can determine the image blocks corresponding to the first image (that is, the numbers of the image blocks mapped from the feature with the maximum similarity). If the cloud stores the panoramic image under each viewpoint, the cloud can, according to the numbers of the image blocks corresponding to the first image, cut the blocks with those numbers out of the panoramic image under that viewpoint and obtain the second image through back-projection transformation. For example, the cloud may first load the panoramic image under that viewpoint, then cut out the image blocks whose numbers correspond to the first image, and obtain the second image through projection transformation.
For example, if the numbers of the image blocks corresponding to the first image are (row 1, column 1) and (row 1, column 2), the cloud can, according to those block numbers and the first preset size of the image blocks, cut the blocks numbered (row 1, column 1) and (row 1, column 2) out of the panoramic image and obtain the second image through projection transformation.
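A sketch of cutting the numbered blocks directly out of a stored panorama is shown below; the row/column-to-pixel indexing convention is an assumption for illustration.

```python
def cut_blocks_from_panorama(panorama, block_numbers, block_h, block_w):
    """Cut the numbered blocks out of a stored panorama.
    Block (row, col) is assumed to cover pixels
    [(row-1)*block_h : row*block_h, (col-1)*block_w : col*block_w]."""
    blocks = {}
    for (row, col) in block_numbers:
        y0, x0 = (row - 1) * block_h, (col - 1) * block_w
        blocks[(row, col)] = panorama[y0:y0 + block_h, x0:x0 + block_w]
    return blocks
```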
Compared with the present embodiment, in the embodiment shown in FIG. 7 the cloud stores the image blocks corresponding to the panoramic images, so once the cloud determines the numbers of the image blocks corresponding to the first image it can directly load those blocks for splicing, back-projection transformation and so on. In the present embodiment, because the cloud stores the panoramic images themselves, after determining the numbers of the image blocks corresponding to the first image the cloud must first load the panoramic image to which those numbers belong and then cut the corresponding blocks out of it. Since loading image blocks is much faster than loading an entire panoramic image, the loading efficiency of the cloud in the embodiment shown in FIG. 7 is higher, and the second image can be fed back to the terminal more quickly.
With the image processing method provided in this embodiment of the application, the cloud may store the panoramic image under each viewpoint; after obtaining the numbers of the image blocks corresponding to the first image, the cloud can cut the corresponding blocks out of the panoramic image to which those numbers belong and obtain the second image through projection transformation. This method can likewise enable the terminal to obtain a high-definition image when photographing at a high magnification. However, compared with the approach of the embodiment shown in FIG. 7, in which the cloud loads image blocks and applies back-projection transformation, the cloud in this embodiment must load the entire panoramic image and then cut blocks out of it, so the loading time is longer, the loading efficiency is lower, and the efficiency of feeding the second image back to the terminal is relatively lower.
在一种实施例中,终端中可以存储各视点下的多个图像块、第一索引关系和第二索引关系,或者,终端中存储各视点下的多个图像块和第三索引关系,或者,终端中可以存储各视点下全景图像,在终端使用高倍率拍照得到第一图像时,终端可以执行S703-S707,以得到高清晰度的第二图像,进而终端可以响应于图像显示指令,显示第二图像。In an embodiment, the terminal may store multiple image blocks under each viewpoint, the first index relationship and the second index relationship, or, the terminal may store multiple image blocks under each viewpoint and the third index relationship, or , the terminal can store panoramic images at various viewpoints, and when the terminal uses a high-magnification camera to obtain the first image, the terminal can execute S703-S707 to obtain a high-definition second image, and then the terminal can respond to the image display command, display second image.
The above embodiments take the interaction between the terminal and the cloud as an example to illustrate scenarios in which the cloud processes an image from the terminal and scenarios in which the terminal processes an image it has captured. In summary, the method may be performed by an electronic device, which may be the cloud, a terminal, or another device with processing capabilities. Referring to FIG. 11, the image processing method provided in this embodiment of the application may further include:
S1101,获取待处理的第一图像。S1101. Acquire a first image to be processed.
当电子设备为云端时,云端获取待处理的第一图像的方式可以为:终端拍摄第一图像后发送至云端,可以参照S701-S702中的相关描述。在一种实施例中,也可以由用户将待处理的第一图像上传至云端,或者第一图像为云端本地存储的图像。When the electronic device is a cloud, the way for the cloud to acquire the first image to be processed may be: the terminal sends the first image to the cloud after taking the first image, and reference may be made to related descriptions in S701-S702. In an embodiment, the user may also upload the first image to be processed to the cloud, or the first image is an image locally stored in the cloud.
当电子设备为终端时,终端可以拍摄得到第一图像,或者第一图像可以作为终端本地存储的图像。When the electronic device is a terminal, the terminal may capture the first image, or the first image may be used as an image locally stored in the terminal.
当电子设备为其他具有处理能力的设备时,该设备可以拍摄得到第一图像,或者由用户上传第一图像至该设备,或者第一图像可以为该设备本地存储的图像,或者第一图像可以为来自其他电子设备传输的图像。When the electronic device is another device with processing capabilities, the device may capture the first image, or the user may upload the first image to the device, or the first image may be an image stored locally on the device, or the first image may be For images transmitted from other electronic devices.
本申请实施例中对电子设备获取待处理的第一图像的方式不做限制。In this embodiment of the present application, there is no limitation on the manner in which the electronic device acquires the first image to be processed.
S1102,获取各视点下每个视角的图像的特征和第一图像的特征的相似度。S1102. Obtain the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image.
电子设备执行S1102的步骤,可以参照S703-S704中的相关描述。For the electronic device to execute the step of S1102, reference may be made to related descriptions in S703-S704.
S1103,根据最大相似度对应的视角的图像的特征,以及映射关系,确定最大相似度对应的视角的图像的特征映射的目标标识。S1103. According to the feature of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship, determine the target identifier of the feature map of the image of the viewing angle corresponding to the maximum similarity.
在一种实施例中,映射关系可以为第三索引关系。第三索引关系为:图像的特征和图像块的编号的映射关系。在该实施例中,云端得到最大相似度对应的特征后,可以根据第三索引关系,得到最大相似度对应的特征映射的图像块的标识。In an embodiment, the mapping relationship may be a third index relationship. The third index relationship is: a mapping relationship between image features and image block numbers. In this embodiment, after the cloud obtains the feature corresponding to the maximum similarity, it can obtain the identifier of the image block of the feature map corresponding to the maximum similarity according to the third index relationship.
In one embodiment, the mapping relationship may include a first index relationship and a second index relationship. The first index relationship is a mapping between the coordinate position, in the panoramic image, of the center point of the image of each viewing angle and the image blocks; the second index relationship is a mapping between that coordinate position and the feature of the image. In this embodiment, after obtaining the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image (or between the features of the images of the viewing angles under the viewpoints within the preset distance range of the location of the terminal and the feature of the first image), the electronic device can determine the maximum similarity and then the feature corresponding to the maximum similarity. Further, the electronic device can obtain, from the stored second index relationship and the feature corresponding to the maximum similarity, the position coordinates in the panoramic image of the center point mapped from that feature, and then obtain, from those position coordinates and the first index relationship, the identifier of the image block mapped from that center point.
其中,中心点在全景图像中的位置坐标映射的图像块的标识即为目标标识。Wherein, the identifier of the image block mapped to the position coordinates of the central point in the panoramic image is the target identifier.
S1104,根据目标标识对应的图像块,获取第二图像,第二图像的清晰度大于第一图像的清晰度。S1104. Acquire a second image according to the image block corresponding to the target identifier, where the definition of the second image is greater than that of the first image.
在一种实施例中,电子设备可以拼接目标标识对应的图像块,获取第二图像。In an embodiment, the electronic device may stitch the image blocks corresponding to the target identifier to acquire the second image.
或者,在一种实施例中,电子设备可以采用S707中的方式处理目标标识对应的图像块,以获取第二图像。Or, in an embodiment, the electronic device may process the image block corresponding to the target identifier in the manner in S707, so as to acquire the second image.
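Putting S1101-S1104 together, an end-to-end sketch for a single electronic device could look as follows. All helper callables and data structures are assumptions supplied by the caller; the sketch only illustrates the order of the steps described above.

```python
import numpy as np

def enhance_image(first_image, extract_feature, view_features, view_centers,
                  first_index, load_block, rebuild_fn):
    """End-to-end sketch of S1101-S1104 on one electronic device.
    Assumed helpers:
      extract_feature(img) -> (D,) feature vector
      view_features        -> (N, D) stored per-view features
      view_centers         -> list of N center-point coordinates (second index relationship)
      first_index          -> {center coordinate: [block identifiers]} (first index relationship)
      load_block(block_id) -> image block pixels
      rebuild_fn(blocks)   -> spliced and back-projected second image
    """
    q = extract_feature(first_image)                                   # S1102: query feature
    v = view_features / (np.linalg.norm(view_features, axis=1, keepdims=True) + 1e-12)
    sims = v @ (q / (np.linalg.norm(q) + 1e-12))
    center = view_centers[int(np.argmax(sims))]                        # maximum similarity
    block_ids = first_index[center]                                    # S1103: target identifiers
    return rebuild_fn([load_block(b) for b in block_ids])              # S1104: second image
```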
电子设备在得到第二图像后,因为第二图像是基于视点下对应的图像块获取的,因此第二图像的清晰度高于第一图像的清晰度,因此可以实现电子设备对第一图像的处理,得到清晰度更高的图像的目的。After the electronic device obtains the second image, because the second image is acquired based on the corresponding image block under the viewpoint, the definition of the second image is higher than that of the first image, so the electronic device can realize the accuracy of the first image. processing to obtain a higher-resolution image.
在一种实施例中,电子设备在得到第二图像后,电子设备可以存储第二图像,或者将第二图像传输至其他电子设备,本申请实施例对第二图像的后处理不做限制。对于云端和终端交互的场景,云端可以将第二图像发送至终端进行显示和存储。In an embodiment, after the electronic device obtains the second image, the electronic device may store the second image, or transmit the second image to other electronic devices. This embodiment of the present application does not limit the post-processing of the second image. For the scene where the cloud and the terminal interact, the cloud can send the second image to the terminal for display and storage.
In this embodiment of the application, the electronic device stores the image blocks corresponding to the panoramic image under each viewpoint, and the mapping relationship between the feature of the image of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic image under that viewpoint; the image of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint are both obtained based on the panoramic image under that viewpoint, and the panoramic image under each viewpoint is a high-definition image. Compared with the prior-art approach of storing high-definition images of different viewing angles under each viewpoint, the storage overhead can be reduced. On this basis, the electronic device can further process the first image, whose definition is low, to obtain a second image with higher definition.
图12为本申请实施例提供的图像处理装置的一种结构示意图。图像处理装置可以为如上实施例中的云端、终端、电子设备,或者云端中的芯片,或终端中的芯片,或者电子设备中的芯片,用于实现本申请实施例提供的图像处理方法。其中,电子设备中存储有各视点下全景图像对应的图像块,以及所述各视点下每个视角的图像的特征和所述各视点下全景图像对应的图像块的标识的映射关系,所述各视点下每个视角的图像和所述各视点下全景图像对应的图像块均基于所述各视点下全景图像得到。FIG. 12 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. The image processing apparatus may be the cloud, terminal, or electronic device in the above embodiments, or a chip in the cloud, or a chip in the terminal, or a chip in the electronic device, and is used to implement the image processing method provided in the embodiment of the present application. Wherein, the electronic device stores the image blocks corresponding to the panoramic images under each viewpoint, and the mapping relationship between the features of the images of each viewpoint under the viewpoints and the identifiers of the image blocks corresponding to the panoramic images under each viewpoint, the The images of each viewing angle at each viewpoint and the image blocks corresponding to the panoramic images at each viewpoint are obtained based on the panoramic images at each viewpoint.
参照图12,图像处理装置1200包括:处理模块1201,存储模块1202、以及收发模块1203。Referring to FIG. 12 , an image processing device 1200 includes: a processing module 1201 , a storage module 1202 , and a transceiver module 1203 .
The processing module 1201 is configured to acquire a first image to be processed; acquire the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image; determine, according to the feature of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship, the target identifier mapped from the feature of the image of the viewing angle corresponding to the maximum similarity; and acquire a second image according to the image block corresponding to the target identifier, the definition of the second image being higher than that of the first image.
在一种可能的实现方式中,处理模块1201,具体用于获取所述第一图像,以及拍摄所述第一图像的位置,且确定距离该位置预设范围内的目标视点,以及获取所述目标视点下每个视角的图像的特征和所述第一图像的特征的相似度。In a possible implementation manner, the processing module 1201 is specifically configured to acquire the first image, and a location where the first image is taken, and determine a target viewpoint within a preset range from the location, and acquire the A degree of similarity between the feature of the image of each viewing angle under the target viewpoint and the feature of the first image.
In a possible implementation, the mapping relationship includes a first index relationship and a second index relationship. The second index relationship is a mapping between the feature of the image of each viewing angle under each viewpoint and the center point of the image of each viewing angle under each viewpoint; the first index relationship is a mapping between the center point of the image of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic image under each viewpoint.
处理模块1201,具体用于根据所述最大相似度对应的视角的图像的特征,以及所述第二索引关系,确定所述最大相似度对应的视角的图像的特征映射的中心点,以及根据所述最大相似度对应的视角的图像的特征映射的中心点,以及所述第一索引关系,确定所述目标标识。The processing module 1201 is specifically configured to determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity according to the feature of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, and according to the The center point of the feature map of the image of the viewing angle corresponding to the maximum similarity and the first index relationship are used to determine the target identifier.
In a possible implementation, the processing module 1201 is further configured to acquire, according to the panoramic image under each viewpoint, the feature of the image of each viewing angle under each viewpoint, the first index relationship, and the second index relationship.
存储模块1202,用于存储所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。The storage module 1202 is configured to store the feature of the image of each viewing angle under each viewing point, the first index relationship, and the second index relationship.
In a possible implementation, the processing module 1201 is specifically configured to apply back-projection transformation to the panoramic image under each viewpoint to obtain the images of the multiple viewing angles under each viewpoint and the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint, the overlap ratio between the images of adjacent viewing angles under each viewpoint being greater than a preset overlap ratio; and to extract the feature of the image of each viewing angle under each viewpoint.
In a possible implementation, the processing module 1201 is specifically configured to slide a sliding window having a second preset size over the panoramic image under each viewpoint and, using back-projection transformation, sequentially obtain the image of the viewing angle corresponding to the portion of the panoramic image within the sliding window and the coordinate position, in the corresponding panoramic image, of the center point of that image, the image of each viewing angle under each viewpoint having the second preset size.
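For illustration, one way to realize the sliding-window back-projection described here is to sample a perspective (rectilinear) view of the panoramic image around each window center, as in the sketch below; the equirectangular panorama format, the field of view and the output size standing in for the second preset size are assumptions made for this sketch.

```python
import numpy as np
import cv2

def perspective_view(pano, lon0, lat0, fov_deg=60.0, out_hw=(480, 640)):
    """Back-project the part of an equirectangular panorama around (lon0, lat0),
    given in radians, into a perspective view, and also return the center point's
    pixel position in the panorama."""
    H, W = pano.shape[:2]
    h, w = out_hw
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

    # Ray directions for every output pixel; camera frame: x right, y down, z forward.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    dirs = np.stack([u - w / 2.0, v - h / 2.0,
                     np.full_like(u, f, dtype=np.float64)], axis=-1)

    # Rotate rays by the view's latitude (pitch) and longitude (yaw).
    cp, sp = np.cos(lat0), np.sin(lat0)
    cy, sy = np.cos(lon0), np.sin(lon0)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(-d[..., 1])

    map_x = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * (H - 1)).astype(np.float32)
    view = cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)

    center_px = ((lon0 / (2 * np.pi) + 0.5) * (W - 1), (0.5 - lat0 / np.pi) * (H - 1))
    return view, center_px
```

Sliding the window then amounts to stepping lon0 and lat0 over a grid whose spacing keeps the overlap between neighbouring views above the preset overlap ratio.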
In a possible implementation, the processing module 1201 is specifically configured to construct the second index relationship according to the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint and the feature of the image of each viewing angle under each viewpoint.
In a possible implementation, the processing module 1201 is specifically configured to cut the panoramic image under each viewpoint to obtain the image blocks corresponding to the panoramic image under each viewpoint, and to construct the first index relationship according to the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint.
在一种可能的实现方式中,所述各视点下全景图像对应的图像块具有第一预设尺寸。In a possible implementation manner, the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
In a possible implementation, the processing module 1201 is further configured to obtain the panoramic image under each viewpoint by panoramic image stitching according to pre-collected images of multiple viewing angles under each viewpoint, the overlap ratio between the pre-collected images of adjacent viewing angles under each viewpoint being smaller than the preset overlap ratio.
In a possible implementation, the processing module 1201 is specifically configured to project the pre-collected image of each viewing angle under each viewpoint, located in a first viewing plane, onto a second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint and the transformation relationship between the first viewing plane and the second viewing plane.
在一种可能的实现方式中,处理模块1201,具体用于根据所述变换关系,采用反投影变换将所述目标标识对应的图像块投影至所述第一视平面,得到所述第二图像。In a possible implementation manner, the processing module 1201 is specifically configured to, according to the transformation relationship, use back projection transformation to project the image block corresponding to the target identifier to the first viewing plane to obtain the second image .
在一种可能的实现方式中,所述电子设备为云端,收发模块1203,用于接收来自终端的第一图像和所述终端拍摄所述第一图像时所述终端的位置,以及向所述终端发送所述第二图像。In a possible implementation manner, the electronic device is a cloud, and the transceiver module 1203 is configured to receive the first image from the terminal and the location of the terminal when the terminal captures the first image, and send a message to the The terminal sends the second image.
本申请实施例提供的图像处理装置用于执行如上实施例中的图像处理方法,具有与上述实施例相同的实现原理和技术效果。The image processing device provided in the embodiment of the present application is used to execute the image processing method in the above embodiment, and has the same implementation principle and technical effect as the above embodiment.
In one embodiment, an embodiment of this application further provides an electronic device. Referring to FIG. 13, the electronic device may be the cloud or the terminal in the above embodiments, or the electronic device described with respect to FIG. 11, and may include a processor (for example, a CPU) 1301 and a memory 1302. The memory 1302 may include a high-speed random-access memory (RAM) and may also include a non-volatile memory (NVM), for example at least one disk memory; the memory 1302 may store various instructions for completing various processing functions and implementing the method steps of this application.
在一种实施例中,电子设备中可以包括屏幕1303,用于显示电子设备的界面和图像等。In one embodiment, the electronic device may include a screen 1303 for displaying an interface and images of the electronic device.
可选的,本申请涉及的电子设备还可以包括:电源1304、通信总线1305以及通信端口1306。通信端口1306用于实现电子设备与其他外设之间进行连接通信。在本申请实施例中,存储器1302用于存储计算机可执行程序代码,程序代码包括指令;当处理器执行指令时,指令使电子设备的处理器执行上述方法实施例中的动作,其实现原理和技术效果类似,在此不再赘述。Optionally, the electronic device involved in this application may further include: a power supply 1304 , a communication bus 1305 and a communication port 1306 . The communication port 1306 is used to realize connection and communication between the electronic device and other peripheral devices. In this embodiment of the present application, the memory 1302 is used to store computer-executable program codes, and the program codes include instructions; when the processor executes the instructions, the instructions cause the processor of the electronic device to perform the actions in the above-mentioned method embodiments, and its implementation principles and The technical effects are similar, and will not be repeated here.
需要说明的是,上述实施例中所述的模块或部件可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个专用集成电路(application specific integrated circuit,ASIC),或,一个或多个微处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(field programmable gate array,FPGA)等。再如,当以上某个模块通过处理元件调度程序代码的形式实现时,该处理元件可以是通用处理器,例如中央处理器(central processing unit,CPU)或其它可以调用程序代码的处理器如控制器。再如,这些模块可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。It should be noted that the modules or components described in the above embodiments may be one or more integrated circuits configured to implement the above method, for example: one or more application specific integrated circuits (ASIC), or , one or more microprocessors (digital signal processor, DSP), or, one or more field programmable gate arrays (field programmable gate array, FPGA), etc. For another example, when one of the above modules is implemented in the form of a processing element scheduler code, the processing element can be a general-purpose processor, such as a central processing unit (central processing unit, CPU) or other processors that can call program codes such as control device. For another example, these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。In the above embodiments, all or part of them may be implemented by software, hardware, firmware or any combination thereof. When implemented using software, it may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part. A computer can be a general purpose computer, special purpose computer, computer network, or other programmable device. Computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, e.g. Coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.) to another website site, computer, server or data center. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media. Available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)).
本文中的术语“多个”是指两个或两个以上。本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系;在公式中,字符“/”,表示前后关联对象是一种“相除”的关系。另外,需要理解的是,在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。The term "plurality" herein means two or more. The term "and/or" in this article is just an association relationship describing associated objects, which means that there can be three relationships, for example, A and/or B can mean: A exists alone, A and B exist simultaneously, and there exists alone B these three situations. In addition, the character "/" in this paper generally indicates that the contextual objects are an "or" relationship; in the formula, the character "/" indicates that the contextual objects are a "division" relationship. In addition, it should be understood that in the description of this application, words such as "first" and "second" are only used for the purpose of distinguishing descriptions, and cannot be understood as indicating or implying relative importance, nor can they be understood as indicating or imply order.
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。It can be understood that the various numbers involved in the embodiments of the present application are only for convenience of description, and are not used to limit the scope of the embodiments of the present application.
可以理解的是,在本申请的实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。It can be understood that, in the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the order of execution, and the order of execution of the processes should be determined by their functions and internal logic, and should not be used in the implementation of this application. The implementation of the examples constitutes no limitation.

Claims (17)

  1. An image processing method, applied to an electronic device, wherein the electronic device stores image blocks corresponding to a panoramic image under each viewpoint and a mapping relationship between a feature of an image of each viewing angle under each viewpoint and identifiers of the image blocks corresponding to the panoramic image under each viewpoint, the image of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint both being obtained based on the panoramic image under each viewpoint, and the method comprises:
    获取待处理的第一图像;Obtain the first image to be processed;
    获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度;Obtaining the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image;
    根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识;According to the feature of the image of the angle of view corresponding to the maximum similarity, and the mapping relationship, determine the target identifier of the feature map of the image of the angle of view corresponding to the maximum similarity;
    根据所述目标标识对应的图像块,获取第二图像,所述第二图像的清晰度高于所述第一图像的清晰度。A second image is acquired according to the image block corresponding to the target identifier, and the definition of the second image is higher than that of the first image.
  2. 根据权利要求1所述的方法,其特征在于,所述获取待处理的第一图像,包括:The method according to claim 1, wherein said acquiring the first image to be processed comprises:
    获取所述第一图像,以及拍摄所述第一图像的位置;acquiring the first image, and a location where the first image was taken;
    所述获取所述各视点下每个视角的图像的特征和所述第一图像的特征的相似度,包括:The acquiring the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image includes:
    确定距离所述位置预设范围内的目标视点;determining a target viewpoint within a preset range from the position;
    获取所述目标视点下每个视角的图像的特征和所述第一图像的特征的相似度。Obtain the similarity between the feature of the image of each viewing angle under the target viewpoint and the feature of the first image.
  3. The method according to claim 1 or 2, wherein the mapping relationship comprises a first index relationship and a second index relationship, the second index relationship being a mapping between the feature of the image of each viewing angle under each viewpoint and the center point of the image of each viewing angle under each viewpoint, and the first index relationship being a mapping between the center point of the image of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic image under each viewpoint;
    所述根据最大相似度对应的视角的图像的特征,以及所述映射关系,确定所述最大相似度对应的视角的图像的特征映射的目标标识,包括:According to the feature of the image of the angle of view corresponding to the maximum similarity, and the mapping relationship, determining the target identifier of the feature map of the image of the angle of view corresponding to the maximum similarity includes:
    根据所述最大相似度对应的视角的图像的特征,以及所述第二索引关系,确定所述最大相似度对应的视角的图像的特征映射的中心点;According to the feature of the image of the viewing angle corresponding to the maximum similarity and the second index relationship, determine the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity;
    根据所述最大相似度对应的视角的图像的特征映射的中心点,以及所述第一索引关系,确定所述目标标识。The target identifier is determined according to the center point of the feature map of the image of the viewing angle corresponding to the maximum similarity and the first index relationship.
  4. 根据权利要求3所述的方法,其特征在于,所述获取待处理的第一图像之前,还包括:The method according to claim 3, wherein, before acquiring the first image to be processed, further comprising:
    根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系;According to the panoramic images under the various viewpoints, acquire the features of the images of each viewpoint under the respective viewpoints, the first index relationship, and the second index relationship;
    存储所述各视点下每个视角的图像的特征、所述第一索引关系,以及所述第二索引关系。The feature of the image of each viewing angle under each viewing point, the first index relationship, and the second index relationship are stored.
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述各视点下全景图像,获取所述各视点下每个视角的图像的特征,包括:The method according to claim 4, characterized in that, according to the panoramic image under each viewpoint, acquiring the features of the image of each viewpoint under each viewpoint includes:
    applying back-projection transformation to the panoramic image under each viewpoint to obtain the images of the multiple viewing angles under each viewpoint and the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint, an overlap ratio between the images of adjacent viewing angles under each viewpoint being greater than a preset overlap ratio;
    提取所述各视点下每个视角的图像的特征。Extracting features of images of each viewing angle under each viewing point.
  6. The method according to claim 5, wherein applying back-projection transformation to the panoramic image under each viewpoint to obtain the images of the multiple viewing angles under each viewpoint and the coordinate position, in the corresponding panoramic image, of the center point of the image of each viewing angle under each viewpoint comprises:
    sliding a sliding window having a second preset size over the panoramic image under each viewpoint and, by back-projection transformation, sequentially obtaining the image of the viewing angle corresponding to the partial panoramic image within the sliding window and the coordinate position, in the corresponding panoramic image, of the center point of the image of the viewing angle corresponding to the partial panoramic image, the image of each viewing angle under each viewpoint having the second preset size.
  7. 根据权利要求5或6所述的方法,其特征在于,获取所述第二索引关系,包括:The method according to claim 5 or 6, wherein obtaining the second index relationship comprises:
    根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下每个视角的图像的特征,构建所述第二索引关系。The second index relationship is constructed according to the coordinate position of the center point of the image of each angle of view under the various viewpoints in the corresponding panoramic image, and the characteristics of the image of each angle of view under the various viewpoints.
  8. 根据权利要求5-7中任一项所述的方法,其特征在于,获取所述第一索引关系,包括:The method according to any one of claims 5-7, wherein obtaining the first index relationship comprises:
    将所述各视点下全景图像进行切割,得到所述各视点下全景图像对应的图像块;Cutting the panoramic image under each viewpoint to obtain image blocks corresponding to the panoramic image under each viewpoint;
    根据所述各视点下每个视角的图像的中心点在对应的全景图像的坐标位置,以及所述各视点下全景图像对应的图像块,构建所述第一索引关系。The first index relationship is constructed according to the coordinate position of the center point of the image of each viewing angle under each viewpoint in the corresponding panoramic image, and the corresponding image block of the panoramic image under each viewpoint.
  9. 根据权利要求8所述的方法,其特征在于,所述各视点下全景图像对应的图像块具有第一预设尺寸。The method according to claim 8, wherein the image blocks corresponding to the panoramic images under each viewpoint have a first preset size.
  10. The method according to any one of claims 5-9, wherein before the obtaining, based on the panoramic image under each viewpoint, the feature of the image of each viewing angle under each viewpoint, the first index relationship, and the second index relationship, the method further comprises:
    obtaining, by using a panoramic image stitching technique, the panoramic image under each viewpoint from pre-collected images of multiple viewing angles under each viewpoint, wherein the overlap ratio between pre-collected images of adjacent viewing angles under each viewpoint is less than the preset overlap ratio.
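Claim 10 does not name a specific stitching technique. One readily available option, shown purely as a sketch, is OpenCV's high-level Stitcher; the file names below are placeholders, not paths from the application.

```python
import cv2

# Pre-collected images of multiple viewing angles under one viewpoint
# (placeholder file names for illustration only).
paths = ["view_00.jpg", "view_01.jpg", "view_02.jpg"]
images = [img for img in (cv2.imread(p) for p in paths) if img is not None]
if len(images) < 2:
    raise SystemExit("need at least two overlapping input views")

# One off-the-shelf panoramic image stitching technique: OpenCV's Stitcher.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("viewpoint_panorama.jpg", panorama)
else:
    print("stitching failed with status", status)
```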
  11. The method according to claim 10, wherein the obtaining, by using the panoramic image stitching technique, the panoramic image under each viewpoint from the pre-collected images of the multiple viewing angles under each viewpoint comprises:
    projecting the pre-collected image of each viewing angle under each viewpoint, which lies in a first viewing plane, onto a second viewing plane to which the panoramic image belongs, so as to obtain the panoramic image under each viewpoint and a transformation relationship between the first viewing plane and the second viewing plane.
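Claim 11 yields, alongside the panorama, a transformation relationship between the two viewing planes. One simple way to represent and estimate such a relationship, shown only as a sketch, is a 3x3 homography fitted from matched keypoints; the ORB/RANSAC pipeline is an illustrative assumption, not the claimed projection.

```python
import cv2
import numpy as np

def plane_transform(view_img, pano_img):
    """Estimate the transformation between the first viewing plane (a captured
    view) and the second viewing plane (the panorama) as a 3x3 homography.

    ORB keypoints are matched between the two images and a RANSAC homography
    is fitted; this estimator is an assumption for illustration only.
    """
    g1 = cv2.cvtColor(view_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(pano_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # transformation relationship between the two viewing planes
```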
  12. The method according to claim 11, wherein the obtaining a second image based on the image block corresponding to the target identifier comprises:
    projecting, according to the transformation relationship, the image block corresponding to the target identifier onto the first viewing plane by using a back-projection transformation, to obtain the second image.
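Continuing the homography assumption from the previous sketch, back-projecting the retrieved block to the first viewing plane amounts to warping it with the inverse transformation. A minimal sketch:

```python
import cv2
import numpy as np

def backproject_block(block_img, H_view_to_pano, out_size):
    """Project an image block from the panorama plane back to the first
    viewing plane by applying the inverse of the plane transformation.

    `H_view_to_pano` is assumed to be a 3x3 homography (first -> second
    viewing plane); `out_size` is (width, height) of the recovered view.
    """
    H_pano_to_view = np.linalg.inv(H_view_to_pano)
    return cv2.warpPerspective(block_img, H_pano_to_view, out_size,
                               flags=cv2.INTER_LINEAR)

# Example with a dummy block and an identity transform:
block = np.zeros((512, 512, 3), dtype=np.uint8)
H = np.eye(3)
second_image = backproject_block(block, H, (640, 480))
print(second_image.shape)  # (480, 640, 3)
```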
  13. The method according to claim 1, wherein the electronic device is a cloud, and the obtaining the first image and the location where the first image was captured comprises:
    receiving, from a terminal, the first image and the location of the terminal when the terminal captured the first image;
    wherein after the obtaining of the second image, the method further comprises:
    sending the second image to the terminal.
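The terminal/cloud exchange in claim 13 could, for example, run over a simple HTTP endpoint. The sketch below is an assumption about one possible interface: the endpoint path, the form field names, and the `enhance` placeholder are all hypothetical, and the real retrieval logic of claims 1-12 is not implemented here.

```python
import io

import cv2
import numpy as np
from flask import Flask, request, send_file

app = Flask(__name__)

def enhance(first_image, location):
    """Placeholder for the retrieval pipeline of claims 1-12 (assumption)."""
    return first_image

@app.route("/enhance", methods=["POST"])
def enhance_endpoint():
    # The terminal uploads the first image together with its shooting location.
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    first_image = cv2.imdecode(data, cv2.IMREAD_COLOR)
    location = (float(request.form["lat"]), float(request.form["lon"]))

    # The cloud obtains the second image and returns it to the terminal.
    second_image = enhance(first_image, location)
    ok, buf = cv2.imencode(".jpg", second_image)
    return send_file(io.BytesIO(buf.tobytes()), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(port=8000)
```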
  14. An image processing apparatus, wherein an electronic device stores image blocks corresponding to a panoramic image under each viewpoint, and a mapping relationship between the feature of the image of each viewing angle under each viewpoint and the identifiers of the image blocks corresponding to the panoramic image under each viewpoint, the image of each viewing angle under each viewpoint and the image blocks corresponding to the panoramic image under each viewpoint both being obtained based on the panoramic image under each viewpoint, and the apparatus comprises:
    a processing module, configured to:
    obtain a first image to be processed;
    obtain the similarity between the feature of the image of each viewing angle under each viewpoint and the feature of the first image;
    determine, based on the feature of the image of the viewing angle corresponding to the maximum similarity and the mapping relationship, a target identifier to which the feature of the image of the viewing angle corresponding to the maximum similarity is mapped; and
    obtain a second image based on the image block corresponding to the target identifier, wherein the definition of the second image is higher than that of the first image.
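For illustration, the retrieval steps of the processing module can be chained as below, reusing the kind of feature matrix and feature-to-block mapping assumed in the earlier sketches. Cosine similarity is an assumed measure; the claims do not fix one.

```python
import numpy as np

def retrieve_block_id(query_feature, stored_features, feature_to_block_id):
    """Sketch of the processing module's retrieval steps:
    1) similarities between the query feature and every stored feature,
    2) pick the maximum similarity,
    3) map the winning feature to its target block identifier.
    Cosine similarity is an assumption, not the claimed measure.
    """
    q = query_feature / (np.linalg.norm(query_feature) + 1e-12)
    s = stored_features / (np.linalg.norm(stored_features, axis=1, keepdims=True) + 1e-12)
    similarities = s @ q
    best = int(np.argmax(similarities))
    return feature_to_block_id[best], float(similarities[best])

# Dummy data: 5 stored viewing-angle features of dimension 1024.
stored = np.random.rand(5, 1024).astype(np.float32)
mapping = {0: 10, 1: 11, 2: 12, 3: 13, 4: 14}   # feature row -> block id
query = stored[3] + 0.01 * np.random.rand(1024).astype(np.float32)
block_id, score = retrieve_block_id(query, stored, mapping)
print(block_id, round(score, 3))  # most likely 13
```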
  15. An electronic device, comprising: a processor and a memory;
    wherein the memory stores computer-executable instructions; and
    the processor executes the computer-executable instructions stored in the memory, so that the processor performs the method according to any one of claims 1-13.
  16. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program or instructions which, when run, implement the method according to any one of claims 1-13.
  17. A computer program product, comprising a computer program or instructions which, when executed by a processor, implement the method according to any one of claims 1-13.
PCT/CN2022/138573 2022-01-28 2022-12-13 Image processing method and apparatus, and electronic device WO2023142732A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210109463.6A CN114627000A (en) 2022-01-28 2022-01-28 Image processing method and device and electronic equipment
CN202210109463.6 2022-01-28

Publications (1)

Publication Number Publication Date
WO2023142732A1 (en)

Family

ID=81899073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138573 WO2023142732A1 (en) 2022-01-28 2022-12-13 Image processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN114627000A (en)
WO (1) WO2023142732A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627000A (en) * 2022-01-28 2022-06-14 华为技术有限公司 Image processing method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566724A (en) * 2017-09-13 2018-01-09 维沃移动通信有限公司 A kind of panoramic picture image pickup method and mobile terminal
US20200177929A1 (en) * 2018-11-30 2020-06-04 Korea Electronics Technology Institute Method and apparatus for providing free viewpoint video
CN111815752A (en) * 2020-07-16 2020-10-23 展讯通信(上海)有限公司 Image processing method and device and electronic equipment
CN112989092A (en) * 2019-12-13 2021-06-18 华为技术有限公司 Image processing method and related device
CN114627000A (en) * 2022-01-28 2022-06-14 华为技术有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN114627000A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US10915998B2 (en) Image processing method and device
WO2019205852A1 (en) Method and apparatus for determining pose of image capture device, and storage medium therefor
WO2022012085A1 (en) Face image processing method and apparatus, storage medium, and electronic device
JP4196216B2 (en) Image composition system, image composition method and program
US20140211065A1 (en) Method and system for creating a context based camera collage
WO2023045147A1 (en) Method and system for calibrating binocular camera, and electronic device and storage medium
WO2021258579A1 (en) Image splicing method and apparatus, computer device, and storage medium
WO2010028559A1 (en) Image splicing method and device
KR100982192B1 (en) A Method for Geo-tagging of Pictures and Apparatus thereof
EP3506167B1 (en) Processing method and mobile device
US11044398B2 (en) Panoramic light field capture, processing, and display
WO2021136386A1 (en) Data processing method, terminal, and server
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
WO2020220832A1 (en) Method and apparatus for achieving projection picture splicing, and projection system
WO2017107855A1 (en) Picture searching method and device
WO2018113339A1 (en) Projection image construction method and device
WO2023142732A1 (en) Image processing method and apparatus, and electronic device
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
KR20220073824A (en) Image processing method, image processing apparatus, and electronic device applying the same
WO2023169283A1 (en) Method and apparatus for generating binocular stereoscopic panoramic image, device, storage medium, and product
WO2024007748A9 (en) Method for displaying thumbnail during photographing and electronic device
WO2022267939A1 (en) Image processing method and apparatus, and computer-readable storage medium
WO2018058476A1 (en) Image correction method and device
TWI676113B (en) Preview method and device in iris recognition process
CN114782296B (en) Image fusion method, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923520

Country of ref document: EP

Kind code of ref document: A1