WO2023193648A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023193648A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
feature
partial
target
Prior art date
Application number
PCT/CN2023/084967
Other languages
French (fr)
Chinese (zh)
Inventor
郭士嘉
龙良曲
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Publication of WO2023193648A1 publication Critical patent/WO2023193648A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks

Definitions

  • the present application relates to the field of image processing, and specifically relates to an image processing method, device, electronic equipment and storage medium.
  • a panoramic image is a wide-angle image that shows as much of the surrounding environment as possible through wide-angle expression in photos, videos, and other forms.
  • a panoramic image is an image that can cover 0 to 360° in the horizontal direction and 0 to 180° in the vertical direction.
  • the panoramic image needs to be recognized and processed to determine the shooting target in the panoramic image, so as to obtain an image with the shooting target as the focus.
  • Embodiments of the present application provide an image processing method, device, electronic device, and storage medium, which can quickly identify the shooting target in a panoramic image and alleviate the slow processing caused by the cumbersome existing recognition process.
  • embodiments of the present application provide an image processing method, including:
  • according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image are determined, and the position of the target feature element in the panoramic image is determined.
  • distortion correction processing is performed on the panoramic image to obtain a corrected panoramic image.
  • embodiments of the present application also provide an image processing device, including:
  • Acquisition unit used to acquire panoramic images
  • a segmentation processing unit configured to segment the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, wherein the degree of distortion of the first partial image is lower than that of the second partial image.
  • a feature recognition processing unit configured to perform feature recognition processing on the first partial map to obtain image features of the first partial map, wherein the image features of the first partial map are composed of a plurality of feature elements;
  • a first determination unit configured to determine a target feature element based on a plurality of the feature elements, and determine the position of the target feature element in the first partial map
  • a second determination unit configured to determine, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image, and determine the position of the target feature element in the panoramic image;
  • a correction unit configured to perform distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
  • embodiments of the present application also provide an electronic device, including a processor and a memory, where the memory stores a plurality of instructions; the processor loads instructions from the memory to execute the steps in any image processing method provided by the embodiments of the present application.
  • embodiments of the present application also provide a computer-readable storage medium that stores a plurality of instructions, where the instructions are suitable for loading by a processor to execute the steps in any image processing method provided by the embodiments of the present application.
  • a preset segmentation model is used to segment the panoramic image to obtain a first partial image with a low degree of distortion; feature recognition processing is then performed on the first partial image to determine the target feature element in the image features of the first partial image and the position of the target feature element in the first partial image.
  • according to the position change relationship corresponding to the preset segmentation model, the position of the target feature element in the panoramic image is determined, and distortion correction processing is performed on the panoramic image with that position as the reference, obtaining a corrected panoramic image.
  • in this way, the panoramic image is processed quickly through the preset segmentation model, and the problem of large distortion of some important scenery or people in the image is alleviated.
  • Figure 1 is a schematic scene diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • Figure 2a is a schematic diagram of a panoramic image segmented according to a preset segmentation model provided by an embodiment of the present application
  • Figure 2b is a schematic diagram of a sub-image obtained by extracting the first partial image provided by the embodiment of the present application;
  • Figure 3 is a schematic flowchart of a method for obtaining image features of a first partial image provided by an embodiment of the present application
  • Figure 4 is a schematic flowchart of a method for determining a target feature element and the position of the target feature element in the first partial diagram provided by an embodiment of the present application;
  • Figure 5 is a schematic flowchart of a method for obtaining a corrected panoramic image of the next frame provided by an embodiment of the present application
  • Figure 6 is a schematic diagram of the image processing method provided by the embodiment of the present application applied in a server scenario
  • Figure 7 is a first structural schematic diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • Embodiments of the present application provide an image processing method, device, electronic device, and storage medium.
  • the image processing device may be integrated into an electronic device, and the electronic device may be a terminal, a server, or other equipment.
  • the terminal can be a mobile phone, a tablet, a smart Bluetooth device, a laptop, or a personal computer (PC);
  • the server can be a single server or a server cluster composed of multiple servers.
  • the image processing device can also be integrated in multiple electronic devices.
  • the image processing device can be integrated in multiple servers, and the image processing method of the present application is implemented by multiple servers.
  • the server can also be implemented in the form of a terminal.
  • the electronic device can be a server, and the server is integrated with an image processing device.
  • the server in the embodiment of the present application is used to obtain a panoramic image; segment the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image; perform feature recognition processing on the first partial image to obtain image features of the first partial image, where the image features of the first partial image are composed of a plurality of feature elements; perform extraction processing on the feature elements, determine the target feature element among them, and determine the position of the target feature element in the first partial image; determine the position change relationship corresponding to the preset segmentation model; determine the position of the target feature element in the panoramic image based on its position in the first partial image and the position change relationship; and, based on the position of the target feature element in the panoramic image, perform distortion correction processing on the panoramic image to obtain a corrected panoramic image.
  • an image processing method is provided, as shown in Figure 2.
  • the specific process of the image processing method can be as follows:
  • A panoramic image is a kind of wide-angle image that can show as much of the surrounding environment as possible through wide-angle expression in photos, videos and other forms.
  • a panoramic image is an image that can surround 0 to 360° in the horizontal direction and 0 to 180° in the vertical direction. Panoramic images can be captured with an ordinary camera and then synthesized, or they can be captured directly with a panoramic camera.
  • the panoramic image obtained may be the current frame panoramic image extracted from the panoramic video, or it may be a pre-photographed panoramic image.
  • the acquired panoramic image may be a rectangular image with an aspect ratio of 2:1.
  • S120 Use a preset segmentation model to segment the panoramic image to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image.
  • the preset segmentation model may refer to dividing image areas with a high degree of image distortion and image areas with a low degree of image distortion in the panoramic image through machine learning or other methods, so as to extract image areas with a low degree of image distortion in the panoramic image.
  • image distortion refers to problems such as blurring, stretching, and deformation of characters or scenes in the image that cannot reflect the real situation of the characters or scenes during imaging.
  • the degree of distortion can be judged through human experience, or can be confirmed by comparing the coordinates of the distortion position with the coordinates under normal conditions.
  • Region division may refer to dividing the panoramic image into several regional parts according to the degree of distortion. In some embodiments, the panoramic image may be divided into two parts according to the degree of distortion.
  • the area on the panoramic image where the degree of distortion is lower than a preset degree of distortion is a low-distortion area, which is the area where the first partial image is located.
  • the area on the panoramic image where the degree of distortion is higher than the preset distortion level is a high-distortion area, which is the area where the second partial image is located.
  • the preset segmentation model may be set by dividing low-distortion areas and high-distortion areas of the panoramic image based on human experience.
  • the low-distortion area of the panoramic image can be the middle 80% of the image along the width direction, that is, the middle rectangular frame of the panoramic image shown in Figure 2a; this area lies in the middle of the panoramic image and has a good imaging effect. The high-distortion area of the panoramic image can be the 10% image areas at the upper and lower parts of the image, that is, the upper and lower rectangular frames of the panoramic image shown in Figure 2a; these areas lie at the upper and lower edges of the panoramic image and have a poorer imaging effect.
  • Panoramic images can be processed quickly through preset segmentation models.
  • the segmentation process can be to divide the panoramic image into several images with different degrees of distortion according to the division results of the preset segmentation model.
  • the preset segmentation model segments the panoramic image into low distortion areas and high distortion areas
  • the segmentation process is to divide the panoramic image into the first partial image in the low-distortion area and the second partial image in the high-distortion area.
  • the first partial image and the second partial image may be obtained through sliding window sampling, or may be extracted from the panoramic image by cutting.
  • the number of first partial images and the number of second partial images obtained may be arbitrary; there may be no overlapping area between the first partial image and the second partial image; and the first partial image and the second partial image may be stitched together to form the panoramic image.
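  • A minimal sketch of the kind of fixed-ratio segmentation described above, assuming NumPy arrays in height × width × channel layout; the 80%/10%/10% split and the 500 × 1000 example size are taken from the embodiment described later, and the function name split_panorama is illustrative rather than part of the patent.

```python
import numpy as np

def split_panorama(pano: np.ndarray, low_distortion_ratio: float = 0.8):
    """Split a panoramic image into a low-distortion middle band (first partial image)
    and high-distortion top/bottom bands (second partial image).

    The 80% / 10% / 10% split follows the example in the embodiment; in practice the
    ratio would come from the preset segmentation model."""
    h = pano.shape[0]
    margin = int(round(h * (1.0 - low_distortion_ratio) / 2))   # e.g. 10% of the height
    first_partial = pano[margin:h - margin]                     # middle band, lower distortion
    second_partial = (pano[:margin], pano[h - margin:])         # top and bottom bands, higher distortion
    return first_partial, second_partial

# Example: a 500 x 1000 panorama yields a 400 x 1000 first partial image,
# matching the sizes used later in the description.
pano = np.zeros((500, 1000, 3), dtype=np.uint8)
first, second = split_panorama(pano)
assert first.shape[0] == 400
```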
  • S130 Perform feature recognition processing on the first partial map to obtain image features of the first partial map, where the image features of the first partial map are composed of multiple feature elements.
  • the feature recognition process may refer to performing pixel-level image classification on the first partial image, and labeling to determine the object category to which each pixel in the first partial image belongs, thereby obtaining images with different object category labels.
  • object categories can include landscapes, buildings, and people.
  • the resulting images with different labels are image features.
  • the image features may include color features, texture features, shape features and spatial relationship features.
  • the image feature may be composed of multiple feature elements, wherein the feature elements constituting the image feature may be pixels.
  • A pixel is the basic encoding unit of the primary colors and their gray levels.
  • the pixel category is the pixel data type; different images have different pixel types, and for different pixel types different values need to be passed in the template parameters.
  • the data types of pixels include CV_32U, CV_32S, CV_32F, CV_8U, CV_8UC3, etc.
  • the scenery, buildings, and people in the first partial image can be identified, and by labeling and segmenting the scenery, buildings, and people in the first partial image, the background part and foreground part of the first partial image are determined.
  • the background part and foreground part of the first partial image can be represented in the form of a mask image, where the mask image presents the background and foreground of the first partial image (the foreground can include buildings and people) through images with different gray levels.
  • the method of obtaining the image features of the first local map includes:
  • the size information may include the length, width, and angular field of view of the first partial image.
  • the angular field of view (Field of View, FOV) is also referred to as the field of view in optical engineering.
  • the size of the field of view angle determines the visual range of the optical instrument.
  • the field of view angle of the panoramic image in the horizontal direction is 0 to 360°
  • the field of view angle in the vertical direction is 0 to 180°.
  • the size information of the first partial image can be determined by measuring the first partial image or by determining the segmentation ratio of the preset segmentation model.
  • a sliding window is a window that slides over the image data.
  • the value of a certain point is expanded to the area containing the point, and the area is used for judgment. This area is the window.
  • the sliding window is able to frame the time series according to the specified unit length, thereby calculating the statistical indicators within the frame. It is equivalent to sliding a slider with a specified length on the scale, and the image data in the slider can be fed back every time it slides one unit.
  • the size of the sliding window is the range of the sliding window for image segmentation on the first partial image.
  • the sliding window can be of any shape.
  • the sliding window can be a rectangular frame.
  • the size of the sliding window can include the length of the rectangular frame and width.
  • the sliding window may be a square frame, and the size of the sliding window may include the side length of the square frame.
  • the sliding direction of the sliding window is the direction in which the sliding window slides in the first partial graph, and the sliding direction of the sliding window can be arbitrary. In some embodiments, the sliding direction of the sliding window is sliding along the length direction of the first partial image.
  • the sliding step size of the sliding window is the distance of each sliding when the sliding window slides on the first partial graph.
  • the sliding step size of the sliding window can be set according to the size information of the first partial image. In some embodiments, when the sliding direction of the sliding window is along the length direction of the first partial image, the sliding step size of the sliding window is equal to the length of the first partial image divided by the number of times the sliding window slides.
  • S133 Segment the first partial image according to the size, sliding step and sliding direction of the sliding window to obtain multiple sub-images.
  • Segmentation processing refers to intercepting the image within the position of the sliding window from the first partial image.
  • Segmenting the first partial image may mean that, as the sliding window moves along the preset direction, the image inside the sliding window is captured each time the window slides one step, giving a separate image denoted as a sub-image; when the sliding window slides through n steps, n sub-images are obtained. The sub-images may be adjacent, separated, or partially overlapping.
  • the preset direction is along the length direction of the first partial image
  • the sliding window slides through two sliding steps, and there are two sub-images obtained.
  • the sum of the lengths of the two sub-images is greater than the length of the first partial image, that is, as shown in Figure 2b.
  • the first partial image located in the middle of the panoramic image is divided into two sub-images, left and right.
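  • A minimal sketch of the sliding-window sampling just described, assuming NumPy arrays; the 600-pixel window, 400-pixel step and the resulting 200-pixel overlap mirror the example given later in the embodiment, and sliding_window_subimages is an illustrative helper name, not part of the patent.

```python
import numpy as np

def sliding_window_subimages(partial: np.ndarray, win_w: int, step: int):
    """Slide a window of width win_w (full height) along the length of the first
    partial image and return the captured sub-images and their x offsets."""
    h, w = partial.shape[:2]
    subimages, offsets = [], []
    x = 0
    while True:
        subimages.append(partial[:, x:x + win_w])
        offsets.append(x)
        if x + win_w >= w:
            break
        x = min(x + step, w - win_w)   # keep the last window inside the image
    return subimages, offsets

# Example matching the embodiment: a 400 x 1000 first partial image, a 600-wide
# window and a 400 step give two sub-images that overlap by 200 pixels.
partial = np.zeros((400, 1000), dtype=np.uint8)
subs, offs = sliding_window_subimages(partial, win_w=600, step=400)
assert offs == [0, 400] and subs[0].shape == (400, 600)
```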
  • the scenery, buildings, and people in the sub-image can be identified, and by labeling and segmenting the scenery, buildings, and people in the sub-image, the background part and foreground part of the sub-image can be determined.
  • the background part and foreground part of the sub-image can be represented in the form of a mask image, where the mask image presents the background and foreground of the sub-image (the foreground can include buildings and people) through images with different gray levels.
  • feature recognition processing is performed on the sub-image
  • the method for determining the image features of the sub-image includes:
  • the sub-image is determined to be the target sub-image
  • the image characteristics of the target sub-image are determined.
  • the method of determining the image characteristics of the sub-image includes:
  • the mask map of the target sub-image determine the image characteristics of the target sub-image
  • the image characteristics of the sub-image are determined.
  • the preset shape refers to the shape that meets the input-image shape requirements of the encoding and decoding processing; for example, in the embodiment of the present application, the preset shape is a square. Comparing the shape of the sub-image with the preset shape means determining whether the shape of the sub-image is a square. When the shape of the sub-image is a square, the sub-image is determined to be the target sub-image; when the shape of the sub-image is not a square, the shape of the sub-image is adjusted so that it becomes a target sub-image that conforms to the preset shape.
  • the method of performing shape adjustment may include:
  • the sub-image is generally a rectangular image
  • the size information of the sub-image includes the length and width of the sub-image. Determining the size information of the sub-image can be obtained by measuring the sub-image.
  • the sub-image is resized.
  • resizing the sub-image means shortening the rectangular picture along the length direction or elongating it along the width direction, so that the length and width of the sub-image become the same; a target sub-image with equal length and width is obtained, and the feature recognition processing is then performed on it.
  • since the target sub-image has equal length and width, it can be conveniently processed by convolution and pooling, so that the features in the target sub-image can be better identified.
  • the encoding and decoding process may refer to feature extraction and upsampling of the target sub-image, thereby obtaining a mask map of the target sub-image.
  • the feature extraction of the target sub-image can be performed by performing convolution and pooling processing on the target sub-images with the same length and width; the up-sampling of the target sub-image can be performed by deconvolution of the target sub-image after feature extraction. , thereby obtaining the mask map of the target sub-image.
  • the mask image of the target sub-image includes a foreground part and a background part, where the gray value of the pixels in the foreground part is greater than the gray value of the pixels in the background part, and the foreground part is the image feature of the target sub-image.
  • the proportional relationship between the sub-image and the target sub-image refers to the transformation relationship between the target sub-image after shape adjustment and the sub-image before shape adjustment. According to this transformation relationship, the proportional relationship between the sub-image and the target sub-image can be directly determined, and the image features of the sub-image can then be quickly determined from the image features of the target sub-image.
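  • A minimal sketch of this shape adjustment and of the proportional mapping back, assuming OpenCV's cv2.resize is available; the 600 × 400 sub-image and 400 × 400 square come from the embodiment, and all function names are illustrative.

```python
import cv2
import numpy as np

def to_target_subimage(sub: np.ndarray, side: int = 400):
    """Resize a rectangular sub-image to a square target sub-image and return the
    scale factors that describe the proportional relationship between the two."""
    h, w = sub.shape[:2]
    target = cv2.resize(sub, (side, side), interpolation=cv2.INTER_LINEAR)
    return target, (w / side, h / side)

def mask_to_subimage(mask: np.ndarray, sub_shape):
    """Stretch a mask predicted on the square target sub-image back onto the sub-image."""
    h, w = sub_shape[:2]
    return cv2.resize(mask, (w, h), interpolation=cv2.INTER_LINEAR)

def point_to_subimage(pt, scale):
    """Map a point (x, y) found on the target sub-image back to sub-image coordinates."""
    return pt[0] * scale[0], pt[1] * scale[1]

# Example: a 400 x 600 sub-image is squeezed to 400 x 400 for encoding/decoding,
# and the resulting mask / feature positions are mapped back afterwards.
sub = np.zeros((400, 600), dtype=np.uint8)
target, scale = to_target_subimage(sub)
mask = np.zeros((400, 400), dtype=np.uint8)        # stand-in for the predicted mask
restored = mask_to_subimage(mask, sub.shape)
assert restored.shape == (400, 600)
print(point_to_subimage((200, 200), scale))        # -> (300.0, 200.0)
```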
  • the method of determining the image characteristics of the first partial map includes:
  • the image characteristics of the sub-image and the position of the sub-image in the first partial diagram are determined.
  • Determining the position of the sub-image in the first partial image may refer to determining the coordinate position of the sub-image on the first partial image.
  • a coordinate system can be established on the first local graph, the coordinate position of the sliding window after each sliding step is determined, and then the sub-image is determined based on the coordinate position after each sliding step. The coordinate position on the first partial map.
  • the image features in the overlapping area can be determined by determining the feature intensity of the overlapping image features in the overlapping area.
  • the characteristic intensity may be the pixel intensity, that is, the brightness value of the pixel.
  • the brightness value of the pixel is between 0 and 255. The brightness value close to 255 is high and the brightness close to 0 is low.
  • the sub-image includes a first sub-image and a second sub-image
  • the first sub-image and the second sub-image include an overlapping area that overlaps each other, where the image feature of the first sub-image in the overlapping area is the first image feature, and the image feature of the second sub-image in the overlapping area is the second image feature
  • the method of performing feature recognition processing on the first partial image and obtaining the image features of the first partial image also includes:
  • the first image feature or the second image feature is selected as the image feature of the first local map.
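  • A minimal sketch of how the first and second image features could be merged in the overlapping area, assuming NumPy arrays of gray values; keeping the stronger (brighter) response per pixel is one possible reading of the intensity comparison described above, not a statement of the patent's exact selection rule, and the offsets and sizes mirror the embodiment.

```python
import numpy as np

def merge_partial_mask(mask1, off1, mask2, off2, partial_w):
    """Paste two sub-image masks back onto the first partial image; in the overlapping
    area, keep whichever image feature has the higher pixel intensity."""
    h = mask1.shape[0]
    merged = np.zeros((h, partial_w), dtype=mask1.dtype)
    merged[:, off1:off1 + mask1.shape[1]] = mask1
    region = merged[:, off2:off2 + mask2.shape[1]]
    # element-wise comparison: the stronger (brighter) feature wins in the overlap
    merged[:, off2:off2 + mask2.shape[1]] = np.maximum(region, mask2)
    return merged

# Example matching the embodiment: two 400 x 600 masks at x = 0 and x = 400
# rebuild a 400 x 1000 mask for the first partial image.
m1 = np.zeros((400, 600), dtype=np.uint8)
m2 = np.zeros((400, 600), dtype=np.uint8)
full_mask = merge_partial_mask(m1, 0, m2, 400, partial_w=1000)
assert full_mask.shape == (400, 1000)
```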
  • S140 Determine the target feature element based on multiple feature elements, and determine the position of the target feature element in the first partial image.
  • Determining the position of the target feature element in the first partial image can be done by extracting the feature elements. Extracting the feature elements means determining the more significant feature elements based on the pixel values of the feature elements, where a more significant feature element refers to a target feature element whose pixel value is greater than a preset pixel threshold.
  • the pixel value can be the pixel intensity, and the pixel intensity is between 0 and 255.
  • the pixel threshold can be set manually; for example, the pixel threshold can be set to 150 or 200 as needed.
  • the target feature element is then a feature element whose pixel value is greater than 150 or 200.
  • a method for determining a target feature element based on multiple feature elements and determining the position of the target feature element in the first partial map includes:
  • the feature elements are the pixels that make up the image features, and the pixel value of each pixel can be determined, and the pixel value can be the gray value of the pixel.
  • the method of determining the pixel value of the feature element of the image feature in the first partial image may include obtaining the grayscale value of the pixel point through the impixel function.
  • Binarization processing refers to increasing the pixel value of the first feature element and reducing the pixel value of the second feature element based on the intensity comparison result, so that the difference between the first feature element and the second feature element is more obvious.
  • the gray value of a pixel ranges from 0 to 255.
  • Increasing the pixel value of the first feature element may mean adjusting the pixel value of the first feature element to 255, and decreasing the pixel value of the second feature element may mean adjusting the pixel value of the second feature element to 0.
  • the position of the target feature element in the first partial image can be determined from the positional relationship between the target feature element and the first partial image.
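  • A minimal sketch of the binarization and extraction step, assuming the mask is a NumPy array of gray values in 0–255; the 150 threshold follows the example above, and using the centroid of the binarized pixels as the representative position is an illustrative choice rather than the patent's prescribed one.

```python
import numpy as np

def locate_target_elements(mask: np.ndarray, pixel_threshold: int = 150):
    """Binarize the feature mask and return the target feature elements together with
    a representative position (centroid) in the first partial image."""
    binary = np.where(mask > pixel_threshold, 255, 0).astype(np.uint8)   # binarization
    ys, xs = np.nonzero(binary)              # coordinates of the target feature elements
    if len(xs) == 0:
        return binary, None                  # no target feature element found
    centroid = (float(xs.mean()), float(ys.mean()))   # illustrative position summary
    return binary, centroid

# Example: a synthetic mask with one bright foreground region.
mask = np.zeros((400, 1000), dtype=np.uint8)
mask[150:250, 300:500] = 200                 # simulated foreground feature
binary, pos = locate_target_elements(mask)
print(pos)                                    # roughly the center of the bright region
```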
  • S150 Determine the positional relationship between the first partial image and the panoramic image, as well as the position of the target feature element in the first partial image, and determine the position of the target feature element in the panoramic image.
  • Determining the positional relationship between the first partial image and the panoramic image means determining the size and position of the first partial image in the panoramic image when the panoramic image is segmented according to the preset segmentation model; since the segmentation model is preset, the positional relationship can be obtained directly and quickly from the corresponding position change relationship.
  • Determining the position of the target feature element in the panoramic image means that, since the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image are both determined, the position of the target feature element in the panoramic image can be quickly and accurately determined through coordinate conversion.
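  • A minimal sketch of that coordinate conversion, assuming the preset segmentation simply crops a fixed band so the conversion reduces to a constant offset; the 50-pixel vertical offset corresponds to a 400-high first partial image centered in a 500-high panorama, as in the embodiment, and the function name is illustrative.

```python
def partial_to_panorama(pt, crop_top: int, crop_left: int = 0):
    """Convert a point (x, y) in the first partial image to panoramic-image coordinates.

    crop_top / crop_left describe where the first partial image sits inside the
    panorama and come directly from the preset segmentation model."""
    x, y = pt
    return x + crop_left, y + crop_top

# Example matching the embodiment: the 400-high first partial image is the middle
# band of a 500-high panorama, so the vertical offset is 50 pixels.
print(partial_to_panorama((420.0, 180.0), crop_top=50))   # -> (420.0, 230.0)
```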
  • Distortion correction processing refers to changing the distance between the lens and the imaging surface when acquiring an image so that the image of the subject is clear. That is, the position of the target feature element in the panoramic image is used as the reference position for distortion correction processing, so that in the corrected panoramic image the position of the target feature element is the clearest and has the smallest distortion.
  • the correction method may include changing the focus point so that the position of the focus point is aligned with the location of the target feature element. Focus refers to changing the distance between the lens and the imaging surface to make the subject image clear when acquiring an image.
  • the focus point refers to the photographed object represented by the target feature element.
  • the image processing method of the present application also includes:
  • the next frame of panoramic image can be the next frame of panoramic image in the panoramic video, or the next frame of panoramic image during continuous shooting.
  • the next frame of panoramic image can be obtained by shooting, or a pre-stored panoramic image can be retrieved.
  • Performing distortion correction processing on the acquired next frame of panoramic image means shooting the next frame of panoramic image based on the corrected position of the target feature element of the currently captured panoramic image, and performing distortion correction processing on the next frame with the position of the target feature element of the corrected panoramic image, which is the clearest and least distorted position, as the reference, to obtain the corrected next frame of panoramic image.
  • FIG. 6 is a schematic flow chart of an image processing method applied in an experimental scenario according to an embodiment of the present invention.
  • the image processing method is applied to a server.
  • the image processing method includes:
  • the horizontal field of view angle of the panoramic image is 0 ⁇ 360°, and the vertical field of view angle is 0 ⁇ 180°.
  • S220 Segment the panoramic image to obtain a first partial image and a second partial image.
  • the degree of distortion of the first partial image is lower than that of the second partial image.
  • the panoramic image has a length of 1000 and a width of 500
  • the first partial image after cropping has a length of 1000 and a width of 400.
  • the first partial image is extracted through a sliding window to obtain two left and right sub-images, in which there is an overlapping area between the two sub-images.
  • the length of the first sub-image is 600 and the width is 400
  • the length of the second sub-image is 600 and the width is 400.
  • the length of the overlapping area between the first sub-image and the second sub-image is 200
  • width is 400.
  • S240 Perform feature recognition processing on the sub-image to obtain image features of the sub-image.
  • U²-Net is used to perform feature recognition processing on the sub-image.
  • the sub-image is input into the encoder-decoder network structure model to obtain 6 mask images with the same size as the input sub-image.
  • the intensities of the 6 mask images are averaged and output to obtain the mask map of the sub-image.
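  • A minimal sketch of that fusion step, assuming the encoder-decoder network (e.g. U²-Net) has already produced six same-size mask maps; only the intensity averaging is shown, and the network call itself is left as a placeholder.

```python
import numpy as np

def fuse_side_outputs(side_masks):
    """Average the six same-size mask maps produced by the encoder-decoder network
    (e.g. U^2-Net side outputs) into one mask map for the sub-image."""
    stacked = np.stack(side_masks, axis=0).astype(np.float32)
    return stacked.mean(axis=0)              # intensity-averaged mask map

# Example with six placeholder masks of the sub-image size used in the embodiment.
side_masks = [np.random.rand(400, 600).astype(np.float32) for _ in range(6)]
fused = fuse_side_outputs(side_masks)
assert fused.shape == (400, 600)
```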
  • S250 Determine the position of the sub-image in the first partial map, and determine the image features of the first partial map based on the position of the sub-image in the first partial map.
  • the image features of the first partial map are composed of multiple feature elements.
  • the coordinates of the four corners of the first partial image are (0,0), (1000,0), (1000,400), (0,400); from these, the coordinates of the four corners of the first sub-image can be determined to be (0,0), (600,0), (600,400), (0,400), and the coordinates of the four corners of the second sub-image are (400,0), (1000,0), (1000,400), (400,400).
  • from this, the positional relationship between the sub-images and the first partial image can be determined, and then, based on the positional relationship between the mask maps of the sub-images and the first partial image, the mask map used to represent the image features of the first partial image can be determined.
  • S270 Determine the position of the target feature element in the panoramic image according to the positional relationship between the first partial image and the panoramic image.
  • the positional relationship between the first partial image and the panoramic image is determined based on the segmentation ratio; since the segmentation ratio between the first partial image and the panoramic image is determined during the segmentation process, the position of the target feature element in the panoramic image can be determined quickly.
  • the panoramic image is segmented to quickly determine the first partial image with a lower degree of distortion
  • the target feature element is determined by performing feature recognition and extraction processing on the first partial image.
  • according to the segmentation ratio of the segmentation process, the position of the target feature element of the first partial image in the panoramic image is quickly determined, and distortion correction is performed on the panoramic image according to the position of the target feature element in the panoramic image to obtain the corrected panoramic image.
  • the speed of processing panoramic images can thus be increased, and the slow processing caused by the cumbersome existing recognition process can be alleviated.
  • the image processing device can be integrated in an electronic device.
  • the electronic device can be a terminal, a server, and other equipment.
  • the terminal can be a mobile phone, tablet computer, smart Bluetooth device, laptop, personal computer and other devices;
  • the server can be a single server or a server cluster composed of multiple servers.
  • the image processing device is specifically integrated in a server as an example to describe the method in the embodiment of the present application in detail.
  • the image processing device may include:
  • Acquisition unit 301 used to acquire panoramic images
  • the segmentation processing unit 302 is configured to segment the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, wherein the degree of distortion of the first partial image is lower than that of the second partial image;
  • the feature recognition processing unit 303 is configured to perform feature recognition processing on the first partial image to obtain image features of the first partial image, where the image features of the first partial image are composed of multiple feature elements;
  • the first determination unit 304 is configured to determine a target feature element based on multiple feature elements, and determine the position of the target feature element in the first partial image;
  • the second determination unit 305 is configured to determine, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image, and, according to that positional relationship and the position of the target feature element in the first partial image, determine the position of the target feature element in the panoramic image;
  • the correction unit 306 is configured to perform distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
  • the feature recognition processing unit 303 is also specifically used to:
  • determine the size, sliding step and sliding direction of the sliding window according to the size information of the first partial image;
  • the first partial image is segmented to obtain multiple sub-images
  • the image characteristics of the sub-image are determined.
  • the feature recognition processing unit 303 is also specifically used to:
  • the image characteristics of the sub-image and the position of the sub-image in the first partial diagram are determined.
  • the feature recognition processing unit 303 is also specifically used to:
  • when the shape of the sub-image is the same as the preset shape, the sub-image is determined to be the target sub-image; when the shape of the sub-image is different from the preset shape, the shape of the sub-image is adjusted to obtain a target sub-image that conforms to the preset shape;
  • the image characteristics of the target sub-image are determined.
  • the feature recognition processing unit 303 is also specifically used to:
  • the mask map of the target sub-image determine the image characteristics of the target sub-image
  • the image characteristics of the sub-image are determined.
  • the feature recognition processing unit 303 is also specifically used to:
  • the sub-image includes a first sub-image and a second sub-image, and the first sub-image and the second sub-image include an overlapping area that overlaps each other, wherein the image feature of the first sub-image in the overlapping area is the first image feature, The image features of the second sub-image in the overlapping area are the second image features;
  • the method of performing feature recognition processing on the first partial image and obtaining the image features of the first partial image also includes:
  • the first image feature or the second image feature is selected as the image feature of the first local map.
  • the first determining unit 304 is also specifically used to:
  • the position of the target feature element in the first local map is determined.
  • correction unit 306 is also specifically used to:
  • the next frame of panoramic image is obtained, and distortion correction processing is performed on the next frame of panoramic image to obtain the corrected next frame of panoramic image.
  • each of the above units can be implemented as an independent entity, or can be combined in any way to be implemented as the same or several entities.
  • for the specific implementation of each of the above units, please refer to the previous method embodiments; details will not be described again here.
  • the image processing device of this embodiment includes an acquisition unit 301 for acquiring a panoramic image; a segmentation processing unit 302 for segmenting the panoramic image using a preset segmentation model to obtain the first partial image and the second partial image.
  • the feature recognition processing unit 303 is used to perform feature recognition processing on the first partial image to obtain the image features of the first partial image, where the image features of the first partial image are composed of multiple feature elements;
  • the first determination unit 304 is used to determine the target feature element based on the multiple feature elements, and determine the position of the target feature element in the first partial image;
  • the second determination unit 305 is used to determine, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image, and determine the position of the target feature element in the panoramic image; the correction unit 306 is used to perform distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image. Therefore, embodiments of the present application can increase the speed of processing panoramic images and alleviate the slow processing caused by the cumbersome existing recognition process.
  • An embodiment of the present application also provides an electronic device, which may be a terminal, a server, or other devices.
  • the terminal can be a mobile phone, a tablet, a smart Bluetooth device, a laptop, a personal computer, etc.
  • the server can be a single server or a server cluster composed of multiple servers, etc.
  • the image processing device can also be integrated in multiple electronic devices.
  • the image processing device can be integrated in multiple servers, and the image processing method of the present application is implemented by multiple servers.
  • the image processing device may include components such as a processor 401 of one or more processing cores, a memory 402 of one or more computer-readable storage media, a power supply 403, an input module 404, and a communication module 405.
  • The structure shown in FIG. 8 does not constitute a limitation on the image processing device; the device may include more or fewer components than shown, some components may be combined, or a different component layout may be used, wherein:
  • the processor 401 is the control center of the image processing device; it uses various interfaces and lines to connect the various parts of the entire image processing device, and performs the various functions of the device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402.
  • the processor 401 may include one or more processing cores;
  • the processor 401 can integrate an application processor and a modem processor, where the application processor mainly processes the operating system, user interface, application programs, etc., and the modem processor mainly processes wireless communications. It can be understood that the above modem processor may not be integrated into the processor 401.
  • the memory 402 can be used to store software programs and modules.
  • the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402 .
  • the memory 402 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like; the storage data area may store data created according to the use of the image processing device, and the like.
  • memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402 .
  • the image processing device also includes a power supply 403 that supplies power to various components.
  • the power supply 403 can be logically connected to the processor 401 through a power management system, thereby realizing functions such as managing charging, discharging, and power consumption management through the power management system.
  • the power supply 403 may also include one or more DC or AC power supplies, recharging systems, power failure detection circuits, power converters or inverters, power status indicators, and other arbitrary components.
  • the image processing device may further include an input module 404 that may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • the image processing device may also include a communication module 405.
  • the communication module 405 may include a wireless module.
  • the image processing device may perform short-distance wireless transmission through the wireless module of the communication module 405, thereby providing users with wireless broadband Internet access.
  • the communication module 405 can be used to help users send and receive emails, browse web pages, access streaming media, etc.
  • the image processing device may also include a display unit and the like, which will not be described again here.
  • the processor 401 in the image processing device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402 to implement various functions.
  • a computer program product including a computer program or instructions that implement the steps in any of the above image processing methods when executed by a processor.
  • embodiments of the present application provide a computer-readable storage medium in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any image processing method provided by the embodiments of the present application.
  • the storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.
  • a computer program product or computer program includes computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the various optional implementations of image processing provided in the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose an image processing method and apparatus, an electronic device, and a storage medium. According to the embodiments of the present application, the method comprises: obtaining a panoramic image; segmenting the panoramic image by using a preset segmentation model so as to obtain a first local image; performing feature recognition on the first local image to obtain image features of the first local image; determining a target feature element according to a plurality of feature elements, and determining the position of the target feature element in the first local image; according to the preset segmentation model, determining a position relationship between the first local image and the panoramic image, and the position of the target feature element in the first local image, and determining the position of the target feature element in the panoramic image; and performing distortion correction on the panoramic image on the basis of the position of the target feature element in the panoramic image so as to obtain a corrected panoramic image. Therefore, a photographed target in the panoramic image can be quickly recognized, thereby alleviating the problem of slow processing speeds caused by existing recognition processing processes that are relatively complicated.

Description

An image processing method, device, electronic equipment and storage medium

Technical Field

The present application relates to the field of image processing, and specifically relates to an image processing method, device, electronic equipment and storage medium.

Background Art

A panoramic image is a wide-angle image that shows as much of the surrounding environment as possible through wide-angle expression in photos, videos and other forms. A panoramic image is an image that can cover 0 to 360° in the horizontal direction and 0 to 180° in the vertical direction. Currently, when shooting a panoramic image, the panoramic image needs to be recognized and processed to determine the shooting target in the panoramic image, so as to obtain an image with the shooting target as the focus.
Summary of the Invention

Embodiments of the present application provide an image processing method, device, electronic device and storage medium, which can quickly identify the shooting target in a panoramic image and alleviate the slow processing caused by the cumbersome existing recognition process.

In one aspect, embodiments of the present application provide an image processing method, including:

acquiring a panoramic image;

segmenting the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image;

performing feature recognition processing on the first partial image to obtain image features of the first partial image, where the image features of the first partial image are composed of a plurality of feature elements;

determining a target feature element based on a plurality of the feature elements, and determining the position of the target feature element in the first partial image;

according to the preset segmentation model, determining the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image, and determining the position of the target feature element in the panoramic image;

performing distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
In another aspect, embodiments of the present application also provide an image processing device, including:

an acquisition unit, used to acquire a panoramic image;

a segmentation processing unit, configured to segment the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image;

a feature recognition processing unit, configured to perform feature recognition processing on the first partial image to obtain image features of the first partial image, where the image features of the first partial image are composed of a plurality of feature elements;

a first determination unit, configured to determine a target feature element based on a plurality of the feature elements, and determine the position of the target feature element in the first partial image;

a second determination unit, configured to determine, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image, and determine the position of the target feature element in the panoramic image;

a correction unit, configured to perform distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
In another aspect, an embodiment of this application further provides an electronic device, including a processor and a memory, where the memory stores a plurality of instructions and the processor loads the instructions from the memory to perform the steps of any image processing method provided by the embodiments of this application.
In another aspect, an embodiment of this application further provides a computer-readable storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to perform the steps of any image processing method provided by the embodiments of this application.
In this application, a preset segmentation model is used to segment a panoramic image so as to obtain a first partial image with a lower degree of distortion; feature recognition is then performed on the first partial image to determine a target feature element among its image features and the position of that element in the first partial image; according to the positional relationship corresponding to the preset segmentation model, the position of the target feature element in the panoramic image is determined; and distortion correction is performed on the panoramic image based on that position to obtain a corrected panoramic image. The panoramic image can thus be processed quickly by means of the preset segmentation model, alleviating the problem that important scenery or people in parts of the image are heavily distorted.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of this application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a scenario of the image processing method provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of the image processing method provided by an embodiment of this application;
FIG. 2a is a schematic diagram of a panoramic image segmented according to the preset segmentation model provided by an embodiment of this application;
FIG. 2b is a schematic diagram of sub-images extracted from the first partial image provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a method for obtaining image features of the first partial image provided by an embodiment of this application;
FIG. 4 is a schematic flowchart of a method for determining a target feature element and its position in the first partial image provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of a method for obtaining a corrected next frame of the panoramic image provided by an embodiment of this application;
FIG. 6 is a schematic diagram of the image processing method provided by an embodiment of this application applied in a server scenario;
FIG. 7 is a first schematic structural diagram of the image processing apparatus provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of the image processing apparatus provided by an embodiment of this application.
Detailed Description

The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those skilled in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
Embodiments of this application provide an image processing method and apparatus, an electronic device, and a storage medium.
The image processing apparatus may be integrated in an electronic device, and the electronic device may be a terminal, a server, or other equipment. The terminal may be a mobile phone, a tablet computer, a smart Bluetooth device, a laptop, or a personal computer (PC); the server may be a single server or a server cluster composed of multiple servers.
In some embodiments, the image processing apparatus may also be integrated in multiple electronic devices; for example, it may be integrated in multiple servers, and the image processing method of this application is implemented by those servers.
In some embodiments, the server may also be implemented in the form of a terminal.
For example, referring to FIG. 1, the electronic device may be a server in which the image processing apparatus is integrated. The server in this embodiment is configured to: obtain a panoramic image; segment the panoramic image with a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image; perform feature recognition on the first partial image to obtain its image features, the image features being composed of a plurality of feature elements; extract the feature elements of those image features, determine the target feature element among them, and determine the position of the target feature element in the first partial image; determine the positional relationship corresponding to the preset segmentation model; determine the position of the target feature element in the panoramic image from its position in the first partial image and that positional relationship; and perform distortion correction on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
Each of these is described in detail below. Note that the numbering of the following embodiments does not limit their preferred order.
In this embodiment, an image processing method is provided. As shown in FIG. 2, the method may proceed as follows.
S110. Obtain a panoramic image.
A panoramic image is a wide-angle image that presents as much of the surrounding environment as possible, as a photograph or as video, and can cover 0 to 360° horizontally and 0 to 180° vertically. A panoramic image may be captured with an ordinary camera and then stitched, or captured directly with a panoramic camera.
The obtained panoramic image may be the current frame extracted from a panoramic video, or a panoramic image captured in advance.
In some embodiments, the obtained panoramic image may be a rectangular image with an aspect ratio of 2:1.
S120. Segment the panoramic image with a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image.
The preset segmentation model divides the panoramic image, for example by machine learning, into image regions with a high degree of distortion and image regions with a low degree of distortion, so that the low-distortion regions can be extracted. Image distortion refers to blurring, stretching, deformation, and similar effects that prevent people or scenery in the image from being rendered as they really are. The degree of distortion can be judged from experience, or confirmed by comparing the coordinates of distorted positions with their undistorted coordinates. Region division means dividing the panoramic image into several parts according to the degree of distortion. In some embodiments the panoramic image is divided into two parts: the part whose distortion is below a preset level is the low-distortion region, where the first partial image is located, and the part whose distortion is above the preset level is the high-distortion region, where the second partial image is located.
In this embodiment, the preset segmentation model may be set empirically. For example, for the panoramic image shown in FIG. 2a, the low-distortion region may be the middle 80% of the image along the width direction, i.e., the middle rectangular box in FIG. 2a; this region lies in the middle of the panorama and images well. The high-distortion region may be the top 10% and bottom 10% of the image, i.e., the upper and lower rectangular boxes in FIG. 2a; these regions lie at the upper and lower edges of the panorama and image poorly. The preset segmentation model allows the panoramic image to be processed quickly.
Segmentation here means dividing the panoramic image into several images with different degrees of distortion according to the division defined by the preset segmentation model. In this embodiment, the preset segmentation model divides the panorama into a low-distortion region and a high-distortion region, and segmentation splits the panorama into the first partial image in the low-distortion region and the second partial image in the high-distortion region.
The first and second partial images may be obtained by sliding-window sampling or by cutting the panoramic image. The number of first partial images and of second partial images may be arbitrary; the first and second partial images need not overlap; and together they can be stitched back into the panoramic image.
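For illustration, the fixed 80%/10%/10% split of this example can be written as plain array slicing. The following is a minimal sketch, not the preset segmentation model itself: it assumes the panorama is already loaded as a NumPy array of shape (height, width, channels), and the 10% edge ratio is simply the example value above.

```python
import numpy as np

def split_by_distortion(panorama: np.ndarray, edge_ratio: float = 0.1):
    """Split an equirectangular panorama into a low-distortion middle strip
    (first partial image) and two high-distortion edge strips (second partial
    image), measured along the width (vertical) direction of the frame."""
    h = panorama.shape[0]
    edge = int(h * edge_ratio)                        # rows removed at top and bottom
    first_partial = panorama[edge:h - edge]           # middle ~80% of the rows
    second_partial = (panorama[:edge], panorama[h - edge:])  # top and bottom ~10% each
    # 'edge' is also the row offset of the first partial image inside the panorama;
    # it is needed later to map positions back into panorama coordinates.
    return first_partial, second_partial, edge
```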
S130. Perform feature recognition on the first partial image to obtain image features of the first partial image, where the image features are composed of a plurality of feature elements.
Feature recognition means classifying the first partial image at pixel level and labelling the object category to which each pixel belongs, yielding an image carrying different object-category labels. Object categories may include scenery, buildings, and people. After the object categories in the first partial image are labelled, the labelled image constitutes its image features. In some embodiments the image features may include colour features, texture features, shape features, and spatial-relationship features, and they may be composed of multiple feature elements, where a feature element may be a pixel. A pixel is the basic encoding of the primary colours and their grey levels. The pixel category is the pixel type: different images have different pixel types, and different pixel types require different template parameter values, for example the pixel data types CV_32U, CV_32S, CV_32F, CV_8U, and CV_8UC3.
For example, in this embodiment feature recognition identifies the scenery, buildings, and people in the first partial image, labels and segments them, and thereby determines the background and foreground of the first partial image. The result can be expressed as a mask, i.e., an image in which the background and the foreground (which may include buildings and people) of the first partial image are rendered with different grey levels.
As shown in FIG. 3, in some embodiments the method for obtaining the image features of the first partial image includes the following steps.
S131. Determine size information of the first partial image.
The size information may include the length, width, and angular field of view of the first partial image. The field of view (FOV), also called the visual field in optical engineering, determines the viewing range of an optical instrument. In this embodiment, when the captured image is a panoramic image, its field of view is 0 to 360° horizontally and 0 to 180° vertically.
The size information of the first partial image may be obtained by measuring the first partial image, or derived from the division ratio of the preset segmentation model.
S132. According to the size information of the first partial image, determine the size, sliding step, and sliding direction of the sliding window.
A sliding window improves data reliability by expanding the value of a point to the region containing it and making judgements over that region; the region is the window. The sliding window frames the data in units of a specified length so that statistics can be computed within the frame, like a slider of fixed length moving along a ruler and reporting the image data inside it after every unit of movement.
The size of the sliding window is the extent over which it segments the first partial image, and the window may have any shape. In some embodiments the window is a rectangular box whose size includes the length and width of that box; in other embodiments the window is a square box whose size includes the side length of that box.
The sliding direction of the window is the direction in which it moves over the first partial image and may be arbitrary. In some embodiments the window slides along the length direction of the first partial image.
The sliding step is the distance the window moves at each step over the first partial image and may be set according to the size information of the first partial image. In some embodiments, when the window slides along the length direction of the first partial image, the sliding step equals the length of the first partial image divided by the number of sliding steps.
S133. According to the size, sliding step, and sliding direction of the sliding window, segment the first partial image to obtain multiple sub-images.
Here, segmentation means cropping the image inside the window's current position out of the first partial image.
Segmenting the first partial image may proceed as the window moves along the preset direction: each time the window advances by one step, the image inside it is cropped out as a separate image, called a sub-image. When the window has advanced n steps, n sub-images have been obtained. The sub-images may be adjacent, spaced apart, or partially overlapping.
For example, in this embodiment the preset direction is along the length of the first partial image, the window advances two steps, and two sub-images are obtained. The sum of the lengths of the two sub-images is greater than the length of the first partial image; that is, as shown in FIG. 2b, the first partial image in the middle of the panorama is divided into a left sub-image and a right sub-image.
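The sketch below illustrates this sliding-window sampling. The window width and step are free parameters here; with the 1000x400 first partial image used in the worked example later in this description, win_w=600 and step=400 reproduce the two overlapping left/right sub-images (x offsets 0 and 400). Those numbers come from that example, not from the method itself.

```python
import numpy as np

def sliding_window_crops(partial: np.ndarray, win_w: int, step: int):
    """Slide a window of width win_w along the length (horizontal) direction of
    the first partial image and return the sub-images with their x offsets."""
    h, w = partial.shape[:2]
    crops = []
    x = 0
    while x + win_w <= w:
        crops.append((partial[:, x:x + win_w], x))   # (sub-image, x offset)
        if x + win_w == w:
            break
        x = min(x + step, w - win_w)                 # clamp the last window to the edge
    return crops
```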
S134. Perform feature recognition on the sub-images to obtain image features of the sub-images.
In this embodiment, feature recognition identifies the scenery, buildings, and people in a sub-image, labels and segments them, and thereby determines the background and foreground of the sub-image. The result can be expressed as a mask, i.e., an image in which the background and the foreground (which may include buildings and people) of the sub-image are rendered with different grey levels.
In some embodiments, performing feature recognition on a sub-image to determine its image features includes:
determining the sub-image and the shape of the sub-image;
comparing the shape of the sub-image with a preset shape:
when the shape of the sub-image is the same as the preset shape, taking the sub-image as the target sub-image;
when the shape of the sub-image differs from the preset shape, adjusting the shape of the sub-image to obtain a target sub-image that conforms to the preset shape;
performing feature recognition on the target sub-image to obtain image features of the target sub-image; and
determining the image features of the sub-image from the image features of the target sub-image.
In some embodiments, determining the image features of the sub-image from the target sub-image includes:
encoding and decoding the target sub-image to obtain a mask of the target sub-image;
determining the image features of the target sub-image from its mask;
determining the scale relationship between the sub-image and the target sub-image; and
determining the image features of the sub-image from that scale relationship.
The preset shape is the shape required of the input image by the encoding-decoding process; in this embodiment, for example, the preset shape is a square. Comparing the shape of the sub-image with the preset shape means checking whether the sub-image is square: if it is, the sub-image is taken as the target sub-image; if it is not, its shape is adjusted so that it becomes a target sub-image conforming to the preset shape.
In some embodiments, the shape adjustment may include the following steps.
Obtain size information of the sub-image. A sub-image is generally rectangular, and its size information includes its length and width, which can be obtained by measuring the sub-image.
Resize the sub-image according to its size information. Resizing means shortening the rectangular image along its length or stretching it along its width so that its length and width become equal, yielding a square target sub-image. With equal length and width, convolution and pooling during feature recognition are more convenient, so features in the target sub-image are recognized better.
The encoding-decoding process may consist of feature extraction and upsampling of the target sub-image to obtain its mask. Feature extraction may be convolution and pooling of the square target sub-image; upsampling may be deconvolution of the extracted features, yielding the mask of the target sub-image. The mask contains a foreground part and a background part, the grey values of the foreground pixels being larger than those of the background pixels, and the foreground part constitutes the image features of the target sub-image.
The scale relationship between the sub-image and the target sub-image is the transformation between the target sub-image after shape adjustment and the sub-image before it. That transformation gives the scale relationship directly, so the image features of the sub-image can be recovered quickly from those of the target sub-image.
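A minimal sketch of the shape adjustment and scale-back steps described above. Here `segment_fn` is a placeholder standing in for the encoder-decoder network (it is not an API of any particular library), and the square side of 320 is an arbitrary assumption used only to make the example concrete.

```python
import cv2
import numpy as np

def mask_for_subimage(sub: np.ndarray, segment_fn, side: int = 320) -> np.ndarray:
    """Resize a rectangular sub-image to the square shape expected by the
    encoder-decoder, run segmentation on it, and map the resulting mask back to
    the sub-image's original size using the inverse scale relationship."""
    h, w = sub.shape[:2]
    target = sub if (h, w) == (side, side) else cv2.resize(
        sub, (side, side), interpolation=cv2.INTER_LINEAR)
    mask = segment_fn(target)                  # assumed to return a (side, side) mask
    # Scale the mask back so its foreground lines up with the original sub-image.
    return cv2.resize(mask, (w, h), interpolation=cv2.INTER_LINEAR)
```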
S135. Determine the image features of the first partial image from the image features of the sub-images.
The method for determining the image features of the first partial image includes:
determining the image features of each sub-image and the position of each sub-image in the first partial image; and
determining the image features of the first partial image from the image features of the sub-images and their positions in the first partial image.
Determining the position of a sub-image in the first partial image may mean determining its coordinate position on the first partial image. For example, in some embodiments a coordinate system is established on the first partial image, the coordinate position of the sliding window after each step is determined, and the coordinate position of each sub-image on the first partial image is determined from those window positions.
Once the coordinate position of a sub-image on the first partial image is known, the position of the sub-image's image features within the sub-image gives their position in the first partial image, and thus the image features of the first partial image.
In some embodiments, when the overlap region of adjacent sub-images contains overlapping image features, the image feature in the overlap region can be decided by comparing the feature strengths of the overlapping features. The feature strength may be the pixel intensity, i.e., the brightness value of the pixel, which lies between 0 and 255: values near 255 are bright and values near 0 are dark.
For example, in some embodiments the sub-images include a first sub-image and a second sub-image that share an overlap region, where the image feature of the first sub-image in the overlap region is a first image feature and the image feature of the second sub-image in the overlap region is a second image feature.
In that case, performing feature recognition on the first partial image to obtain its image features further includes:
when the first image feature overlaps the second image feature, computing a first feature strength of the first image feature and a second feature strength of the second image feature;
comparing the first feature strength with the second feature strength:
when the first feature strength is greater than the second feature strength, selecting the first image feature as the image feature of the first partial image;
when the first feature strength is less than the second feature strength, selecting the second image feature as the image feature of the first partial image;
when the first feature strength equals the second feature strength, selecting either the first or the second image feature as the image feature of the first partial image. A per-pixel sketch of this rule follows this list.
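Per element, this rule keeps whichever response is stronger, and equal strengths trivially coincide. The following sketch assumes the two masks have already been aligned on the same overlap region.

```python
import numpy as np

def merge_overlap(first_feature: np.ndarray, second_feature: np.ndarray) -> np.ndarray:
    """Element-wise merge of two overlapping mask fragments: the larger feature
    strength wins, and equal strengths keep the shared value."""
    return np.maximum(first_feature, second_feature)
```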
S140. Determine a target feature element from the plurality of feature elements, and determine the position of the target feature element in the first partial image.
The position of the target feature element in the first partial image can be determined by extracting the feature elements: based on the pixel values of the feature elements, the more salient ones are identified, namely the target feature elements whose pixel value exceeds a preset pixel threshold. The pixel value may be the pixel intensity, which lies between 0 and 255, and the threshold can be set manually, for example to 150 or 200 as required; a target feature element is then a feature element whose pixel value exceeds 150 or 200.
As shown in FIG. 4, in some embodiments the method for determining a target feature element from the plurality of feature elements and determining its position in the first partial image includes the following steps.
S141. Determine the pixel values of the plurality of feature elements in the first partial image.
The feature elements are the pixels that make up the image features, and the pixel value of each pixel, for example its grey value, can be determined. One way of determining the pixel values of the feature elements of the image features in the first partial image is to obtain the grey value of each pixel with the impixel function.
S142. Compare the pixel values of the feature elements in the first partial image with a preset pixel threshold, and determine first feature elements whose pixel value is greater than the threshold and second feature elements whose pixel value is less than the threshold, the first feature elements being the target feature elements.
S143. Binarize the pixel values of the first feature elements and the second feature elements to obtain a binarized first partial image.
Binarization means increasing the pixel values of the first feature elements and decreasing those of the second feature elements according to the comparison, so that the difference between them becomes more pronounced. For example, in some embodiments where grey values range from 0 to 255, increasing the pixel value of a first feature element may mean setting it to 255, and decreasing the pixel value of a second feature element may mean setting it to 0.
S144. Determine the position of the target feature element in the first partial image from the binarized first partial image.
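A sketch of the binarization in S143 and the position lookup in S144; the threshold of 200 is one of the example values mentioned above rather than a fixed parameter of the method.

```python
import numpy as np

def binarize(mask: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Set feature elements above the threshold (target feature elements) to 255
    and all other elements to 0, giving the binarized first partial image."""
    return np.where(mask > threshold, 255, 0).astype(np.uint8)

def target_positions(binary: np.ndarray) -> np.ndarray:
    """Return the (row, col) coordinates of the target feature elements."""
    return np.argwhere(binary == 255)
```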
The position of the target feature element in the first partial image can thus be determined from the positional relationship between the target feature element and the first partial image.
S150. According to the preset segmentation model, determine the positional relationship between the first partial image and the panoramic image and, from that relationship and the position of the target feature element in the first partial image, determine the position of the target feature element in the panoramic image.
Determining the positional relationship between the first partial image and the panoramic image means determining, from the size and position used when the preset segmentation model segmented the panorama, where the first partial image lies within the panoramic image. Because the preset segmentation model is set in advance, this relationship can be obtained directly and quickly from the corresponding positional transformation.
Determining the position of the target feature element in the panoramic image follows from the fact that both the positional relationship between the first partial image and the panorama and the position of the target feature element in the first partial image are known; the position of the target feature element in the panorama can therefore be determined quickly and accurately by coordinate conversion.
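Because the first partial image is a fixed crop of the panorama, the coordinate conversion reduces to adding the crop offsets. The sketch below uses the row offset produced when the panorama was split; the column offset is zero for a full-width strip.

```python
def to_panorama_coords(row: int, col: int, row_offset: int, col_offset: int = 0):
    """Convert a (row, col) position in the first partial image into panorama
    coordinates by undoing the crop defined by the preset segmentation model."""
    return row + row_offset, col + col_offset
```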
S160. Based on the position of the target feature element in the panoramic image, perform distortion correction on the panoramic image to obtain a corrected panoramic image.
Distortion correction here means making the subject image clearly, for example by changing the distance between the lens and the imaging plane when the image is acquired. The position of the target feature element in the panoramic image is taken as the position around which the correction is applied, so that in the corrected panoramic image the location of the target feature element is imaged most clearly and with the least distortion. The correction may include changing the focus point so that it is aligned with the location of the target feature element. Focusing means changing the distance between the lens and the imaging plane while acquiring an image so that the subject is imaged clearly; the focus point is the subject represented by the desired target feature element.
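As one illustration of how such a correction can be driven (an assumption made for this sketch, not necessarily how the correction is implemented here), the target's pixel position in an equirectangular panorama can be converted into a viewing direction, which can then serve as the centre or focus of the corrected, low-distortion view.

```python
def pixel_to_direction(row: int, col: int, width: int, height: int):
    """Map an equirectangular pixel to (yaw, pitch) in degrees, with yaw in
    [-180, 180) and pitch in [-90, 90]; the resulting direction can be used as
    the focus/centre when rendering the corrected view."""
    yaw = (col / width) * 360.0 - 180.0
    pitch = 90.0 - (row / height) * 180.0
    return yaw, pitch
```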
As shown in FIG. 5, after the corrected panoramic image is obtained, the image processing method of this application further includes the following steps.
S170. Obtain the position of the target feature element in the corrected panoramic image.
S180. Based on the position of the target feature element in the corrected panoramic image, obtain the next frame of the panoramic image and perform distortion correction on it to obtain a corrected next frame.
The next frame may be the next frame of a panoramic video or the next panoramic image in a continuous shooting sequence, and it may be obtained by shooting or by retrieving a pre-stored panoramic image. Performing distortion correction on the next frame means shooting the next frame according to the position of the target feature element in the corrected current frame, and correcting the next frame so that the location corresponding to that target feature element is the clearest, least distorted part of the corrected next frame.
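A sketch of how the corrected position might be carried from frame to frame in a panoramic video; the callables are placeholders for the steps described above, not functions of any existing library.

```python
def process_video(frames, detect_target, correct_distortion):
    """Correct each frame around the target, reusing the previous frame's
    target position as the starting point for the next frame."""
    position = None
    corrected = []
    for frame in frames:
        if position is None:
            position = detect_target(frame)      # full recognition on the first frame
        fixed_frame, position = correct_distortion(frame, position)
        corrected.append(fixed_frame)
    return corrected
```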
The image processing method of this embodiment is described below with reference to a specific application scenario.
Referring to FIG. 6, which is a schematic flowchart of the image processing method applied in an experimental scenario, the method runs on a server and includes the following steps.
S210. Obtain a panoramic image.
The horizontal field of view of the panoramic image is 0 to 360° and the vertical field of view is 0 to 180°.
S220. Segment the panoramic image to obtain a first partial image and a second partial image, the degree of distortion of the first partial image being lower than that of the second partial image.
The top 10% and bottom 10% of the panoramic image are cut off; the removed regions form the second partial image and the remaining 80% forms the first partial image. For example, if the panoramic image is 1000 long and 500 wide, the first partial image after cutting is 1000 long and 400 wide.
S230. Extract sub-images from the first partial image.
The first partial image is sampled with a sliding window to obtain a left sub-image and a right sub-image that share an overlap region. For example, each sub-image is 600 long and 400 wide, and the overlap between them is 200 long and 400 wide.
S240. Perform feature recognition on the sub-images to obtain their image features.
U2net is used for feature recognition: each sub-image is fed into the encoder-decoder network, which yields six mask maps of the same size as the input sub-image, and the intensities of the six masks are averaged to output the mask of the sub-image, as sketched below.
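A sketch of this averaging step, assuming `model(sub_image)` returns the six equal-sized mask maps described above; the network itself is not reproduced here.

```python
import numpy as np

def fused_mask(sub_image: np.ndarray, model) -> np.ndarray:
    """Run the encoder-decoder on a sub-image and average its six output mask
    maps into a single mask the same size as the input."""
    masks = model(sub_image)                   # assumed: sequence of six H x W maps
    return np.mean(np.stack(list(masks), axis=0), axis=0)
```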
S250. Determine the position of each sub-image in the first partial image, and determine the image features of the first partial image from those positions; the image features of the first partial image are composed of multiple feature elements.
A coordinate system is established on the first partial image, whose four corners are at (0, 0), (1000, 0), (1000, 400), and (0, 400). The four corners of the first sub-image are then (0, 0), (600, 0), (600, 400), and (0, 400), and the four corners of the second sub-image are (400, 0), (1000, 0), (1000, 400), and (400, 400). From the coordinate relationship between the sub-images and the first partial image, the positional relationship between them is known, and from the positions of the sub-image masks within the first partial image, the mask representing the image features of the first partial image is determined, for example as in the sketch below.
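Using the corner coordinates above, the two sub-image masks can be pasted onto a canvas the size of the first partial image at their known x offsets, with the overlap merged by the stronger-response rule described earlier. The 1000-wide canvas and 400-pixel offset are the example values of this scenario, not general parameters.

```python
import numpy as np

def compose_partial_mask(mask_left: np.ndarray, mask_right: np.ndarray,
                         full_w: int = 1000, right_offset: int = 400) -> np.ndarray:
    """Place the left sub-image mask at x=0 and the right one at x=right_offset
    on a first-partial-image-sized canvas, keeping the larger value in the overlap."""
    h, w_left = mask_left.shape[:2]
    canvas = np.zeros((h, full_w), dtype=mask_left.dtype)
    canvas[:, :w_left] = mask_left
    end = right_offset + mask_right.shape[1]
    canvas[:, right_offset:end] = np.maximum(canvas[:, right_offset:end], mask_right)
    return canvas
```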
S260. Extract the feature elements of the image features of the first partial image, determine the target feature element among them, and determine its position in the first partial image.
The pixel value of every pixel in the mask representing the image features of the first partial image is determined, and the mask is binarized: pixels whose intensity exceeds the preset threshold are set to 255 and pixels whose intensity is below the threshold are set to 0. This produces a binary map of the first partial image, which makes it easy to identify and select the target feature element and to obtain its position in the first partial image.
S270. Determine the position of the target feature element in the panoramic image from the positional relationship between the first partial image and the panoramic image.
The positional relationship between the first partial image and the panoramic image is given by the segmentation ratio. Because that ratio is fixed when the panorama is segmented, the position of the target feature element in the panoramic image can be determined quickly.
S280. Based on the position of the target feature element in the panoramic image, perform distortion correction on the panoramic image to obtain a corrected panoramic image.
In this embodiment, the panoramic image is segmented so that the first partial image, whose distortion is lower, is determined quickly; feature recognition and extraction on the first partial image determine its target feature element; the segmentation ratio then gives the position of the target feature element in the panoramic image quickly; and distortion correction is performed on the panorama according to that position to obtain the corrected panoramic image. This increases the speed at which panoramic images are processed and alleviates the slow processing caused by the relatively cumbersome existing recognition procedure.
To better implement the above method, an embodiment of this application further provides an image processing apparatus. The apparatus may be integrated in an electronic device, which may be a terminal, a server, or other equipment; the terminal may be a mobile phone, tablet computer, smart Bluetooth device, laptop, personal computer, or similar device, and the server may be a single server or a server cluster composed of multiple servers.
In this embodiment, the method of the embodiments of this application is described in detail by taking an image processing apparatus integrated in a server as an example.
For example, as shown in FIG. 7, the image processing apparatus may include:
an acquisition unit 301, configured to acquire a panoramic image;
a segmentation processing unit 302, configured to segment the panoramic image with a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image;
a feature recognition processing unit 303, configured to perform feature recognition on the first partial image to obtain image features of the first partial image, where the image features of the first partial image are composed of a plurality of feature elements;
a first determination unit 304, configured to determine a target feature element from the plurality of feature elements and to determine the position of the target feature element in the first partial image;
a second determination unit 305, configured to determine, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and, from that relationship and the position of the target feature element in the first partial image, the position of the target feature element in the panoramic image;
a correction unit 306, configured to perform distortion correction on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
In some embodiments of this application, the feature recognition processing unit 303 is further configured to:
determine size information of the first partial image;
determine the size, sliding step, and sliding direction of the sliding window according to the size information of the first partial image;
segment the first partial image according to the size, sliding step, and sliding direction of the sliding window to obtain multiple sub-images;
perform feature recognition on the sub-images to obtain image features of the sub-images; and
determine the image features of the first partial image from the image features of the sub-images.
In some embodiments of this application, the feature recognition processing unit 303 is further configured to:
determine the image features of the sub-images and the positions of the sub-images in the first partial image; and
determine the image features of the first partial image from the image features of the sub-images and their positions in the first partial image.
In some embodiments of this application, the feature recognition processing unit 303 is further configured to:
determine a sub-image and the shape of the sub-image;
compare the shape of the sub-image with a preset shape:
when the shape of the sub-image is the same as the preset shape, take the sub-image as the target sub-image; when the shape of the sub-image differs from the preset shape, adjust the shape of the sub-image to obtain a target sub-image that conforms to the preset shape;
perform feature recognition on the target sub-image to obtain image features of the target sub-image; and
determine the image features of the sub-image from the image features of the target sub-image.
In some embodiments of this application, the feature recognition processing unit 303 is further configured to:
encode and decode the target sub-image to obtain a mask of the target sub-image;
determine the image features of the target sub-image from its mask;
determine the scale relationship between the sub-image and the target sub-image; and
determine the image features of the sub-image from that scale relationship.
In some embodiments of this application, the sub-images include a first sub-image and a second sub-image that share an overlap region, where the image feature of the first sub-image in the overlap region is a first image feature and the image feature of the second sub-image in the overlap region is a second image feature, and the feature recognition processing unit 303 is further configured to:
when the first image feature overlaps the second image feature, compute a first feature strength of the first image feature and a second feature strength of the second image feature;
compare the first feature strength with the second feature strength:
when the first feature strength is greater than the second feature strength, select the first image feature as the image feature of the first partial image;
when the first feature strength is less than the second feature strength, select the second image feature as the image feature of the first partial image;
when the first feature strength equals the second feature strength, select either the first or the second image feature as the image feature of the first partial image.
在本申请一些实施例中,第一确定单元304还具体用于:In some embodiments of this application, the first determining unit 304 is also specifically used to:
确定第一局部图中多个特征元素的像素值;Determine pixel values of multiple feature elements in the first local map;
将第一局部图中多个特征元素的像素值与预设的像素阈值进行比较,分别确定特征元素的像素值大于像素阈值的第一特征元素,以及特征元素的像素值小于像素阈值的第二特征元素,其中,第一特征元素为目标特征元素;Compare the pixel values of multiple feature elements in the first local map with a preset pixel threshold, and determine the first feature element whose pixel value is greater than the pixel threshold, and the second feature element whose pixel value is less than the pixel threshold. Feature elements, where the first feature element is the target feature element;
对第一特征元素的像素值和第二特征元素的像素值进行二值化处理,得到二值化处理后的第一局部图;Binarize the pixel value of the first feature element and the pixel value of the second feature element to obtain the first local image after binarization;
根据二值化处理后的第一局部图,确定目标特征元素在第一局部图中的位置。According to the binarized first local map, the position of the target feature element in the first local map is determined.
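A minimal sketch of this thresholding and binarization step, assuming the image features of the first partial image are held in a single-channel array and using a hypothetical pixel threshold of 127:

```python
import numpy as np

def locate_target_elements(feature_map: np.ndarray, pixel_threshold: int = 127):
    """Binarize the feature map of the first partial image and return the
    positions of the target feature elements (values above the threshold)."""
    # First feature elements (above the threshold) become 1,
    # second feature elements (below the threshold) become 0.
    binary_map = (feature_map > pixel_threshold).astype(np.uint8)
    ys, xs = np.nonzero(binary_map)  # positions of the target feature elements
    return binary_map, list(zip(xs.tolist(), ys.tolist()))
```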
在本申请一些实施例中,修正单元306还具体用于:In some embodiments of this application, the correction unit 306 is also specifically used to:
获取目标特征元素在修正后的全景图像中的位置;Obtain the position of the target feature element in the corrected panoramic image;
基于目标特征元素在修正后的全景图像中的位置,获取下一帧全景图像,并对下一帧全景图像进行畸变修正处理,得到修正后的下一帧全景图像。Based on the position of the target feature element in the corrected panoramic image, the next frame of panoramic image is obtained, and distortion correction processing is performed on the next frame of panoramic image to obtain the corrected next frame of panoramic image.
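The inter-frame reuse described above can be sketched as follows; correct_distortion and detect_target are hypothetical callables standing in for the distortion-correction and feature-recognition steps.

```python
def correct_frames(frames, correct_distortion, detect_target):
    """Correct frames one by one, reusing the target position found in each
    corrected panoramic image to drive correction of the next frame."""
    corrected_frames = []
    position = None
    for frame in frames:
        if position is None:
            # First frame: locate the target feature element from scratch.
            position = detect_target(frame)
        corrected = correct_distortion(frame, position)
        # Position of the target feature element in the corrected panorama,
        # reused when processing the next frame.
        position = detect_target(corrected)
        corrected_frames.append(corrected)
    return corrected_frames
```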
具体实施时,以上各个单元可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个单元的具体实施可参见前面的方法实施例,在此不再赘述。During specific implementation, each of the above units can be implemented as an independent entity, or can be combined in any way to be implemented as the same or several entities. For the specific implementation of each of the above units, please refer to the previous method embodiments, and will not be described again here.
由上可知，本实施例的图像处理装置由获取单元301，用于获取全景图像；分割处理单元302，用于采用预设分割模型对全景图像进行分割处理，得到第一局部图和第二局部图，其中，第一局部图的畸变程度低于第二局部图；特征识别处理单元303，用于对第一局部图进行特征识别处理，得到第一局部图的图像特征，其中，第一局部图的图像特征由多个特征元素构成；第一确定单元304，用于根据多个所述特征元素确定目标特征元素，并确定所述目标特征元素在所述第一局部图中的位置；第二确定单元305，用于根据所述预设分割模型，确定所述第一局部图和所述全景图像的位置关系，以及所述目标特征元素在所述第一局部图中的位置，并确定所述目标特征元素在所述全景图像中的位置；修正单元306，用于基于目标特征元素在全景图像中的位置，对全景图像进行畸变修正处理，得到修正后的全景图像。由此，本申请实施例可以提高对全景图像进行处理的速度，改善现有识别处理的过程较为繁琐，导致处理速度较慢的问题。It can be seen from the above that the image processing apparatus of this embodiment includes an acquisition unit 301 for acquiring a panoramic image; a segmentation processing unit 302 for segmenting the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, where the degree of distortion of the first partial image is lower than that of the second partial image; a feature recognition processing unit 303 for performing feature recognition processing on the first partial image to obtain the image features of the first partial image, where the image features of the first partial image are composed of multiple feature elements; a first determination unit 304 for determining a target feature element based on the multiple feature elements and determining the position of the target feature element in the first partial image; a second determination unit 305 for determining, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image, and determining the position of the target feature element in the panoramic image; and a correction unit 306 for performing distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image. Therefore, the embodiments of the present application can increase the speed of processing panoramic images and alleviate the problem that the existing recognition process is cumbersome and therefore slow.
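The cooperation of units 301 to 306 can be summarized by the sketch below; each stage is passed in as a callable because the concrete segmentation model, recognition network, and correction routine are not fixed by this summary.

```python
def process_panorama(panorama, segment, recognize, select_target,
                     map_to_panorama, correct):
    """End-to-end sketch of the apparatus summarized above."""
    first_partial, _second_partial = segment(panorama)    # segmentation unit 302
    features = recognize(first_partial)                   # feature recognition unit 303
    target_position = select_target(features)             # first determination unit 304
    panorama_position = map_to_panorama(target_position)  # second determination unit 305
    return correct(panorama, panorama_position)           # correction unit 306
```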
本申请实施例还提供一种电子设备,该电子设备可以为终端、服务器等设备。其中,终端可以为手机、平板电脑、智能蓝牙设备、笔记本电脑、个人电脑,等等;服务器可以是单一服务器,也可以是由多个服务器组成的服务器集群,等等。An embodiment of the present application also provides an electronic device, which may be a terminal, a server, or other devices. Among them, the terminal can be a mobile phone, a tablet, a smart Bluetooth device, a laptop, a personal computer, etc.; the server can be a single server or a server cluster composed of multiple servers, etc.
在一些实施例中,该图像处理装置还可以集成在多个电子设备中,比如,图像处理装置可以集成在多个服务器中,由多个服务器来实现本申请的图像处理方法。In some embodiments, the image processing device can also be integrated in multiple electronic devices. For example, the image processing device can be integrated in multiple servers, and the image processing method of the present application is implemented by multiple servers.
在本实施例中，将以本实施例的电子设备是图像处理装置为例进行详细描述，比如，如图8所示，其示出了本申请实施例所涉及的图像处理装置的结构示意图，具体来讲：In this embodiment, a detailed description is given by taking the electronic device being an image processing device as an example. For example, FIG. 8 shows a schematic structural diagram of the image processing device involved in the embodiments of the present application. Specifically:
该图像处理装置可以包括一个或者一个以上处理核心的处理器401、一个或一个以上计算机可读存储介质的存储器402、电源403、输入模块404以及通信模块405等部件。本领域技术人员可以理解，图8中示出的图像处理装置结构并不构成对图像处理装置的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。其中：The image processing device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, an input module 404, and a communication module 405. Those skilled in the art can understand that the structure of the image processing device shown in FIG. 8 does not constitute a limitation on the image processing device, which may include more or fewer components than shown, combine certain components, or use a different component arrangement. Among them:
处理器401是该图像处理装置的控制中心，利用各种接口和线路连接整个图像处理装置的各个部分，通过运行或执行存储在存储器402内的软件程序和/或模块，以及调用存储在存储器402内的数据，执行图像处理装置的各种功能和处理数据。在一些实施例中，处理器401可包括一个或多个处理核心；在一些实施例中，处理器401可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器401中。The processor 401 is the control center of the image processing device; it uses various interfaces and lines to connect the various parts of the entire image processing device, and performs the various functions of the image processing device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402. In some embodiments, the processor 401 may include one or more processing cores; in some embodiments, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, while the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 401.
存储器402可用于存储软件程序以及模块，处理器401通过运行存储在存储器402的软件程序以及模块，从而执行各种功能应用以及数据处理。存储器402可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等；存储数据区可存储根据图像处理装置的使用所创建的数据等。此外，存储器402可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地，存储器402还可以包括存储器控制器，以提供处理器401对存储器402的访问。The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data created according to the use of the image processing device, and the like. In addition, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
图像处理装置还包括给各个部件供电的电源403，在一些实施例中，电源403可以通过电源管理系统与处理器401逻辑相连，从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源403还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。The image processing device also includes a power supply 403 that supplies power to the various components. In some embodiments, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 403 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
该图像处理装置还可包括输入模块404,该输入模块404可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。The image processing device may further include an input module 404 that may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
该图像处理装置还可包括通信模块405，在一些实施例中通信模块405可以包括无线模块，图像处理装置可以通过该通信模块405的无线模块进行短距离无线传输，从而为用户提供了无线的宽带互联网访问。比如，该通信模块405可以用于帮助用户收发电子邮件、浏览网页和访问流式媒体等。The image processing device may also include a communication module 405. In some embodiments, the communication module 405 may include a wireless module, and the image processing device may perform short-distance wireless transmission through the wireless module of the communication module 405, thereby providing users with wireless broadband Internet access. For example, the communication module 405 can be used to help users send and receive e-mails, browse web pages, access streaming media, and so on.
尽管未示出，图像处理装置还可以包括显示单元等，在此不再赘述。具体在本实施例中，图像处理装置中的处理器401会按照如下的指令，将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器402中，并由处理器401来运行存储在存储器402中的应用程序，从而实现各种功能。Although not shown, the image processing device may also include a display unit and the like, which will not be described again here. Specifically, in this embodiment, the processor 401 in the image processing device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions.
在一些实施例中,还提出一种计算机程序产品,包括计算机程序或指令,该计算机程序或指令被处理器执行时实现上述任一种图像处理方法中的步骤。In some embodiments, a computer program product is also proposed, including a computer program or instructions that implement the steps in any of the above image processing methods when executed by a processor.
以上各个操作的具体实施可参见前面的实施例,在此不再赘述。For the specific implementation of each of the above operations, please refer to the previous embodiments and will not be described again here.
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructions, or by controlling relevant hardware through instructions. The instructions can be stored in a computer-readable storage medium, and loaded and executed by the processor.
为此，本申请实施例提供一种计算机可读存储介质，其中存储有多条指令，该指令能够被处理器进行加载，以执行本申请实施例所提供的任一种图像处理方法中的步骤。To this end, embodiments of the present application provide a computer-readable storage medium in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any image processing method provided by the embodiments of the present application.
其中,该存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。Among them, the storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述实施例中提供的图像处理方面的各种可选实现方式中提供的方法。According to one aspect of the present application, a computer program product or computer program is provided, which computer program product or computer program includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the various optional implementations of image processing provided in the above embodiments.
由于该存储介质中所存储的指令，可以执行本申请实施例所提供的任一种图像处理方法中的步骤，因此，可以实现本申请实施例所提供的任一种图像处理方法所能实现的有益效果，详见前面的实施例，在此不再赘述。Since the instructions stored in the storage medium can execute the steps in any image processing method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any image processing method provided by the embodiments of the present application; for details, refer to the previous embodiments, which will not be described again here.
以上对本申请实施例所提供的一种图像处理方法、装置、终端、存储介质和计算机可读存储介质进行了详细介绍，本文中应用了具体个例对本申请的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本申请的方法及其核心思想；同时，对于本领域的技术人员，依据本申请的思想，在具体实施方式及应用范围上均会有改变之处，综上，本说明书内容不应理解为对本申请的限制。The image processing method, apparatus, terminal, storage medium, and computer-readable storage medium provided by the embodiments of the present application have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. At the same time, for those skilled in the art, there will be changes in the specific implementations and the application scope based on the ideas of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (11)

  1. 一种图像处理方法,其特征在于,包括:An image processing method, characterized by including:
    获取全景图像;Get a panoramic image;
    采用预设分割模型对所述全景图像进行分割处理,得到第一局部图和第二局部图,其中,所述第一局部图的畸变程度低于所述第二局部图;Segment the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, wherein the degree of distortion of the first partial image is lower than that of the second partial image;
    对所述第一局部图进行特征识别处理,得到所述第一局部图的图像特征,其中,所述第一局部图的图像特征由多个特征元素构成;Perform feature recognition processing on the first partial map to obtain image features of the first partial map, where the image features of the first partial map are composed of a plurality of feature elements;
    根据多个所述特征元素确定目标特征元素,并确定所述目标特征元素在所述第一局部图中的位置;Determine a target feature element based on a plurality of the feature elements, and determine the position of the target feature element in the first partial map;
    根据所述预设分割模型，确定所述第一局部图和所述全景图像的位置关系，以及所述目标特征元素在所述第一局部图中的位置，并确定所述目标特征元素在所述全景图像中的位置；According to the preset segmentation model, determining the positional relationship between the first partial image and the panoramic image as well as the position of the target feature element in the first partial image, and determining the position of the target feature element in the panoramic image;
    基于所述目标特征元素在所述全景图像中的位置,对所述全景图像进行畸变修正处理,得到修正后的全景图像。Based on the position of the target feature element in the panoramic image, distortion correction processing is performed on the panoramic image to obtain a corrected panoramic image.
  2. 根据权利要求1所述的图像处理方法，其特征在于，所述得到修正后的全景图像之后，所述方法还包括：The image processing method according to claim 1, characterized in that after the corrected panoramic image is obtained, the method further includes:
    获取所述目标特征元素在修正后的所述全景图像中的位置;Obtain the position of the target feature element in the corrected panoramic image;
    基于所述目标特征元素在修正后的所述全景图像中的位置，获取下一帧全景图像，并对所述下一帧全景图像进行畸变修正处理，得到修正后的所述下一帧全景图像。Based on the position of the target feature element in the corrected panoramic image, obtaining the next frame of panoramic image, and performing distortion correction processing on the next frame of panoramic image to obtain the corrected next frame of panoramic image.
  3. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第一局部图进行特征识别处理,得到所述第一局部图的图像特征,包括:The image processing method according to claim 1, characterized in that, performing feature recognition processing on the first partial image to obtain image features of the first partial image includes:
    确定所述第一局部图的尺寸信息;Determine the size information of the first partial image;
    根据所述第一局部图的尺寸信息,确定滑窗的尺寸、滑动步长以及滑动方向;Determine the size, sliding step length and sliding direction of the sliding window according to the size information of the first partial graph;
    根据所述滑窗的尺寸、滑动步长以及滑动方向,对所述第一局部图进行分割处理,得到多张子图像;According to the size, sliding step size and sliding direction of the sliding window, segment the first partial image to obtain multiple sub-images;
    对所述子图像进行特征识别处理,得到所述子图像的图像特征;Perform feature recognition processing on the sub-image to obtain image features of the sub-image;
    根据所述子图像的图像特征,确定所述第一局部图的图像特征。 According to the image characteristics of the sub-image, the image characteristics of the first partial image are determined.
  4. 根据权利要求3所述的图像处理方法,其特征在于,所述根据所述子图像的图像特征,确定所述第一局部图的图像特征,包括:The image processing method according to claim 3, wherein determining the image characteristics of the first local image according to the image characteristics of the sub-image includes:
    确定所述子图像的图像特征,以及所述子图像在所述第一局部图中的位置;Determine the image characteristics of the sub-image and the position of the sub-image in the first partial image;
    根据所述子图像的图像特征,以及所述子图像在所述第一局部图中的位置,确定所述第一局部图的图像特征。The image characteristics of the first partial map are determined according to the image characteristics of the sub-image and the position of the sub-image in the first partial map.
  5. 根据权利要求3所述的图像处理方法,其特征在于,所述对所述子图像进行特征识别处理,确定所述子图像的图像特征,包括:The image processing method according to claim 3, characterized in that, performing feature recognition processing on the sub-image and determining image features of the sub-image includes:
    确定所述子图像以及所述子图像的形状;determining the sub-image and the shape of the sub-image;
    将所述子图像的形状与预设的形状进行比对:Compare the shape of the sub-image with the preset shape:
    当所述子图像的形状与预设的形状相同时,确定所述子图像为目标子图像;When the shape of the sub-image is the same as the preset shape, determine the sub-image to be the target sub-image;
    当所述子图像的形状与预设的形状不相同时,对所述子图像进行形状调整,得到符合预设的形状的目标子图像;When the shape of the sub-image is different from the preset shape, perform shape adjustment on the sub-image to obtain a target sub-image that conforms to the preset shape;
    对所述目标子图像进行特征识别处理,得到所述目标子图像的图像特征;Perform feature recognition processing on the target sub-image to obtain image features of the target sub-image;
    根据所述目标子图像的图像特征,确定所述子图像的图像特征。According to the image characteristics of the target sub-image, the image characteristics of the sub-image are determined.
  6. 根据权利要求5所述的图像处理方法,其特征在于,所述确定所述子图像的图像特征的方法包括:The image processing method according to claim 5, characterized in that the method of determining the image characteristics of the sub-image includes:
    对所述目标子图像进行编码解码处理,得到所述目标子图像的掩码图;Perform encoding and decoding processing on the target sub-image to obtain a mask image of the target sub-image;
    根据所述目标子图像的掩码图,确定所述目标子图像的图像特征;Determine the image characteristics of the target sub-image according to the mask image of the target sub-image;
    确定所述子图像和所述目标子图像之间的比例关系;Determine a proportional relationship between the sub-image and the target sub-image;
    根据所述子图像和所述目标子图像之间的比例关系,确定所述子图像的图像特征。Image features of the sub-image are determined based on the proportional relationship between the sub-image and the target sub-image.
  7. 根据权利要求3所述的图像处理方法，其特征在于，所述子图像包括第一子图像和第二子图像，所述第一子图像和所述第二子图像之间包括相互重叠的重叠区域，其中，所述第一子图像在所述重叠区域中的图像特征为第一图像特征，所述第二子图像在所述重叠区域中的图像特征为第二图像特征；The image processing method according to claim 3, characterized in that the sub-images include a first sub-image and a second sub-image, and the first sub-image and the second sub-image include an overlapping area that overlaps each other, wherein the image feature of the first sub-image in the overlapping area is a first image feature, and the image feature of the second sub-image in the overlapping area is a second image feature;
    对所述第一局部图进行特征识别处理,得到所述第一局部图的图像特征的方法还包括:Perform feature recognition processing on the first partial map, and the method of obtaining image features of the first partial map further includes:
    当所述第一图像特征与第二图像特征重叠时,计算所述第一图像特征的第一特征强度和所述第二图像特征的第二特征强度;When the first image feature overlaps with the second image feature, calculating a first feature intensity of the first image feature and a second feature intensity of the second image feature;
    比较所述第一特征强度和所述第二特征强度:Compare the first characteristic intensity and the second characteristic intensity:
    当所述第一特征强度大于所述第二特征强度时,选择所述第一图像特征作为第一局部图的图像特征;When the first feature intensity is greater than the second feature intensity, select the first image feature as the image feature of the first local map;
    当所述第一特征强度小于所述第二特征强度时,选择所述第二图像特征作为第一局部图的图像特征;When the first feature intensity is less than the second feature intensity, select the second image feature as the image feature of the first local map;
    当所述第一特征强度等于所述第二特征强度时,选择所述第一图像特征或所述第二图像特征作为第一局部图的图像特征。When the first feature intensity is equal to the second feature intensity, the first image feature or the second image feature is selected as the image feature of the first local map.
  8. 根据权利要求1所述的图像处理方法,其特征在于,所述根据多个所述特征元素确定目标特征元素,并确定所述目标特征元素在所述第一局部图中的位置,包括:The image processing method according to claim 1, wherein determining a target feature element based on a plurality of the feature elements and determining the position of the target feature element in the first partial image includes:
    确定所述第一局部图中多个所述特征元素的像素值;Determine pixel values of a plurality of the feature elements in the first local map;
    将所述第一局部图中多个所述特征元素的像素值与预设的像素阈值进行比较，分别确定所述特征元素的像素值大于所述像素阈值的第一特征元素，以及所述特征元素的像素值小于所述像素阈值的第二特征元素，其中，所述第一特征元素为目标特征元素；Comparing the pixel values of the multiple feature elements in the first partial image with a preset pixel threshold, and respectively determining a first feature element whose pixel value is greater than the pixel threshold and a second feature element whose pixel value is less than the pixel threshold, wherein the first feature element is the target feature element;
    对所述第一特征元素的像素值和所述第二特征元素的像素值进行二值化处理,得到二值化处理后的所述第一局部图;Perform a binarization process on the pixel value of the first feature element and the pixel value of the second feature element to obtain the binarized first local map;
    根据二值化处理后的所述第一局部图,确定所述目标特征元素在所述第一局部图中的位置。According to the binarized first local map, the position of the target feature element in the first local map is determined.
  9. 一种图像处理装置,其特征在于,包括:An image processing device, characterized in that it includes:
    获取单元,用于获取全景图像;Acquisition unit, used to acquire panoramic images;
    分割处理单元，用于采用预设分割模型对所述全景图像进行分割处理，得到第一局部图和第二局部图，其中，所述第一局部图的畸变程度低于所述第二局部图；A segmentation processing unit, configured to segment the panoramic image using a preset segmentation model to obtain a first partial image and a second partial image, wherein the degree of distortion of the first partial image is lower than that of the second partial image;
    特征识别处理单元,用于对所述第一局部图进行特征识别处理,得到所述第一局部图的图像特征,其中,所述第一局部图的图像特征由多个特征元素构成;A feature recognition processing unit, configured to perform feature recognition processing on the first partial map to obtain image features of the first partial map, wherein the image features of the first partial map are composed of a plurality of feature elements;
    第一确定单元,用于根据多个所述特征元素确定目标特征元素,并确定所述目标特征元素在所述第一局部图中的位置;A first determination unit configured to determine a target feature element based on a plurality of the feature elements, and determine the position of the target feature element in the first partial map;
    第二确定单元，用于根据所述预设分割模型，确定所述第一局部图和所述全景图像的位置关系，以及所述目标特征元素在所述第一局部图中的位置，并确定所述目标特征元素在所述全景图像中的位置；A second determination unit, configured to determine, according to the preset segmentation model, the positional relationship between the first partial image and the panoramic image and the position of the target feature element in the first partial image, and to determine the position of the target feature element in the panoramic image;
    修正单元,用于基于所述目标特征元素在所述全景图像中的位置,对所述全景图像进行畸变修正处理,得到修正后的全景图像。A correction unit configured to perform distortion correction processing on the panoramic image based on the position of the target feature element in the panoramic image to obtain a corrected panoramic image.
  10. 一种电子设备，其特征在于，包括处理器和存储器，所述存储器存储有多条指令；所述处理器从所述存储器中加载指令，以执行如权利要求1~8任一项所述的图像处理方法中的步骤。An electronic device, characterized in that it includes a processor and a memory, where the memory stores a plurality of instructions; the processor loads the instructions from the memory to execute the steps in the image processing method according to any one of claims 1 to 8.
  11. 一种计算机可读存储介质，其特征在于，所述计算机可读存储介质存储有多条指令，所述指令适于处理器进行加载，以执行权利要求1~8任一项所述的图像处理方法中的步骤。A computer-readable storage medium, characterized in that the computer-readable storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the steps in the image processing method according to any one of claims 1 to 8.
PCT/CN2023/084967 2022-04-08 2023-03-30 Image processing method and apparatus, electronic device, and storage medium WO2023193648A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210369422.0 2022-04-08
CN202210369422.0A CN116958164A (en) 2022-04-08 2022-04-08 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023193648A1 true WO2023193648A1 (en) 2023-10-12

Family

ID=88244020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084967 WO2023193648A1 (en) 2022-04-08 2023-03-30 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN116958164A (en)
WO (1) WO2023193648A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170330311A1 (en) * 2014-12-04 2017-11-16 Mitsubishi Electric Corporation Image processing device and method, image capturing device, program, and record medium
US20180220156A1 (en) * 2015-07-08 2018-08-02 Kt Corporation Method and device for correcting distortion of panoramic video
CN110458753A (en) * 2019-08-12 2019-11-15 杭州环峻科技有限公司 A kind of adaptivenon-uniform sampling of overall view ring belt image and undistorted development system and method
CN111598777A (en) * 2020-05-13 2020-08-28 上海眼控科技股份有限公司 Sky cloud image processing method, computer device and readable storage medium

Also Published As

Publication number Publication date
CN116958164A (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US20210272236A1 (en) Image enhancement method and apparatus, and storage medium
EP3454250B1 (en) Facial image processing method and apparatus and storage medium
US10509954B2 (en) Method and system of image segmentation refinement for image processing
US6912313B2 (en) Image background replacement method
CN110839129A (en) Image processing method and device and mobile terminal
US10204432B2 (en) Methods and systems for color processing of digital images
US10430694B2 (en) Fast and accurate skin detection using online discriminative modeling
WO2018082185A1 (en) Image processing method and device
JP2003058894A (en) Method and device for segmenting pixeled image
CN112614060A (en) Method and device for rendering human face image hair, electronic equipment and medium
CN108665415B (en) Image quality improving method and device based on deep learning
WO2021115242A1 (en) Super-resolution image processing method and related apparatus
CN110889824A (en) Sample generation method and device, electronic equipment and computer readable storage medium
WO2023284401A1 (en) Image beautification processing method and apparatus, storage medium, and electronic device
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
CN111353965A (en) Image restoration method, device, terminal and storage medium
CN110286868A (en) Video display adjustment method and device, electronic equipment and storage medium
CN113506305A (en) Image enhancement method, semantic segmentation method and device for three-dimensional point cloud data
CN110570441B (en) Ultra-high definition low-delay video control method and system
CN112839167A (en) Image processing method, image processing device, electronic equipment and computer readable medium
WO2023193648A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN113989404B (en) Picture processing method, apparatus, device, storage medium, and program product
CN111383289A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
US20170372495A1 (en) Methods and systems for color processing of digital images
US20230325980A1 (en) Electronic device and image processing method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23784212

Country of ref document: EP

Kind code of ref document: A1