WO2023206475A1 - Image processing method and apparatus, electronic device and storage medium - Google Patents

Image processing method and apparatus, electronic device and storage medium

Info

Publication number
WO2023206475A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
target
grid
target object
Prior art date
Application number
PCT/CN2022/090570
Other languages
English (en)
Chinese (zh)
Inventor
刘阳晨旭
石啟凡
江浩
张锐
王宇
许兴涛
Original Assignee
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to PCT/CN2022/090570 priority Critical patent/WO2023206475A1/fr
Publication of WO2023206475A1 publication Critical patent/WO2023206475A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems

Definitions

  • The present disclosure relates to, but is not limited to, the field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
  • After an image acquisition component collects an image, the image needs to be processed to obtain an image that meets requirements.
  • The images obtained by different image acquisition components may differ. For example, when images of the same target object are collected in the same shooting scene by different image acquisition modules, or by the same image acquisition module with different configuration parameters, the collected images of the target object will also differ.
  • the present disclosure provides an image processing method, device, electronic equipment and storage medium.
  • a first aspect of an embodiment of the present disclosure provides an image processing method, including: acquiring a first image and a second image, wherein both the first image and the second image include the same target object; in a first direction, adjusting the position of a second pixel of the target object in the second image to be the same as the position of a first pixel of the target object in the first image, to obtain a third image, wherein the first pixel and the second pixel are pixels at the same position in the target object; determining a target grid in the third image according to a reference grid in the first image, wherein the target grid includes a target position, and the target pixel value of the pixel corresponding to the target position and the reference pixel value of the pixel corresponding to the reference position in the reference grid satisfy a preset condition; and, according to the position correspondence between the reference grid and the target grid, adjusting the position of the second pixel in the third image in a second direction to be the same as the position of the first pixel in the first image, wherein the second direction is perpendicular to the first direction.
  • determining the target grid in the third image based on the reference grid in the first image includes: creating M*N reference grids of a preset size in the first image; according to the reference pixel values of K pixels corresponding to the reference positions in the reference grids, determining, in the third image, K target pixel values that satisfy the preset condition with the K reference pixel values; according to the K target pixel values, determining K target pixels corresponding to the target pixel values in the third image; and determining the target grid according to the K target pixels; M, N and K are all positive integers.
  • the method further includes: determining a first average pixel value of the pixels in a preset area, wherein the preset area is an area centered on the pixel corresponding to the reference position, with a radius of a preset number of pixels; and determining the first average pixel value as the reference pixel value.
  • determining, in the third image, the K target pixel values that satisfy the preset condition with the K reference pixel values includes: determining candidate pixel values in the third image; determining a second average pixel value of the pixels in each candidate area, wherein the candidate area is an area centered on the pixel corresponding to the candidate pixel value, with a radius of the preset number of pixels; and determining, as the target pixel values, the candidate pixel values corresponding to the K second average pixel values that satisfy the preset condition with the K reference pixel values.
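The averaging and matching described above can be sketched in a minimal NumPy example; the square window is an assumed approximation of the "radius" area, and all names are hypothetical:

```python
import numpy as np

def window_mean(image, center, radius):
    """Average pixel value in a square window around `center` (row, col),
    clipped to the image bounds; a square stand-in for the 'radius' area."""
    r, c = center
    h, w = image.shape[:2]
    return image[max(r - radius, 0):min(r + radius + 1, h),
                 max(c - radius, 0):min(c + radius + 1, w)].mean()

def best_candidate(image, reference_value, candidates, radius):
    """Among candidate positions in the third image, pick the one whose
    window mean is closest to the reference pixel value."""
    means = np.array([window_mean(image, c, radius) for c in candidates])
    return candidates[int(np.argmin(np.abs(means - reference_value)))]
```

For a 5x5 ramp image, the candidate whose neighborhood mean is closest to the reference value is selected; ties and boundary handling would need to follow whatever the actual implementation specifies.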
  • the reference grid at least includes: a square grid or a triangular grid; the reference position at least includes: a corner point of the grid or a midpoint of an edge of the grid.
  • the preset condition at least includes: the sum of absolute values of differences between the reference pixel value and the target pixel value is less than a target threshold.
  • adjusting, in the second direction, the position of the second pixel in the third image to be the same as the position of the first pixel in the first image includes: determining the position correspondence according to the reference position and the target position, the position correspondence including a mapping transformation relationship in the second direction; and, according to the mapping transformation relationship in the second direction, adjusting the position of the second pixel located in the target grid in the third image to be the same as the position of the first pixel located in the reference grid in the first image.
  • the mapping transformation relationship in the second direction includes: a homography transformation matrix in the second direction.
  • adjusting, in the first direction, the position of the second pixel of the target object in the second image to be the same as the position of the first pixel of the target object in the first image, to obtain the third image, includes: extracting first feature points of the target object in the first image; extracting second feature points of the target object in the second image, wherein the first feature points and the second feature points are matching feature point pairs at the same positions in the target object; and determining the mapping transformation relationship in the first direction based on the first feature points and the second feature points.
  • the mapping transformation relationship in the first direction includes: a homography transformation matrix in the first direction.
  • a second aspect of an embodiment of the present disclosure provides an image processing device, including: an image acquisition module, configured to acquire a first image and a second image, wherein both the first image and the second image include the same target object; a first processing module, configured to adjust, in the first direction, the position of the second pixel of the target object in the second image to be the same as the position of the first pixel of the target object in the first image, to obtain a third image, wherein the first pixel and the second pixel are pixels at the same position in the target object; a target grid determination module, configured to determine the target grid in the third image according to the reference grid in the first image, wherein the target grid includes a target position, and the target pixel value of the pixel corresponding to the target position and the reference pixel value of the pixel corresponding to the reference position in the reference grid satisfy the preset condition; and a second processing module, configured to adjust, according to the position correspondence between the reference grid and the target grid, the position of the second pixel in the third image in the second direction to be the same as the position of the first pixel in the first image, wherein the second direction is perpendicular to the first direction.
  • a third aspect of the embodiment of the present disclosure provides an electronic device, including:
  • a fourth aspect of the embodiments of the present disclosure provides a non-transitory computer-readable storage medium.
  • Computer-executable instructions are stored in the computer-readable storage medium.
  • When the computer-executable instructions are executed by a processor, the method described in any of the above embodiments is implemented.
  • The technical solution of the embodiments of the present disclosure adjusts the position of the second pixel of the target object in the second image to be the same as the position of the first pixel in the first image in one direction. Since the first image and the second image are planar images, pixel positions can be determined in two directions; in the other direction, the adjusted second image is adjusted again according to the position relationship between the preset grid and the target grid. Because a grid covers multiple pixels, this avoids determining the position correspondence of each individual pixel in the other direction, thereby reducing the amount of calculation required to adjust the pixel positions of the target object in that direction.
  • Figure 1 is a schematic flowchart of an image processing method according to an exemplary embodiment
  • Figure 2 is a schematic diagram of a first image and a second image according to an exemplary embodiment
  • Figure 3 is a schematic diagram of adjusting the position of the second pixel in the first direction according to an exemplary embodiment
  • Part (a) of Figure 4 is a schematic diagram of a first image according to an exemplary embodiment
  • Part (b) of Figure 4 is a schematic diagram of a second image according to an exemplary embodiment
  • Figure 5 is a schematic diagram after adjustment in the first direction according to an exemplary embodiment
  • Figure 6 is a schematic diagram of determining a target grid according to an exemplary embodiment
  • Part (a) of Figure 7 is a schematic diagram of a first image including a reference grid according to an exemplary embodiment
  • Part (b) of Figure 7 is a schematic diagram of a second image including a target grid according to an exemplary embodiment
  • Figure 8 is a schematic diagram of another image processing method according to an exemplary embodiment
  • Figure 9 is a schematic diagram of determining a target pixel value according to an exemplary embodiment
  • Figure 10 is a schematic diagram of adjustment in the second direction according to an exemplary embodiment
  • Figure 11 is a schematic diagram of an image processing device according to an exemplary embodiment
  • Figure 12 is a schematic diagram of another image processing method according to an exemplary embodiment
  • Figure 13 is a block diagram of a terminal device according to an exemplary embodiment.
  • first, second, third, etc. may be used to describe various information in the embodiments of the present disclosure, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be called second information, and similarly, the second information may also be called first information.
  • the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
  • In image processing such as high dynamic range imaging (High Dynamic Range, HDR), multi-frame image super-resolution, and image fusion, it is necessary to fuse the image information of multiple images to obtain an image with improved image quality.
  • Multiple images need to be aligned first to achieve alignment between image pixels, and then subsequent processing can be carried out.
  • Alignment between image pixels can be achieved through dense alignment methods, most of which are based on dense optical flow. Such methods require determining the correspondence between pixels in two dimensions and, due to performance limitations, can mostly only handle images with smaller resolutions. They also require the initial misalignment distance between pixels to be within a certain range, so when the pixel differences between the images are large, for example when the field of view changes by a large amplitude, the alignment effect is poor.
  • Alignment can also be achieved with the grid alignment method, which is mostly used in the field of image stitching.
  • In this method, the grid is determined based on extracted feature points, and the movement of the grid points is constrained to align the images. It requires feature points to be distributed across the entire image, so when aligning images with fewer feature points the alignment effect is poor. For example, in weakly textured images where it is difficult to extract feature points, it is difficult to obtain a good alignment effect.
  • Referring to FIG. 1, a schematic flowchart of an image processing method according to an example of the present disclosure is provided.
  • the method includes:
  • Step S100 obtain a first image and a second image; wherein both the first image and the second image include the same target object.
  • Step S200 in the first direction, adjust the position of the second pixel in the target object in the second image to the same position as the first pixel in the target object in the first image, to obtain a third image; wherein, the first The pixel and the second pixel are pixels at the same position in the target object.
  • Step S300: determine the target grid in the third image based on the reference grid in the first image; wherein the target grid includes a target position, and the target pixel value of the pixel corresponding to the target position and the reference pixel value of the pixel corresponding to the reference position in the reference grid satisfy the preset condition.
  • Step S400: according to the position correspondence between the reference grid and the target grid, adjust the position of the second pixel located in the target grid in the third image, in the second direction, to be the same as the position of the first pixel located in the reference grid in the first image; wherein the second direction is perpendicular to the first direction.
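The four steps can be illustrated with a deliberately simplified sketch in which plain per-direction translations stand in for the homography and grid transforms described later; the function name and the translation-only model are illustrative assumptions, not the patent's method:

```python
import numpy as np

def align_two_stage(second_pts, first_pts):
    """Align target-object pixel positions from the second image (N x 2
    arrays of (x, y)) to the first image, one direction at a time."""
    second_pts = np.asarray(second_pts, dtype=float)
    first_pts = np.asarray(first_pts, dtype=float)
    # Step S200: adjust only the first (vertical, y) direction -> third image.
    dy = np.mean(first_pts[:, 1] - second_pts[:, 1])
    third_pts = second_pts + np.array([0.0, dy])
    # Step S400: adjust only the second (horizontal, x) direction.
    dx = np.mean(first_pts[:, 0] - third_pts[:, 0])
    return third_pts + np.array([dx, 0.0])
```

The point of the decomposition is that each stage only ever solves a one-directional correspondence, which is what keeps the per-pixel computation in the second direction from being repeated for every pixel.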
  • The method of this example of the disclosure can be applied at least to user terminals; that is, the execution subject of the method can at least include a user terminal, and user terminals can include mobile terminals and fixed terminals.
  • Mobile terminals can include mobile phones, tablet computers, vehicle-mounted central control devices, wearable devices, smart devices, etc. Smart devices can also include smart office equipment and smart home equipment.
  • the first image and the second image acquired in step S100 may be two-dimensional plane images, and both the first image and the second image include at least one identical target object. That is, the first image and the second image include common image information and are not exactly the same two images.
  • the target object can be determined based on the actual image collection scene. For example, it can be the object of the captured image in any scene such as people, animals, plants, mountains, and rivers.
  • the first image and the second image may be images obtained under different image acquisition parameters; they may be images of the same target object acquired by the same image acquisition module under different image acquisition parameters.
  • the image acquisition parameters may be the field of view angle, the focal length, etc.
  • the first image and the second image may also be images of the same target object collected by different image acquisition modules.
  • the first image acquisition module and the second image acquisition module respectively capture the same scene under different image acquisition parameters.
  • the captured image contains the target object in the scene.
  • the image acquisition parameters can be determined according to the configuration parameters of the image acquisition module and the image acquisition angle.
  • the first image and the second image only need to include the same part of image information, that is, there is common image information between the first image and the second image, and this part of the common image information is the target object.
  • the target object is person A.
  • the presentation effects of person A in the first image and person A in the second image may be different.
  • the magnification factor of person A in the first image may be greater than the magnification factor of person A in the second image, and the position of person A in the first image may be different from the position of person A in the second image.
  • Part (a) of Figure 2 is a schematic diagram of a target object.
  • Part (b) of Figure 2 is a schematic diagram of the first image.
  • Part (c) of Figure 2 is a schematic diagram of the second image.
  • the target objects are trees and people
  • the first image and the second image are images of the target object at different magnifications
  • both the first image and the second image include the same target object.
  • the first image may be an image of the target object collected by an image acquisition module with a magnification of 1 times
  • the second image may be an image of the target object collected by an image acquisition module with a magnification of 3 times.
  • the first image is an image of the target object at 1x magnification (1X)
  • the second image is an image of the target object at 3x magnification (3X).
  • the first image includes more image information than the second image. That is, in addition to the image information included in the second image, the first image also includes image information that is not included in the second image.
  • the image information included in the second image is the image information common to the first image and the second image
  • the content displayed in the second image is the target object.
  • the second image includes the target object, namely the trees and people, as well as the background image other than the trees and people.
  • In step S200, after the first image and the second image are obtained, since any pixel in an image has a corresponding position, the position of a pixel in the image can be determined from its positions in any two directions; for example, through a coordinate system, the position of any pixel in the first image can be determined, and the position of any pixel in the second image can also be determined.
  • a coordinate system can be determined from two directions.
  • the first direction can be any direction, and the inclination angle of the first direction relative to the image is not limited, as long as the first direction is perpendicular to the second direction.
  • the first direction is a vertical direction
  • the second direction is a horizontal direction
  • the second direction is a vertical direction
  • the first direction is a horizontal direction, etc.
  • the pixels of the target object in the first image are first pixels, and the pixels of the target object in the second image are second pixels.
  • the pixel corresponding to the position in the first image is the first pixel
  • the pixel corresponding to the position in the second image is the second pixel. That is, the first pixel and the second pixel are pixels at the same position in the target object.
  • the first pixel and the second pixel match, and both correspond to the same position in the target object.
  • the pixel of the tree tip in the first image is the first pixel
  • the pixel of the tree tip in the second image is the second pixel
  • the position of the second pixel in the target object in the second image is adjusted to the same position as the position of the first pixel in the target object in the first image, and the adjusted second image is used as the third image.
  • each pixel of the target object in the third image is aligned in the first direction with the corresponding pixel of the target object in the first image, and the aligned first pixel and second pixel have the same position in the first direction.
  • the first direction is the vertical direction
  • the position in the vertical direction is represented by y.
  • the position of the tree tip in the first direction of the corresponding pixel in the first image is y1
  • the position of the tree tip in the first direction of the corresponding pixel in the second image is y2.
  • y2 is adjusted to be the same as y1.
  • the position of each corresponding pixel of the target object in the first direction in the second image is adjusted to be the same as the position in the first direction in the first image.
  • the position of each corresponding pixel of the target object in the second image is aligned with the position of the corresponding pixel in the first image, thereby aligning, in the first direction, the position of the entire target object in the second image with its position in the first image.
  • the target grid in the third image is determined based on the reference grid in the first image.
  • the first image has a preset reference grid.
  • the reference grid may also be called a reference window.
  • the reference grid may be a virtual grid used to divide the first image into regions. There may be multiple reference grids in the first image, and the size and number of the reference grids may be determined according to actual usage requirements.
  • the shape of the reference grid can be a square grid or a triangular grid, etc., and can be determined according to actual needs. For example, there are N*N uniformly distributed square grids in the first image.
  • the square grids may be regular square grids, and the reference grids divide the first image into N*N grid areas.
  • the reference grid includes a reference position, which can be the corner point of the grid or the midpoint of the edge of the grid, etc., and can be adjusted according to actual needs.
  • the reference position corresponds to a pixel in the first image, and the pixel value of the pixel corresponding to the reference position in the first image can be determined.
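The corner points (reference positions) of a uniform square grid could, for instance, be generated as follows; this is a hedged sketch, and the grid layout and names are assumptions rather than the patent's specification:

```python
import numpy as np

def grid_corner_points(height, width, m, n):
    """Corner points of an m x n uniform grid over an image of the
    given size, returned as (row, col) reference positions."""
    ys = np.linspace(0, height - 1, m + 1)
    xs = np.linspace(0, width - 1, n + 1)
    return [(int(round(y)), int(round(x))) for y in ys for x in xs]
```

An m x n grid has (m + 1) * (n + 1) corner points; using edge midpoints instead, as the text also allows, would simply change which lattice positions are sampled.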
  • the target grid includes the target position, the target pixel value of the pixel corresponding to the target position, and the reference pixel value of the pixel corresponding to the reference position in the reference grid that meets the preset conditions.
  • the target grid can be determined by the target position, and the preset condition can be determined according to actual needs.
  • the preset condition at least includes: the sum of the absolute values of the differences between the reference pixel value and the target pixel value is less than the target threshold.
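The preset condition can be sketched as a sum-of-absolute-differences (SAD) test; the names are illustrative and the threshold choice is an assumption:

```python
import numpy as np

def satisfies_preset_condition(reference_values, target_values, threshold):
    """True when the sum of absolute differences between the reference
    pixel values and the target pixel values is below the threshold."""
    ref = np.asarray(reference_values, dtype=float)
    tgt = np.asarray(target_values, dtype=float)
    return float(np.abs(ref - tgt).sum()) < threshold
```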
  • Each reference grid corresponds to a target grid. This embodiment takes one of the grids as an example for explanation.
  • the corresponding target grid can be determined based on each reference grid.
  • In step S400, after the target grid in the third image is determined, the position of the second pixel in the third image is adjusted in the second direction, according to the position correspondence between the reference grid and the target grid, to be the same as the position of the first pixel in the first image.
  • the position correspondence between the reference grid and the target grid can be determined based on the reference position and the target position.
  • the detailed determination process is not limited here and may refer to subsequent embodiments, as long as the position correspondence can be determined.
  • the second direction is the horizontal direction
  • the position in the horizontal direction is represented by x.
  • the position of the tree tip in the second direction of the corresponding pixel in the first image is x1, and the position of the tree tip in the second direction of the corresponding pixel in the third image is x2.
  • x2 is adjusted to be the same as x1.
  • the position of each corresponding pixel of the target object in the second direction in the third image is adjusted to be the same as its position in the second direction in the first image.
  • the position of the second pixel in both the first direction and the second direction is adjusted to be the same as the position of the first pixel in the first image, so that the position of each pixel of the target object in the second image is the same as its position in the first image; when the positions are the same, the target object in the second image is aligned with the target object in the first image.
  • Before the adjustment in the second direction, the pixels are not yet aligned in that direction; therefore, when determining the target grid in step S300, the pixel value of the pixel corresponding to the target position and the pixel value of the pixel corresponding to the reference position can be compared separately, which reduces the computational complexity of determining the target grid and improves efficiency and accuracy.
  • the target grid is determined according to the pixel values of the pixels corresponding to the reference positions in the reference grid, and the pixel positions are then adjusted in the second direction according to the position correspondence between the target grid and the reference grid. This reduces the dependence on feature points of the target object in the third image: the target grid does not need to be determined based on the positions of feature points in the target object, nor does alignment in the second direction need to be based on the positions of feature points of the target object in the third image.
  • Step S200 in the first direction, adjust the position of the second pixel in the target object in the second image to the same position as the first pixel in the target object in the first image, to obtain a third image, including:
  • Step S201 Extract the first feature point of the target object in the first image.
  • Step S202 extract the second feature point of the target object in the second image; wherein the first feature point and the second feature point are matching feature point pairs at the same position in the target object.
  • step S201 and step S202 can be executed at the same time, or any one of them can be executed first.
  • Step S203 Determine the mapping transformation relationship in the first direction based on the first feature point and the second feature point.
  • Step S204: According to the mapping transformation relationship in the first direction, adjust the position of the second pixel of the target object in the second image, in the first direction, to be the same as the position of the first pixel of the target object in the first image, to obtain the third image.
  • the first feature point and the second feature point are a matching feature point pair corresponding to the same feature at the same position in the target object: for the same feature of the target object, it is the first feature point in the first image and the second feature point in the second image, and the first feature point and the second feature point are a pair of matching feature points.
  • part (a) of Figure 4 is a schematic diagram of a first image
  • part (b) of Figure 4 is a schematic diagram of a second image
  • the house and the person are the target objects.
  • the dots on the house and on the person shown in the first image are the first feature points.
  • the dots on the house and on the person shown in the second image are the second feature points.
  • each first feature point and its matching second feature point are connected by a dotted line. For a given first feature point of the target object in the first image, there is a second feature point matching it in the second image.
  • the feature points of the feet in the first image are the first feature points
  • the feature points of the feet in the second image are the second feature points
  • the dotted lines represent the features of the feet.
  • the first feature point and the second feature point are connected, and the first feature point and the second feature point, which are both feature points of the foot, are a matching pair of feature points. The same applies to other feature points in Figure 4.
  • the feature points of the feet in the first image and the feature points of the character's hands in the second image are not matching feature point pairs.
  • the feature points of the feet in the first image are first feature points, but the feature points of the character's hands in the second image are not second feature points matching those first feature points.
  • there may be multiple first feature points and multiple second feature points, and the numbers of first feature points and second feature points may be the same.
  • the method of extracting the first feature point and the second feature point can be determined according to the business needs, as long as the first feature point and the second feature point can be extracted, for example, the feature point extraction algorithm includes scale-invariant feature transformation (Scale-invariant feature transform, SIFT) algorithm and ORB (ORiented Brief) feature extraction algorithm, etc.
  • step S203 the mapping transformation relationship in the first direction is determined based on the matching first feature points and second feature points.
  • the position of the first feature point in the first image and the position of the second feature point in the second image can be obtained; according to the positions of multiple pairs of matching first feature points and second feature points, the mapping transformation relationship in the first direction can be determined.
  • the mapping transformation relationship in the first direction can be determined based on the positions of four pairs of matching first feature points and second feature points, and the mapping transformation relationship can be a homography transformation matrix.
  • the homography transformation matrix can be a 3*3 matrix, which can be expressed as: H1 = [h0, h1, h2; h3, h4, h5; h6, h7, h8], where H1 represents the homography transformation matrix in the first direction, and h0 to h8 represent the elements of H1.
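A homography such as H1 can be estimated from at least four matching point pairs with the direct linear transform (DLT); the NumPy sketch below is illustrative, not the patent's exact formulation, and solves for the nine elements h0..h8 up to scale:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate a 3x3 homography H mapping src points to dst points
    (at least 4 pairs) via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the stacked constraint matrix,
    # i.e. the right-singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In practice a robust estimator (e.g. RANSAC, as in `cv2.findHomography`) would be layered on top to tolerate mismatched feature pairs.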
  • the mapping transformation relationship can also be another form of transformation matrix, as long as, according to the positions of the matching first feature points and second feature points, the position of the second pixel in the second image can be adjusted in the first direction.
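As noted above, four matched point pairs suffice to determine a homography. The following direct-linear-transform (DLT) sketch in numpy is illustrative, not the patent's exact solver:

```python
import numpy as np

def homography_from_pairs(src, dst):
    """Direct Linear Transform: 3x3 homography H with dst ~ H @ src.

    src, dst: (N, 2) arrays of matching points, N >= 4, as in the
    text's four matched feature-point pairs. H is scaled so H[2,2] = 1.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=np.float64)
    # The null-space vector of A (smallest singular value) holds h0..h8.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Example: recover a known scale-plus-translation homography.
H_true = np.array([[1.5, 0.0, 2.0],
                   [0.0, 1.5, -3.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=np.float64)
p = (H_true @ np.hstack([src, np.ones((4, 1))]).T).T
dst = p[:, :2] / p[:, 2:3]
H_est = homography_from_pairs(src, dst)
```

Each point pair contributes two linear constraints, which is why four pairs pin down the eight free parameters of the matrix.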
  • in step S204, after the mapping transformation relationship in the first direction is obtained, the position of the second pixel of the target object in the second image is adjusted in the first direction, according to that mapping transformation relationship, to be the same as the position of the first pixel of the target object in the first image, and the third image is obtained.
  • the homography transformation matrix H1 in the first direction may also satisfy the following condition:

        (h3*x_i + h4*y_i + h5) / (h6*x_i + h7*y_i + h8) = y_i'

  • i represents the i-th matched pair of feature points, that is, the matching i-th first feature point and i-th second feature point;
  • x_i represents the coordinate of the first feature point in the i-th pair of feature points in the second direction;
  • y_i represents the coordinate of the first feature point in the i-th pair of feature points in the first direction;
  • y_i' represents the coordinate of the second feature point in the i-th pair of feature points in the first direction.
  • the position of the second pixel in the target object in the second image can be adjusted in the first direction to the same position as the position of the first pixel in the target object in the first image, Get the third image.
  • the first direction is a vertical direction.
  • the homography transformation matrix H1 can also satisfy the following condition:

        (h3*x_i' + h4*y_i' + h5) / (h6*x_i' + h7*y_i' + h8) = y_i

  • x_i' represents the coordinate of the second feature point in the i-th pair of feature points in the second direction.
  • FIG. 5 is a schematic diagram of alignment in the first direction.
  • a schematic diagram is shown in which the position of each pixel of the target object in the first direction in the second image shown in part (b) of FIG. 4 is aligned with the position of the corresponding pixel in the first image.
  • the first direction here is the vertical direction.
  • after the alignment, the pixels corresponding to the target object are located on the same rows, achieving row alignment.
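The row alignment just described amounts to warping the second image with the first-direction mapping. A minimal inverse-mapping warp sketch (nearest neighbour, numpy only; the H1 used here is a hypothetical pure vertical shift, not a matrix from the patent):

```python
import numpy as np

def warp_homography(img, H):
    """Warp img so that output(x, y) = img(H^-1 * (x, y)), nearest
    neighbour, numpy only.

    A minimal stand-in for the row-alignment step: applying the
    first-direction mapping to every pixel of the second image.
    """
    Hinv = np.linalg.inv(H)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    sx = np.round(src[0] / src[2]).astype(int)  # source columns
    sy = np.round(src[1] / src[2]).astype(int)  # source rows
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

# Hypothetical H1: a shift of 2 pixels in the first (vertical) direction.
H1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 2.0],
               [0.0, 0.0, 1.0]])
img = np.zeros((8, 8))
img[3, :] = 1.0                     # a bright row in the "second image"
aligned = warp_homography(img, H1)  # the row lands two rows lower
```

Inverse mapping (sampling the source at H^-1 of each output pixel) avoids holes in the warped result, which is why it is the usual implementation choice.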
  • Step S300 determine the target grid in the third image based on the reference grid in the first image, including:
  • Step S301 Create M*N reference grids of preset sizes in the first image.
  • Step S302: based on the reference pixel values of the pixels corresponding to the K reference positions in the reference grid, determine, in the third image, K target pixel values that satisfy the preset condition with the K reference pixel values.
  • Step S303 Based on the K target pixel values, K target pixels corresponding to the target pixel values are determined in the third image.
  • Step S304 determine the target grid based on K target pixels; M, N and K are all positive integers. Through step S300, the target grid matching each reference grid can be determined.
  • the reference grid in the first image may be created in advance or may be created when step S300 is performed.
  • the size of the reference grid can be determined according to actual needs.
  • the size, shape, and quantity of the reference grids can be determined according to preset configuration parameters.
  • M and N can be equal.
  • the reference positions and the number of reference positions in the reference grid can also be determined according to actual needs. For example, when the reference grid is a square, the reference positions are the four corner points of the reference grid, that is, four reference positions are included.
  • part (a) of FIG. 7 is a first image including a reference grid
  • part (b) of FIG. 7 is the third image including a target grid.
  • Part (a) of Figure 7 shows the reference grids.
  • the square grid marked by the four dots around the character is one of the reference grids, and the positions of the four dots are its reference positions.
  • for the other reference grids, the four corner points of each reference grid are likewise the reference positions.
  • the number of reference grids is 5*5, and the first image is divided into 5*5 reference grids.
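The division into M*N reference grids, with corner points as reference positions, can be sketched as follows (a hypothetical helper; the 5*5 split and the image size are illustrative):

```python
import numpy as np

def make_reference_grids(height, width, m, n):
    """Split an image of size (height, width) into m x n reference grids.

    Returns one entry per grid: its four corner points, the reference
    positions of the text, as (x, y) pairs ordered TL, TR, BR, BL.
    """
    xs = np.linspace(0, width, n + 1).astype(int).tolist()
    ys = np.linspace(0, height, m + 1).astype(int).tolist()
    cells = []
    for i in range(m):
        for j in range(n):
            cells.append([(xs[j], ys[i]), (xs[j + 1], ys[i]),
                          (xs[j + 1], ys[i + 1]), (xs[j], ys[i + 1])])
    return cells

# The 5*5 division described in the text, for a hypothetical 100x100 image.
cells = make_reference_grids(100, 100, 5, 5)
```

Each cell's four corners are the K = 4 reference positions used in the matching steps that follow.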
  • step S302 is explained by taking one of the reference grids as an example: according to the reference pixel values of the pixels corresponding to the K reference positions in the reference grid, K target pixel values that satisfy the preset condition with the K reference pixel values are determined in the third image.
  • the pixel corresponding to the reference position in the first image can be determined, and then the pixel value of the corresponding pixel in the first image for each reference position can be determined. Then, based on the preset conditions and the pixel values of the corresponding pixels in the K reference positions in the first image, K target pixel values that meet the preset conditions with the K reference pixel values in the third image can be determined.
  • depending on the preset conditions, the determined target pixel values may also differ; the preset conditions can be determined according to actual needs.
  • K target pixels corresponding to the K target pixel values can be determined in the third image based on the K target pixel values.
  • One target pixel value can correspond to one target pixel.
  • the target grid is determined based on the K target pixels; M, N, and K are all positive integers.
  • the position of the target pixel in the third image is the target position in the target grid, and the target grid can be determined based on the target position. For example, one target pixel corresponds to one target position, K target pixels correspond to K target positions, and the area surrounded by the K target positions is determined as the target grid.
  • the pixel values of the pixels corresponding to the four dots around the character are the target pixel values;
  • the pixels corresponding to the four dots are the target pixels;
  • the square area enclosed by the target pixels is the target grid.
  • the process of determining the corresponding target grid is the same as the above process, that is, the above process applies to each reference grid.
  • a target position corresponds to a reference position, and in the corresponding target position and reference position, the position of the target pixel and the position of the reference pixel are the same in the first direction.
  • the reference position corresponding to the upper left corner of the reference grid is recorded as the first reference position, and the reference position corresponding to the upper right corner is recorded as the second reference position.
  • the position corresponding to the upper left corner of the target grid is recorded as the first target position, and the position corresponding to the upper right corner is recorded as the second target position.
  • the first direction is the vertical direction
  • the first target position and the first reference position are the same in the first direction, that is, they are row aligned and on the same row.
  • the second target position and the second reference position are the same in the first direction, that is, row aligned, on the same row.
  • only a one-dimensional variable, the position in the second direction, remains to be adjusted, which makes it convenient to adjust the position in the second direction so that the target position and the reference position become the same in both the first direction and the second direction.
  • this avoids the large amount of calculation and the low position-adjustment accuracy that would arise in step S400 if the corresponding target position and reference position differed in both the first direction and the second direction; thus the amount of calculation is reduced and the accuracy of position adjustment is improved.
  • FIG. 8 is a schematic diagram of another image processing method, the method includes:
  • Step S10 determine the first average pixel value of each pixel in the preset area; wherein the preset area includes: an area with the pixel corresponding to the reference position as the center and the preset number of pixels as the radius;
  • Step S20 determine the first average pixel value as the reference pixel value.
  • the reference pixel value is determined by combining the pixel value of the pixel corresponding to the reference position and the pixel values of the surrounding pixels corresponding to the reference position.
  • the reference position may correspond to a certain pixel in the first image.
  • the pixel values of the pixels in the preset area are used as the basis;
  • the average of these pixel values is determined as the reference pixel value;
  • the average value of the pixels in the preset area is recorded as the first average pixel value.
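Steps S10 and S20 can be sketched as a windowed mean. The patent only says the preset area is centred on the pixel with a given pixel radius; a square window is used below as a simplifying assumption:

```python
import numpy as np

def neighborhood_average(img, x, y, radius):
    """Average pixel value over the window centred on (x, y).

    Sketch of the 'first average pixel value' of steps S10/S20. The
    patent leaves the exact area shape open; a square window of the
    given radius, clipped at the image border, is assumed here.
    """
    h, w = img.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    return float(img[y0:y1, x0:x1].mean())

img = np.arange(25, dtype=np.float64).reshape(5, 5)
avg = neighborhood_average(img, x=2, y=2, radius=1)  # mean of the 3x3 centre
```

Averaging over the neighbourhood rather than reading a single pixel is what makes the later matching robust to pixel-level noise.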
  • step S302 includes:
  • Step S3021 determine K candidate pixel values in the third image.
  • Step S3022 determine the second average pixel value of each pixel in the candidate area; wherein the candidate area includes: an area with the pixel corresponding to the candidate pixel value as the center and a preset number of pixels as the radius.
  • Step S3023: determine, as the target pixel values, the candidate pixel values corresponding to the K second average pixel values that satisfy the preset condition with the K reference pixel values.
  • K candidate pixel values are first determined in the third image, and then the average pixel value of the pixels in the candidate area corresponding to each candidate pixel value is determined, that is, the second average pixel value.
  • One second average pixel value corresponds to one candidate pixel value, and the corresponding K candidate pixel values can be determined based on the K second average pixel values.
  • the candidate pixel values corresponding to the K second average pixel values that satisfy the preset condition with the K reference pixel values are determined as the target pixel values.
  • this method improves the accuracy of determining the target pixel value and avoids the low accuracy caused by determining the target pixel value from the pixel value of a single pixel, thereby improving the accuracy of determining the target grid, and in turn the accuracy with which the position of each pixel of the target object in the third image is adjusted to the position of the corresponding pixel of the target object in the first image.
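Because the images are already row aligned, steps S3021 to S3023 amount to a search along the matching row, scoring candidates by the sum of absolute pixel-value differences (the preset condition named later in the text). A minimal numpy sketch with hypothetical images:

```python
import numpy as np

def row_search(ref_img, third_img, x_ref, y, radius=1):
    """Find, on row y of the row-aligned third image, the column whose
    patch best matches the patch around (x_ref, y) in the first image.

    Candidates are scored by the sum of absolute pixel differences,
    mirroring the preset condition described in the text.
    """
    def patch(img, x):
        return img[y - radius:y + radius + 1, x - radius:x + radius + 1]

    ref_patch = patch(ref_img, x_ref)
    best_x, best_sad = None, np.inf
    # Skip the borders so every candidate patch is full-sized.
    for x in range(radius, third_img.shape[1] - radius):
        sad = np.abs(patch(third_img, x) - ref_patch).sum()
        if sad < best_sad:
            best_x, best_sad = x, sad
    return best_x

ref = np.zeros((5, 9)); ref[:, 3] = 1.0      # a bright column at x = 3
third = np.zeros((5, 9)); third[:, 6] = 1.0  # the same feature at x = 6
x_target = row_search(ref, third, x_ref=3, y=2)
```

Restricting the search to a single row is exactly the saving the first-direction alignment buys: the match is found with a one-dimensional scan instead of a two-dimensional one.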
  • step S400, adjusting, according to the position correspondence relationship between the reference grid and the target grid, in the second direction, the position of the second pixel located in the target grid in the third image to be the same as the position of the first pixel located in the reference grid in the first image, includes:
  • Step S401 determine the position correspondence relationship according to the reference position and the target position.
  • the position correspondence relationship includes: the mapping transformation relationship in the second direction;
  • Step S402: according to the mapping transformation relationship in the second direction, adjust the position of the second pixel located in the target grid in the third image, in the second direction, to the same position as that of the first pixel located in the reference grid in the first image.
  • the position correspondence can be determined.
  • a reference grid and a matching target grid form a grid pair.
  • the position correspondence relationship of each grid pair can be determined, that is, the mapping transformation relationship in the second direction.
  • the position correspondence relationship in the second direction can be determined based on the target position in the target grid and the reference position in the reference grid.
  • the position correspondence relationship in the second direction can thereby be determined, and the correspondence relationship includes: the mapping transformation relationship in the second direction.
  • the mapping transformation relationship in the second direction may be a transformation relationship in the form of a homography transformation matrix or the like.
  • according to the mapping transformation relationship in the second direction, the position of the second pixel of the target object located in the target grid in the third image is adjusted so that, in the second direction, it is the same as the position of the corresponding first pixel located in the reference grid in the first image.
  • that is, according to the mapping transformation relationship in the second direction, the position of each pixel in the target grid is adjusted in the second direction, so that the position of each pixel of the target object located in the target grid is aligned, in the second direction, with the position of the corresponding pixel of the target object in the reference grid.
  • for example, for the target grid marked by the four dots around the character shown in part (b) of Figure 7, the position of each pixel in the target grid in the second direction is aligned with the position of the corresponding pixel in the reference grid marked by the four dots around the character shown in part (a) of Figure 7.
  • for instance, the position in the second direction of the pixel corresponding to the left fingertip of the person shown in part (b) of Figure 7 is adjusted to be the same as, that is, aligned with, the position in the second direction of the corresponding pixel of the person's left fingertip shown in part (a) of Figure 7.
  • likewise, the position in the second direction of the pixel corresponding to the person's head shown in part (b) of Figure 7 is adjusted to be the same as, that is, aligned with, the position in the second direction of the corresponding pixel of the person's head shown in part (a) of Figure 7.
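Step S402's per-grid adjustment can be sketched as follows. The patent computes a per-grid homography; as a simplified, hedged stand-in, this sketch interpolates the horizontal corner offsets bilinearly inside the cell (all names and the example cell are illustrative):

```python
import numpy as np

def column_adjust(third, ref_corners, tgt_corners):
    """Shift the pixels of one grid cell in the second (horizontal)
    direction so the cell's content moves from the target corners to
    the reference corners.

    Simplified stand-in for step S402: the patent uses a per-grid
    homography; here the horizontal corner offsets are interpolated
    bilinearly inside the cell (rows are assumed already aligned).
    """
    # Corners are (x, y) pairs ordered TL, TR, BR, BL.
    (xl, yt), (xr, _), (_, yb), _ = ref_corners
    # Horizontal offset (reference x minus target x) at each corner.
    off = [rc[0] - tc[0] for rc, tc in zip(ref_corners, tgt_corners)]
    out = np.zeros_like(third)
    w = third.shape[1]
    for y in range(yt, yb):
        v = (y - yt) / max(yb - yt, 1)
        left = (1 - v) * off[0] + v * off[3]   # offset along left edge
        right = (1 - v) * off[1] + v * off[2]  # offset along right edge
        for x in range(xl, xr):
            u = (x - xl) / max(xr - xl, 1)
            dx = (1 - u) * left + u * right
            sx = int(round(x - dx))            # sample the target position
            if 0 <= sx < w:
                out[y, x] = third[y, sx]
    return out

# Hypothetical cell: content one column to the left of where it should be.
third = np.zeros((4, 8)); third[:, 3] = 1.0
ref_c = [(2, 0), (6, 0), (6, 4), (2, 4)]   # reference positions
tgt_c = [(1, 0), (5, 0), (5, 4), (1, 4)]   # matched target positions
out = column_adjust(third, ref_c, tgt_c)
```

Doing this independently per grid cell lets the correction vary across the image, which a single global matrix could not achieve.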
  • the mapping transformation relationship in the second direction can be determined based on the positions of four pairs of corresponding target positions and reference positions, and the mapping transformation relationship can be a homography transformation matrix.
  • the homography transformation matrix can be a 3*3 matrix, which can be expressed by the following formula:

        H2 = | h0  h1  h2 |
             | h3  h4  h5 |
             | h6  h7  h8 |

  • H2 represents the homography transformation matrix in the second direction; h0 to h8 represent the elements in H2.
  • the homography transformation matrix H2 can also satisfy the following condition, where x_i and y_i are the coordinates of the i-th reference position in the second and first directions, and x_i' is the coordinate of the corresponding i-th target position in the second direction:

        (h0*x_i + h1*y_i + h2) / (h6*x_i + h7*y_i + h8) = x_i'
  • the position of the second pixel located in the target grid in the third image in the second direction can be adjusted to be the same as the position of the first pixel located in the reference grid in the first image. are in the same position.
  • the homography transformation matrix H2 can also satisfy the following condition:

        (h0*x_i' + h1*y_i' + h2) / (h6*x_i' + h7*y_i' + h8) = x_i
  • FIG. 11 is a schematic diagram of an image processing device, the device includes:
  • Image acquisition module 1 used to acquire a first image and a second image; wherein the first image and the second image include the same target object;
  • the first processing module 2 is configured to adjust the position of the second pixel in the target object in the second image, in the first direction, to be the same as the position of the first pixel in the target object in the first image, to obtain a third image; wherein the first pixel and the second pixel are pixels at the same position in the target object;
  • Target grid determination module 3, configured to determine the target grid in the third image according to the reference grid in the first image; wherein the target grid includes target positions, and the target pixel values of the pixels corresponding to the target positions satisfy the preset condition with the reference pixel values of the pixels corresponding to the N reference positions in the reference grid;
  • the second processing module 4 is configured to adjust, according to the positional relationship between the reference grid and the target grid, the position of the second pixels located in the target grid in the third image, in the second direction, to the same position as the first pixels located within the reference grid in the first image; wherein the second direction is perpendicular to the first direction.
  • the target grid determination module 3 includes:
  • a creation unit configured to create M*N reference grids of preset sizes in the first image
  • a target pixel value determination unit, configured to determine, in the third image, based on the reference pixel values of the pixels corresponding to the K reference positions in the reference grid, K target pixel values that satisfy the preset condition with the K reference pixel values;
  • a target pixel determination unit configured to determine K target pixels corresponding to the target pixel value in the third image based on the K target pixel values
  • a target grid determining unit is used to determine the target grid based on K target pixels; M, N and K are all positive integers.
  • the device further includes:
  • the first average pixel value determination module is used to determine the first average pixel value of each pixel in the preset area; wherein the preset area includes: an area with the pixel corresponding to the reference position as the center and a preset number of pixels as the radius;
  • a reference pixel value determination module configured to determine the first average pixel value as the reference pixel value.
  • the target pixel value determination unit includes:
  • a candidate pixel value determination subunit, used to determine K candidate pixel values in the third image;
  • an average pixel value determination subunit, used to determine the second average pixel value of each pixel in the candidate area; wherein the candidate area includes: an area centered on the pixel corresponding to the candidate pixel value, with a preset number of pixels as the radius;
  • a target pixel value determination subunit, configured to determine, as the target pixel values, the candidate pixel values corresponding to the K second average pixel values that satisfy the preset condition with the K reference pixel values.
  • the reference grid at least includes: a square grid or a triangular grid; the reference position at least includes: a corner point of the grid or a midpoint of an edge of the grid.
  • the preset condition at least includes: the sum of absolute values of differences between the reference pixel value and the target pixel value is less than a target threshold.
  • the second processing module 4 includes:
  • a first position correspondence determination unit configured to determine the position correspondence according to the reference position and the target position; the position correspondence includes: a mapping transformation relationship in the second direction;
  • a first processing unit, configured to adjust, according to the mapping transformation relationship in the second direction, the position of the second pixel located in the target grid in the third image, in the second direction, to the same position as the first pixel located within the reference grid in the first image.
  • the mapping transformation relationship in the second direction includes: a homography transformation matrix in the second direction.
  • the first processing module 2 includes:
  • a first feature point extraction unit configured to extract the first feature point of the target object in the first image
  • a second feature point extraction unit is used to extract the second feature point of the target object in the second image; wherein the first feature point and the second feature point are matching feature point pairs at the same position in the target object;
  • a second position correspondence relationship determination unit configured to determine the mapping transformation relationship in the first direction based on the first feature point and the second feature point;
  • a second processing unit, configured to adjust, according to the mapping transformation relationship in the first direction, the position of the second pixel in the target object in the second image, in the first direction, to be the same as the position of the first pixel in the target object in the first image, to obtain the third image.
  • the mapping transformation relationship in the first direction includes: a homography transformation matrix in the first direction.
  • an electronic device is also provided, including a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the executable instructions to implement the method described in any of the above embodiments.
  • a non-transitory computer-readable storage medium is also provided.
  • computer-executable instructions are stored in the computer-readable storage medium; when the computer-executable instructions are executed by a processor, the method described in any of the above embodiments is implemented.
  • the disclosed example can be applied to the field of image alignment, and can be used to align images of the same scene captured by a single camera at different viewing angles, or can be used to align images of the same scene captured by multiple cameras.
  • the aligned images can be used as image input for multi-frame HDR and super-resolution algorithms, and can also be used as input for multi-camera image fusion algorithms.
  • the method includes:
  • Step 1 Extract feature points of the target object in the first image and the second image.
  • the same feature of the target object forms a feature point pair in the first image and the second image.
  • Step 2: by matching the feature point pairs, determine the homography transformation matrix that adjusts, in the first direction, the position of each pixel of the target object in the second image to the position of the corresponding pixel of the target object in the first image. For example, when the first direction is the vertical direction, this is the homography matrix for global row alignment of the image, which is used to achieve row alignment between the target object in the first image and the target object in the second image and obtain the third image.
  • Step 3: determine the corresponding target grid in the third image based on the reference grid in the first image. This includes dividing the first image into reference grids and, on the third image, that is, the row-aligned image, obtaining the target grid corresponding to each grid point through row search and matching of the grid points.
  • Step 4: for each reference grid and the corresponding target grid, calculate the position transformation relationship in the second direction between the target object in the first image and in the third image, using the correspondence between the reference positions in the reference grid and the target positions in the target grid, and adjust the position of the target object in the third image according to this position transformation relationship.
  • Step 1 Extract matching feature points of the target object in the first image and the target object in the second image.
  • Step 2 Calculate the global row-aligned homography matrix and perform row alignment on the image.
  • the global homography transformation matrix H, specifically a 3*3 matrix, has the following form:

        H = | h0  h1  h2 |
            | h3  h4  h5 |
            | h6  h7  h8 |

  • h0 to h8 represent the elements of H.
  • the homography matrix contains a total of 8 degrees of freedom, and each matching feature point pair from step 1 provides two constraints; therefore, at least four matching feature point pairs are required to obtain the row-aligned homography matrix. At the same time, in order to achieve row alignment of the image, the obtained H matrix also needs to ensure that, in the transformed image, the coordinates of the matching feature point pairs in the row direction (i.e. their y coordinates) are equal; that is, the H matrix satisfies:

        (h3*x_i + h4*y_i + h5) / (h6*x_i + h7*y_i + h8) = y_i'

    where (x_i, y_i) is the i-th feature point in the first image and y_i' is the row coordinate of its matching feature point.
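The row-alignment requirement of step 2 can be checked numerically. The sketch below assumes H maps first-image points toward their matches and reports the residual in the row direction (the helper and the sample matrix are illustrative):

```python
import numpy as np

def row_residuals(H, pts_src, pts_dst):
    """Row-alignment residuals for matched point pairs.

    For each pair, applies H to the source point (with the perspective
    divide) and returns the difference between the mapped y coordinate
    and the match's y coordinate; zero residual means rows are aligned.
    """
    ones = np.ones((len(pts_src), 1))
    p = (H @ np.hstack([pts_src, ones]).T).T
    y_mapped = p[:, 1] / p[:, 2]
    return y_mapped - np.asarray(pts_dst)[:, 1]

# A pure vertical shift by 4 rows aligns these hypothetical pairs exactly.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])
src = np.array([[1.0, 2.0], [5.0, 7.0]])
dst = np.array([[9.0, 6.0], [3.0, 11.0]])
res = row_residuals(H, src, dst)
```

Note that the x coordinates of the pairs are left free: only the row coordinates are constrained, which is exactly the 1-D freedom the later column-alignment step resolves.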
  • Step 3 On the third image after row alignment, use grid point row search to determine the target grid.
  • Step 4 Use the reference position of the reference grid and the target position of the target grid to determine the position transformation relationship between the target object in the first image and the target object in the third image.
  • from step 3, the correspondence between the reference positions in each reference grid and the target positions in the corresponding target grid can be obtained. Since each grid has four grid points, a homography transformation can be used to compute the position correspondence between the pixels in each reference grid and those in the corresponding target grid.
  • the form of this homography transformation is similar to the H matrix in step 2. By mapping the pixels in each target grid with the homography transformation, the third image is aligned with the pixel positions of the target object in the first image.
  • Figure 13 is a block diagram of a terminal device according to an exemplary embodiment.
  • the terminal device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the terminal device may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and communications component 816.
  • the processing component 802 generally controls the overall operations of the terminal device, such as operations associated with presentations, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • the memory 804 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • Power component 806 provides power to various components of the terminal device.
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to end devices.
  • Multimedia component 808 includes a screen that provides an output interface between the terminal device and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. A touch sensor can not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • multimedia component 808 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera can receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have a focal length and optical zoom capabilities.
  • Audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the terminal device is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 804 or sent via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 814 includes one or more sensors for providing various aspects of status assessment for the terminal device.
  • the sensor component 814 can detect the open/closed state of the terminal device, the relative positioning of components, such as the display and keypad of the terminal device, and the sensor component 814 can also detect the position change of the terminal device or a component of the terminal device, The presence or absence of user contact with the terminal device, terminal device orientation or acceleration/deceleration and temperature changes of the terminal device.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the terminal device and other devices.
  • Terminal devices can access wireless networks based on communication standards, such as WiFi, 4G or 5G, or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • communications component 816 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the terminal device may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for executing the above method.

Abstract

Embodiments of the present disclosure relate to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a first image and a second image that contain a same target object; adjusting the position of a second pixel of the target object in the second image to the same position, in a first direction, as a first pixel of the target object in the first image, so as to obtain a third image, the first pixel and the second pixel being pixels at a same position in the target object; determining a target grid in the third image according to a reference grid in the first image, the target grid containing a target position, and the target pixel value of a pixel corresponding to the target position and the reference pixel value of a pixel corresponding to a reference position in the reference grid satisfying a preset condition; and adjusting the position of the second pixel in the third image to the same position, in a second direction, as the first pixel in the first image, according to the position correspondence between the reference grid and the target grid. By means of the method, the alignment effect of the target object in the first image and the second image is improved.
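The two-pass alignment the abstract describes (alignment along a first direction, then grid matching along a second direction) can be sketched roughly as follows. This is only an illustrative simplification, not the claimed method itself: the helper names (`align_rows`, `match_grid`), the binary object masks, the choice to align the topmost object pixel of each column, the single-pixel "grids", and the absolute-difference threshold used as the preset condition are all assumptions made for brevity.

```python
import numpy as np

def align_rows(img_src, mask_ref, mask_src):
    """First pass: shift each column of the source image so that the
    target object lines up with the reference object along the vertical
    (first) direction. Simplifying assumption: align the topmost object
    pixel of every column, using binary object masks."""
    aligned = np.zeros_like(img_src)
    h, w = img_src.shape
    for x in range(w):
        ref_ys = np.where(mask_ref[:, x])[0]
        src_ys = np.where(mask_src[:, x])[0]
        if len(ref_ys) == 0 or len(src_ys) == 0:
            # No object pixels in this column: copy it unchanged.
            aligned[:, x] = img_src[:, x]
            continue
        shift = ref_ys[0] - src_ys[0]
        aligned[:, x] = np.roll(img_src[:, x], shift)
    return aligned

def match_grid(img_ref, img_third, y, x, search, tol):
    """Second pass (single-pixel-grid simplification): for the reference
    position (y, x), scan horizontally (second direction) for the target
    position whose pixel value satisfies the preset condition (absolute
    difference within `tol`), returning the best horizontal offset.
    Returns 0 if no candidate satisfies the condition."""
    ref_val = float(img_ref[y, x])
    best_dx, best_diff = 0, None
    _, w = img_third.shape
    for dx in range(-search, search + 1):
        tx = x + dx
        if 0 <= tx < w:
            diff = abs(float(img_third[y, tx]) - ref_val)
            if diff <= tol and (best_diff is None or diff < best_diff):
                best_dx, best_diff = dx, diff
    return best_dx
```

In a fuller implementation, `match_grid` would compare whole grids of pixels rather than single values, and the per-grid offsets would drive a second warp of the third image, mirroring the first pass but along the second direction.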
PCT/CN2022/090570 2022-04-29 2022-04-29 Image processing method and apparatus, electronic device and storage medium WO2023206475A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/090570 WO2023206475A1 (fr) 2022-04-29 2022-04-29 Image processing method and apparatus, electronic device and storage medium


Publications (1)

Publication Number Publication Date
WO2023206475A1 (fr) 2023-11-02

Family

Family ID: 88516817

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/090570 WO2023206475A1 (fr) 2022-04-29 2022-04-29 Image processing method and apparatus, electronic device and storage medium

Country Status (1)

Country Link
WO (1) WO2023206475A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011070579A (ja) * 2009-09-28 2011-04-07 Dainippon Printing Co Ltd Captured image display device
CN102065227A (zh) * 2009-11-17 2011-05-18 新奥特(北京)视频技术有限公司 Method and apparatus for horizontal and vertical alignment of objects in graphic image processing
CN106952630A (zh) * 2017-03-23 2017-07-14 深圳市茁壮网络股份有限公司 Pixel region processing method and apparatus, and pixel region switching method and apparatus
CN109166077A (zh) * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method and apparatus, readable storage medium, and computer device
CN111815690A (zh) * 2020-09-11 2020-10-23 湖南国科智瞳科技有限公司 Method, system and computer device for real-time stitching of microscopic images
CN113781369A (zh) * 2020-06-09 2021-12-10 安讯士有限公司 Aligning digital images
CN114298902A (zh) * 2021-12-02 2022-04-08 上海闻泰信息技术有限公司 Image alignment method and apparatus, electronic device and storage medium


Similar Documents

Publication Publication Date Title
US11403763B2 (en) Image segmentation method and apparatus, computer device, and storage medium
CN111541845B (zh) Image processing method and apparatus, and electronic device
WO2019101021A1 (fr) Image recognition method and apparatus, and electronic device
JP7058760B2 (ja) Image processing method and apparatus, terminal, and computer program
JP5450739B2 (ja) Image processing device and image display device
CN108470322B (zh) Method and apparatus for processing face image, and readable storage medium
US11176687B2 (en) Method and apparatus for detecting moving target, and electronic equipment
US11301051B2 (en) Using natural movements of a hand-held device to manipulate digital content
US11030733B2 (en) Method, electronic device and storage medium for processing image
WO2016127671A1 (fr) Method and device for generating an image filter
US11308692B2 (en) Method and device for processing image, and storage medium
CN109325908B (zh) Image processing method and apparatus, electronic device, and storage medium
CN108776822B (zh) Target region detection method and apparatus, terminal, and storage medium
WO2022121577A1 (fr) Image processing method and apparatus
CN112738420B (zh) Special-effect implementation method and apparatus, electronic device, and storage medium
CN114529606A (zh) Pose detection method and apparatus, electronic device, and storage medium
KR20210053121A (ko) Method, apparatus and medium for training an image processing model
WO2023206475A1 (fr) Image processing method and apparatus, electronic device, and storage medium
CN114390206A (zh) Photographing method and apparatus, and electronic device
CN109934168B (зh) Face image mapping method and apparatus
CN114241127A (zh) Panoramic image generation method and apparatus, electronic device, and medium
CN110012208B (зh) Photographing focusing method and apparatus, storage medium, and electronic device
CN113592874A (зh) Image display method and apparatus, and computer device
WO2023240452A1 (fr) Image processing method and apparatus, electronic device, and storage medium
WO2023225910A1 (fr) Video display method and apparatus, terminal device, and computer storage medium

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22939322

Country of ref document: EP

Kind code of ref document: A1