WO2023005359A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2023005359A1
WO2023005359A1 · PCT/CN2022/093171 · CN2022093171W
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
coordinates
trigger
processing
Prior art date
Application number
PCT/CN2022/093171
Other languages
English (en)
French (fr)
Inventor
吴金远
张元煌
郭燚
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023005359A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present disclosure relates to the field of image processing, and in particular to an image processing method and device.
  • the present disclosure provides an image processing method and device, which can process an image in a video into an image with a special effect of a magnifying glass, which increases the diversity of special effects and improves user experience.
  • the present disclosure provides an image processing method.
  • in response to a user's first trigger operation, obtain the screen coordinates of the first trigger operation; for each frame of the first image obtained after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation; and perform special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass special effect.
  • performing special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass special effect includes: obtaining, according to the frame number of the first image and a first mapping relationship, the processing parameters corresponding to the first image, where the processing parameters include at least one of the following: a chromatic aberration intensity coefficient, a distortion coefficient, a scaling coefficient, and a blur coefficient, and the first mapping relationship is used to indicate the correspondence between frame numbers and processing parameters;
  • the special effect processing corresponds to the processing parameters and includes at least one of the following: radial chromatic aberration processing, distortion processing, scaling processing, or radial blur processing.
  • the processing parameters include: chromatic aberration intensity coefficient
  • the special effect processing includes: radial chromatic aberration processing; performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: for each pixel on the first image, obtaining the sum of the color values, in each color channel, of a plurality of sampling points corresponding to the pixel, according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the intensity coefficient and weight coefficient corresponding to each color channel, the first image texture, and the chromatic aberration intensity coefficient; and determining the color value of the pixel in each color channel according to that sum.
  • obtaining, according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the intensity coefficient corresponding to each color channel, the weight coefficient, the first image texture, and the chromatic aberration intensity coefficient, the sum of the color values of a plurality of sampling points corresponding to the pixel in each color channel includes: determining the direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel; determining the sampling step size according to the direction from the trigger pixel to the pixel, the step coefficient, and the number of sampling points; for each color channel of the RGB channels, determining the offset corresponding to the color channel according to the direction from the trigger pixel to the pixel, the chromatic aberration intensity coefficient, and the intensity coefficient corresponding to the color channel; and, for each color channel of the RGB channels, determining the sum of the color values of the plurality of sampling points corresponding to the pixel in the color channel according to the first image texture, the coordinates of the pixel, the offset corresponding to the color channel, the sampling step size, the number of sampling points, and the weight coefficient.
  • determining the color value of the pixel in each color channel according to the sum of the color values, in each color channel, of the plurality of sampling points corresponding to the pixel includes: for each color channel of the RGB channels, dividing the sum of the color values of the plurality of sampling points corresponding to the pixel in the color channel by the number of sampling points to obtain the color value of the pixel in the color channel.
  • the processing parameters include: distortion coefficients, and the special effect processing includes: distortion processing;
  • the performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: obtaining a distortion function according to the distortion coefficient; for each pixel of the first image, determining the pre-distortion pixel corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the distance from the trigger pixel to the pixel, and the distortion function; and using the color value of the pre-distortion pixel as the color value of the pixel.
  • the processing parameters include: scaling coefficients
  • the special effect processing includes: scaling processing; performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: determining the scaled vertex coordinates according to the coordinates of the trigger pixel, the current vertex coordinates of the quadrilateral model, and the scaling factor, where the quadrilateral model is used to change the display size of the image; updating the vertex coordinates of the quadrilateral model to the scaled vertex coordinates; and mapping the first image onto the quadrilateral model to obtain the image with the magnifying glass special effect.
  • the processing parameters include: a blur coefficient
  • the special effect processing includes: radial blur processing; performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: for each pixel on the first image, acquiring the sum of the color values of multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the texture of the first image, and the blur coefficient; and acquiring the color value of the pixel according to the sum of the color values of the multiple sampling points corresponding to the pixel.
  • acquiring the sum of the color values of a plurality of sampling points corresponding to the pixel includes: determining the direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel; and determining the sum of the color values of the plurality of sampling points corresponding to the pixel according to the coordinates of the pixel, the number of sampling points, the blur coefficient, the first image texture, and the direction from the trigger pixel to the pixel.
  • the acquiring the color value of the pixel according to the sum of the color values of the plurality of sampling points corresponding to the pixel includes: dividing the sum of the color values of the plurality of sampling points corresponding to the pixel by the number of sampling points to obtain the color value of the pixel.
  • performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: sequentially performing, according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image, the radial chromatic aberration processing, the distortion processing, the scaling processing, and the radial blur processing on the first image.
  • the frame number is positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient, and the blur coefficient, and negatively correlated with the distortion coefficient.
  • the method further includes: in response to a second trigger operation by the user, acquiring the screen coordinates of the second trigger operation; for each frame of the second image acquired after the second trigger operation, obtaining the coordinates of the trigger pixel on the second image according to the screen coordinates of the second trigger operation; obtaining the processing parameters corresponding to the second image according to the frame number of the second image and a second mapping relationship; and performing special effect processing on the second image according to the coordinates of the trigger pixel on the second image and the processing parameters corresponding to the second image.
  • the frame number is negatively correlated with the chromatic aberration intensity coefficient, the scaling coefficient, and the blur coefficient, and positively correlated with the distortion coefficient.
  • the present disclosure provides a terminal device, including: an acquisition module, configured to acquire the screen coordinates of a first trigger operation in response to a user's first trigger operation; and a special effect processing module, configured to, for each frame of the first image obtained after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation, and perform special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass special effect.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the processor implements the method in the first aspect.
  • the present disclosure provides a terminal device, including: a processor; and a memory configured to store executable instructions of the processor; wherein the processor is configured to implement the method of the first aspect by executing the executable instructions.
  • the present disclosure provides a computer program product; the computer program product includes a computer program stored in a computer-readable storage medium; at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the method of the first aspect is realized.
  • the present disclosure provides a computer program; when the computer program is executed by a processor, the processor implements the method of the first aspect.
  • the image processing method and device provided by the present disclosure acquire the screen coordinates of the first trigger operation in response to a user's first trigger operation; for each frame of the first image obtained after the first trigger operation, the coordinates of the trigger pixel on the first image are obtained according to the screen coordinates of the first trigger operation; and special effect processing is performed on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass special effect.
  • the above method can process the image in the video into an image with a special effect of a magnifying glass, which increases the variety of special effects and improves user experience.
  • FIG. 1 is a schematic flowchart of Embodiment 1 of the image processing method provided by the present disclosure
  • FIG. 2 is a user interface diagram provided by the present disclosure
  • FIG. 3 is a first schematic flowchart of Embodiment 2 of the image processing method provided by the present disclosure
  • FIG. 4 is a second schematic flowchart of Embodiment 2 of the image processing method provided by the present disclosure
  • FIG. 5 is a schematic diagram of radial chromatic aberration processing provided by the present disclosure
  • FIG. 6 is a schematic flowchart of Embodiment 3 of the image processing method provided by the present disclosure
  • FIG. 7 is a schematic diagram of distortion processing provided by the present disclosure
  • FIG. 8 is a schematic flowchart of Embodiment 4 of the image processing method provided by the present disclosure
  • FIG. 9 is a schematic diagram of scaling processing provided by the present disclosure
  • FIG. 10 is a schematic flowchart of Embodiment 5 of the image processing method provided by the present disclosure
  • FIG. 11 is a schematic structural diagram of an image processing device provided by the present disclosure
  • FIG. 12 is a schematic diagram of a hardware structure of a terminal device provided by the present disclosure
  • at least one (item) of a, b, or c can represent: a alone, b alone, c alone, a combination of a and b, a combination of a and c, a combination of b and c, or a combination of a, b, and c, where each of a, b, and c can be single or multiple.
  • the present disclosure provides an image processing method, which can process an image in a video into an image with a magnifying glass special effect, which increases the diversity of special effects and improves user experience.
  • Observing the magnification effect of a real magnifying glass, one finds phenomena such as radial chromatic aberration, distortion, zooming, and radial blurring.
  • This disclosure simulates these phenomena and performs corresponding processing on the image in the video, so that the processed image is close to the magnification effect of a real magnifying glass, making the special effect more realistic.
  • the image processing method provided by the present disclosure can be executed by a terminal device.
  • the form of the terminal device includes but is not limited to: a smart phone, a tablet computer, a notebook computer, a wearable electronic device, or a smart home device such as a smart TV; the present disclosure does not limit the form.
  • FIG. 1 is a schematic flowchart of Embodiment 1 of the image processing method provided by the present disclosure. As shown in FIG. 1 , the image processing method provided by the present disclosure includes:
  • the first trigger operation may include a user's touch operation on the screen, and the touch operation may include a click operation, a double-click operation, a slide operation, or the like; the first trigger operation may also include an expression trigger operation or the like; the present application does not limit the specific form of the first trigger operation.
  • the screen coordinates of the first trigger operation refer to the coordinates of the first trigger operation on the screen of the terminal device.
  • the first image may be an image in a video collected in real time, may also be a locally saved image uploaded by a user or an image in a video, or may be an image sent by another device or an image in a video.
  • the screen coordinates of the first trigger operation can be matched against the coordinates of each pixel on the first image; the successfully matched pixel is used as the trigger pixel, and the coordinates of the successfully matched pixel are used as the coordinates of the trigger pixel.
  • the corresponding relationship between the frame number and the processing parameter may be established in advance.
  • the corresponding relationship is referred to as the first mapping relationship.
  • the processing parameters corresponding to the first image are determined according to the frame number of the first image and the above-mentioned first mapping relationship; those processing parameters are then used to perform special effect processing on the first image according to the coordinates of the trigger pixel.
  • because the processing parameters corresponding to each frame of the first image are different, the degree of enlargement of the processed images varies.
  • the above-mentioned processing parameters may include at least one of the following: color difference intensity coefficient, distortion coefficient, scaling coefficient and blur coefficient
  • the special effect processing includes at least one of the following: radial chromatic aberration processing, distortion processing, scaling processing, or radial blur processing.
  • the processing process included in the special effect processing corresponds to the processing parameters.
  • the processing parameters include chromatic aberration intensity coefficient and distortion coefficient
  • the special effect processing includes radial chromatic aberration processing and distortion processing.
  • the frame number may be positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient, and the blur coefficient, and negatively correlated with the distortion coefficient, so that the zoom level of the image increases frame by frame.
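As an illustration of how such a first mapping relationship might look in code, the following Python sketch maps a frame number to a parameter set with the stated correlations. All names, ramp shapes, and numeric values here are assumptions for illustration, not values from the disclosure:

```python
def first_mapping(frame_number, total_frames=30):
    """Illustrative first mapping relationship: frame number -> processing parameters."""
    t = min(frame_number / total_frames, 1.0)  # normalized progress in [0, 1]
    return {
        # positively correlated with the frame number (assumed linear ramps)
        "chromatic_aberration": 0.05 * t,
        "scale": 1.0 * t,
        "blur": 0.02 * t,
        # negatively correlated with the frame number
        "distortion": 1.0 - 0.5 * t,
    }
```

With such a table, each successive frame after the trigger is processed with stronger chromatic aberration, scaling, and blur and weaker distortion, so the magnification appears to grow frame by frame.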
  • the image processing method provided by the present disclosure acquires the screen coordinates of the first trigger operation in response to a user's first trigger operation; for each frame of the first image obtained after the first trigger operation, the coordinates of the trigger pixel on the first image are acquired according to the screen coordinates of the first trigger operation; and special effect processing is performed on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass special effect.
  • the above method can process the image in the video into an image with a special effect of a magnifying glass, which increases the variety of special effects and improves user experience.
  • FIG. 3 is a schematic flowchart of Embodiment 2 of the image processing method provided by the present disclosure.
  • the special effect processing in the present disclosure may include radial chromatic aberration processing.
  • This embodiment describes the radial chromatic aberration processing process.
  • the image processing method provided by this embodiment includes:
  • the first mapping relationship may be established in advance, and the first mapping relationship is used to indicate the corresponding relationship between the frame number and the color difference intensity coefficient.
  • the chromatic aberration intensity coefficient corresponding to the first image is then used to perform special effect processing on the first image.
  • the essence of performing radial chromatic aberration processing on the first image is to recalculate the color value of each pixel on the first image; after each pixel is assigned a new color value, an image with a radial chromatic aberration effect is obtained.
  • the radial chromatic aberration processing process may include S304-S305.
  • the sum of the color values, in each color channel, of the multiple sampling points corresponding to each pixel may be obtained through the following steps. Taking any pixel on the first image as an example (for convenience of description, this disclosure refers to it as the current pixel), as shown in Figure 4, the steps specifically include:
  • S304-A Determine the direction from the trigger pixel to the current pixel according to the coordinates of the trigger pixel and the coordinates of the current pixel.
  • S304-B Determine the sampling step size according to the direction from the trigger pixel point to the current pixel point, the step size coefficient, and the number of sampling points.
  • sampling step size can be determined by the following formula:
  • step is the sampling step
  • dir is the direction from the trigger pixel to the current pixel
  • radiusStrength is the step coefficient
  • u_Sample is the number of sampling points
  • the step coefficient and the number of sampling points can be preset values.
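The formula itself is not reproduced in the text above; from the variables listed (step, dir, radiusStrength, u_Sample), one plausible reading is step = dir * radiusStrength / u_Sample. A sketch of that reading in Python, where the multiplicative form is an assumption:

```python
def sampling_step(trigger, pixel, radius_strength, u_sample):
    """Assumed sampling step: step = dir * radiusStrength / u_Sample."""
    # dir: 2-D direction vector from the trigger pixel to the current pixel
    dir_x, dir_y = pixel[0] - trigger[0], pixel[1] - trigger[1]
    return (dir_x * radius_strength / u_sample,
            dir_y * radius_strength / u_sample)
```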
  • the offset corresponding to the red channel can be determined by the following formula:
  • redOffset is the offset corresponding to the red channel
  • dir is the direction from the trigger pixel to the current pixel
  • u_Strength is the color difference strength coefficient
  • u_Strength is obtained through S302
  • u_RedStrength is the strength coefficient of the red channel
  • the strength coefficient of the red channel can be a preset value.
  • the offset corresponding to the green channel can be determined by the following formula:
  • greenOffset is the offset corresponding to the green channel
  • dir is the direction from the trigger pixel to the current pixel
  • u_Strength is the color difference strength coefficient
  • u_Strength is obtained through S302
  • u_GreenStrength is the strength coefficient of the green channel
  • the strength coefficient of the green channel can be a preset value.
  • the offset corresponding to the blue channel can be determined by the following formula:
  • blueOffset is the offset corresponding to the blue channel
  • dir is the direction from the trigger pixel to the current pixel
  • u_Strength is the color difference strength coefficient
  • u_Strength is obtained through S302
  • u_BlueStrength is the strength coefficient of the blue channel
  • the strength coefficient of the blue channel can be a preset value.
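The three per-channel formulas above share one form; a plausible reading, sketched in Python (the multiplicative form offset = dir * u_Strength * per-channel strength is an assumption, since the formulas themselves are not reproduced in the text):

```python
def channel_offsets(direction, u_strength, red_k, green_k, blue_k):
    """Assumed per-channel offsets: offset = dir * u_Strength * channel strength."""
    def scaled(k):
        # scale the trigger->pixel direction by the global and per-channel strengths
        return (direction[0] * u_strength * k, direction[1] * u_strength * k)
    return {"red": scaled(red_k), "green": scaled(green_k), "blue": scaled(blue_k)}
```

Distinct strengths per channel shift the R, G, and B sampling positions by different amounts along the same radial direction, which is what produces the color fringing of chromatic aberration.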
  • a loop statement can be used to determine the sum of the color values, in a given color channel, of the multiple sampling points corresponding to the current pixel. Taking the red channel as an example, the formula in the loop statement is:
  • R is the sum of the color values of multiple sampling points in the red channel
  • InputTexture is the first image texture
  • uv is the coordinates of the current pixel
  • redOffset is the offset corresponding to the red channel
  • weight is the weight coefficient
  • the color value of the current pixel in the red channel is then R / u_Sample, where:
  • R is the sum of the color values of multiple sampling points corresponding to the current pixel in the red channel
  • u_Sample is the number of sampling points.
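Putting the steps from S304-A through the averaging step together, the per-pixel computation can be sketched as plain Python. The dict-based texture lookup and the exact step and offset formulas are stand-ins for the shader operations and are assumptions, not the disclosure's code:

```python
def sample_texture(tex, uv):
    """Nearest lookup in a dict {(x, y): (r, g, b)}; stands in for texture(InputTexture, uv)."""
    # out-of-range samples read as black in this toy stand-in
    return tex.get((round(uv[0], 3), round(uv[1], 3)), (0.0, 0.0, 0.0))

def radial_chromatic_aberration(tex, trigger, uv, u_sample,
                                radius_strength, u_strength,
                                channel_strengths, weight):
    # dir: from the trigger pixel to the current pixel
    dx, dy = uv[0] - trigger[0], uv[1] - trigger[1]
    # assumed: step = dir * radiusStrength / u_Sample
    step = (dx * radius_strength / u_sample, dy * radius_strength / u_sample)
    color = []
    for ch, k in enumerate(channel_strengths):  # R, G, B intensity coefficients
        off = (dx * u_strength * k, dy * u_strength * k)  # per-channel offset
        total = 0.0  # sum of sampled color values in this channel
        for i in range(u_sample):
            pos = (uv[0] + off[0] + step[0] * i,
                   uv[1] + off[1] + step[1] * i)
            total += sample_texture(tex, pos)[ch] * weight
        color.append(total / u_sample)  # sum divided by the number of sampling points
    return tuple(color)
```

At the trigger point itself the direction vector is zero, so every sample reads the same texel and the pixel keeps its original color; the fringing grows with distance from the trigger pixel.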
  • pixel O is the trigger pixel
  • pixel M is the current pixel
  • the multiple sampling points corresponding to the current pixel are M1, M2, and M3.
  • the sum of the color values of M1, M2, and M3 in the R channel is R1+R2+R3, the sum in the G channel is G1+G2+G3, and the sum in the B channel is B1+B2+B3; the RGB values of M can then be determined as: (R1+R2+R3)/3, (G1+G2+G3)/3, (B1+B2+B3)/3.
  • All pixels on the first image are processed as above in S304-S305 to obtain the RGB values of all pixels; after the calculated values are assigned to the corresponding pixels, the image processed by radial chromatic aberration is obtained.
  • the image processing method provided in this embodiment provides a radial chromatic aberration processing method; an image processed by this method exhibits the radial chromatic aberration of a real magnifying glass, making the special effect highly realistic.
  • FIG. 6 is a schematic flowchart of Embodiment 3 of the image processing method provided by the present disclosure.
  • the special effect processing in the present disclosure may include distortion processing.
  • This embodiment describes the distortion processing process.
  • the image processing method provided in this embodiment includes:
  • the first mapping relationship can be established in advance, and the first mapping relationship is used to indicate the corresponding relationship between the frame number and the distortion coefficient.
  • the distortion coefficient corresponding to the first image is determined according to the frame number of the first image and the first mapping relationship; then, according to the coordinates of the trigger pixel, the distortion coefficient corresponding to the first image is used to perform special effect processing on the first image.
  • the process of distortion processing is introduced below. As described above, the essence of radial chromatic aberration processing is to recalculate the color value of each pixel on the first image; similarly, the essence of distortion processing is to recalculate the color value of each pixel on the first image. After each pixel is assigned a new color value, an image with a distortion effect is obtained. The process specifically includes S604-S607.
  • this disclosure refers to the pixel point as the current pixel point, and the pixel point before distortion corresponding to the current pixel point can be determined by the following formula:
  • uv is the coordinate of the pixel before distortion corresponding to the current pixel
  • textureCoordinate is the coordinate of the current pixel
  • center is the coordinate of the trigger pixel
  • dis is the distance from the trigger pixel to the current pixel
  • f is the distortion function.
  • the pre-distortion pixel is found from the first image, and the color value of the pre-distortion pixel is used as the color value of the current pixel.
  • In Figure 7 (the image after radial chromatic aberration processing), assume that pixel O is the trigger pixel and pixel M is the current pixel. After S605, the pre-distortion pixel corresponding to pixel M is M1; if the RGB values of M1 are R1, G1, and B1, then the RGB values of pixel M are determined to be R1, G1, and B1.
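A sketch of the pre-distortion lookup under one plausible reading of the formula, uv = center + (textureCoordinate - center) * f(dis); this arrangement of terms and the treatment of f are assumptions, since the formula is not reproduced in the text:

```python
import math

def pre_distortion_coord(texture_coordinate, center, f):
    """Assumed: uv = center + (textureCoordinate - center) * f(dis)."""
    dx = texture_coordinate[0] - center[0]
    dy = texture_coordinate[1] - center[1]
    dis = math.hypot(dx, dy)  # distance from the trigger pixel to the current pixel
    k = f(dis)                # distortion function derived from the distortion coefficient
    return (center[0] + dx * k, center[1] + dy * k)
```

The current pixel then simply copies the color value found at the returned pre-distortion coordinate, as S606-S607 describe.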
  • the image processing method provided in this embodiment provides a distortion processing method; an image obtained by this method exhibits the distortion of a real magnifying glass, making the special effect highly realistic.
  • FIG. 8 is a schematic flowchart of Embodiment 4 of the image processing method provided by the present disclosure.
  • the special effect processing in the present disclosure may include scaling processing.
  • This embodiment describes the scaling processing process.
  • the image processing method provided in this embodiment includes:
  • the first mapping relationship can be established in advance, and the first mapping relationship is used to indicate the correspondence between the frame number and the scaling factor. The scaling factor corresponding to the first image is first determined according to the frame number of the first image and the above-mentioned first mapping relationship; then, according to the coordinates of the trigger pixel, the scaling factor corresponding to the first image is used to perform special effect processing on the first image.
  • the scaled vertex coordinates can be calculated by the following formula:
  • pos is the current vertex coordinate of the quadrilateral model 10
  • center is the coordinate of the trigger pixel
  • scale is the scaling factor
  • pos1 is the scaled vertex coordinates.
  • assuming that the current vertices of the quadrilateral model 10 are A, B, C, and D, and the scaling factor obtained in S803 is 1, the quadrilateral model 10 is doubled in size about point O; the scaled vertices are denoted A', B', C', and D'.
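Consistent with the example above (a scaling factor of 1 doubling the model about point O), one plausible vertex formula is pos1 = center + (pos - center) * (1 + scale). A sketch under that assumption:

```python
def scaled_vertices(vertices, center, scale):
    """Assumed: pos1 = center + (pos - center) * (1 + scale); scale = 1 doubles the model."""
    cx, cy = center
    # move each vertex away from the trigger point by the scale factor
    return [(cx + (x - cx) * (1.0 + scale),
             cy + (y - cy) * (1.0 + scale)) for x, y in vertices]
```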
  • the image processing method provided in this embodiment provides a scaling processing method; an image obtained by this method exhibits the zoom effect of a real magnifying glass, making the special effect highly realistic.
  • FIG. 10 is a schematic flowchart of Embodiment 5 of the image processing method provided by the present disclosure.
  • the special effect processing in the present disclosure may include radial blur processing.
  • This embodiment describes the radial blur processing process.
  • the image processing method provided by this embodiment includes:
  • the first mapping relationship can be established in advance, and the first mapping relationship is used to indicate the correspondence between the frame number and the blur coefficient.
  • the essence of radial blur processing is to recalculate the color value of each pixel on the first image. After each pixel is assigned a new color value, an image with radial blur effect is obtained. Specifically including S1004-S1005.
  • this disclosure refers to this pixel as the current pixel, and the sum of the color values of multiple sampling points corresponding to the current pixel can be obtained by the following method:
  • the direction from the trigger pixel to the current pixel is determined.
  • according to the coordinates of the current pixel, the number of sampling points, the blur coefficient, the texture of the first image, and the direction from the trigger pixel to the current pixel, the sum of the color values of the multiple sampling points corresponding to the current pixel is determined.
  • a loop statement can be used to determine the sum of the color values of multiple sampling points corresponding to the current pixel point, and the formula in the loop statement is:
  • i is a loop variable
  • the number of loops is equal to the number of sampling points
  • uv is the coordinate of the current pixel
  • blurfactor is the blur factor
  • dir is the direction from the trigger pixel to the current pixel
  • InputTexture is the first image texture
  • outColor is the sum of the color values of the multiple sampling points.
  • vec2 is used to represent the coordinates of the current pixel point
  • uv is a two-dimensional vector.
  • outColor is the sum of the color values of multiple sampling points corresponding to the current pixel point
  • u_Sample is the number of sampling points.
  • all the pixels on the first image are processed as in S1004-S1005 to obtain the color values of all the pixels; assigning each calculated color value to the corresponding pixel yields the radially blurred image.
  • the image processing method provided in this embodiment provides a radial blur method; an image processed with it shows the radial blur effect of a real magnifying glass, giving the special effect a strong sense of realism.
  • the first image can be sequentially subjected to radial chromatic aberration processing, distortion processing, scaling processing, and radial blur processing.
  • the result of the radial chromatic aberration processing is the input to the distortion processing, that is, the first image in the distortion process described with FIG. 6 is the image after radial chromatic aberration processing; the result of the distortion processing is the input to the scaling processing, that is, the first image in the scaling process described with FIG. 8 is the distorted image; and the result of the scaling processing is the input to the radial blur processing, that is, the first image in the radial blur process described with FIG. 10 is the scaled image.
  • the image obtained by this processing sequence is closer to the effect of a real magnifying glass.
  • when the frame number is positively correlated with the color difference intensity coefficient, the scaling coefficient and the blur coefficient, and negatively correlated with the distortion coefficient, the magnification of the image obtained through the above image processing increases frame by frame.
  • the user may also perform a second trigger operation; in response to the second trigger operation, the screen coordinates of the second trigger operation are acquired. For each frame of second image obtained after the second trigger operation, the coordinates of the trigger pixel on the second image are obtained according to the screen coordinates of the second trigger operation; the processing parameters corresponding to the second image are acquired according to the frame number of the second image and a second mapping relationship; and special effect processing is performed on the second image according to the coordinates of the trigger pixel on the second image and the processing parameters corresponding to the second image. In the second mapping relationship, the frame number may be negatively correlated with the color difference intensity coefficient, the scaling coefficient and the blur coefficient, and positively correlated with the distortion coefficient, so that after the terminal device receives the second trigger operation, the magnification of the image obtained through the above image processing decreases frame by frame. The change of image magnification after the first trigger
  • operation and the change of image magnification after the second trigger operation are thus two opposite processes: after the user triggers the first trigger operation, the magnification increases frame by frame, and after the second trigger operation it decreases frame by frame.
  • the relationship between the frame number and the color difference intensity coefficient, the scaling factor and the blur coefficient can also be set as a negative correlation, and the relationship between the frame number and the distortion coefficient can be set as a positive correlation.
  • the relationship between the frame number and the color difference intensity coefficient, scaling factor and blur coefficient is set as a positive correlation, and the relationship between the frame number and the distortion coefficient is set as a negative correlation.
  • FIG. 11 is a schematic structural diagram of an image processing device provided by the present disclosure. As shown in Figure 11, the image processing device provided by the present disclosure includes:
  • An acquisition module 1101, configured to acquire the screen coordinates of the first trigger operation in response to the user's first trigger operation
  • the special effect processing module 1102 is configured to obtain, for each frame of the first image obtained after the first trigger operation, the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation; The coordinates of the trigger pixel point are used to perform special effect processing on the first image to obtain an image with a magnifying glass special effect.
  • the special effect processing module 1102 is specifically used for:
  • Acquire the processing parameters corresponding to the first image according to the frame number of the first image and the first mapping relationship, where the processing parameters include at least one of the following: a color difference intensity coefficient, a distortion coefficient, a scaling coefficient, and a blur coefficient.
  • the first mapping relationship is used to indicate the corresponding relationship between the frame number and the processing parameter;
  • the special effect processing corresponds to the processing parameters, and includes at least one of the following: radial chromatic aberration processing, distortion processing, scaling processing, or radial blur processing.
  • the processing parameters include: a color difference intensity coefficient
  • the special effect processing includes: radial color difference processing
  • the special effect processing module 1102 is specifically used for:
  • for each pixel on the first image, the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel is obtained; the color value of the pixel in each color channel is then determined according to the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel.
  • the special effect processing module 1102 is specifically used for:
  • For each color channel of the RGB channels, determine the offset corresponding to the color channel according to the direction from the trigger pixel to the pixel, the color difference intensity coefficient, and the intensity coefficient corresponding to the color channel;
  • determine the sum of the color values, in the color channel, of the multiple sampling points corresponding to the pixel according to the first image texture, the coordinates of the pixel, the offset corresponding to the color channel, the sampling step, the number of sampling points and the weight coefficient.
  • the special effect processing module 1102 is specifically used for:
  • the processing parameters include: distortion coefficients, and the special effect processing includes: distortion processing; the special effect processing module 1102 is specifically used for:
  • processing parameters include: scaling coefficients
  • special effect processing includes: scaling processing
  • special effect processing module 1102 is specifically used for:
  • the processing parameters include: blur coefficient, and the special effect processing includes: radial blur processing; the special effect processing module 1102 is specifically used for:
  • for each pixel on the first image, according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the texture of the first image, and the blur coefficient, obtain the sum of the color values of the plurality of sampling points corresponding to the pixel; the color value of the pixel is obtained according to the sum of the color values of the plurality of sampling points corresponding to the pixel.
  • the special effect processing module 1102 is specifically used for:
  • according to the coordinates of the pixel, the number of sampling points, the blur coefficient, the first image texture, and the direction from the trigger pixel to the pixel, determine the sum of the color values of the plurality of sampling points corresponding to the pixel.
  • the special effect processing module 1102 is specifically used for:
  • the color value of the pixel is obtained by dividing the sum of the color values of the plurality of sampling points corresponding to the pixel by the number of the sampling points.
  • the special effect processing module 1102 is specifically used for:
  • the frame number is positively correlated with the color difference intensity coefficient, scaling coefficient, and blur coefficient, and the frame number is negatively correlated with the distortion coefficient.
  • the acquisition module 1101 is also used for:
  • the special effect processing module 1102 is also used for:
  • the frame number is negatively correlated with the color difference intensity coefficient, scaling coefficient and blur coefficient, and positively correlated with the distortion coefficient.
  • the image processing device shown in FIG. 11 may be used to execute the steps in any of the above method embodiments.
  • the implementation principles and technical effects are similar, and will not be repeated here.
  • Fig. 12 is a schematic diagram of a hardware structure of a terminal device provided by the present disclosure. As shown in Figure 12, the terminal device in this embodiment may include:
  • memory 1202 configured to store executable instructions of the processor
  • the processor 1201 is configured to implement the steps in any one of the above method embodiments by executing the executable instructions.
  • the implementation principles and technical effects are similar, and will not be repeated here.
  • the present disclosure provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the processor is made to implement the steps in any one of the above method embodiments.
  • the implementation principles and technical effects are similar, and will not be repeated here.
  • the present disclosure also provides a computer program product, the computer program product includes a computer program, the computer program is stored in a computer-readable storage medium, and at least one processor can read the computer program from the computer-readable storage medium. program, when the at least one processor executes the computer program, the steps in any one of the above method embodiments are implemented.
  • the implementation principles and technical effects are similar, and will not be repeated here.
  • the present disclosure also provides a computer program, which, when executed by a processor, enables the processor to implement the steps in any one of the above method embodiments.
  • the implementation principles and technical effects are similar, and will not be repeated here.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division into units is only a division by logical function; in actual implementation there may be other ways of division, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be realized through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the above-mentioned integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium.
  • the above-mentioned software functional units are stored in a storage medium and include several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
  • the processor described in this disclosure may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in conjunction with the present disclosure may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.


Abstract

The present disclosure provides an image processing method and apparatus. The method includes: in response to a user's first trigger operation, acquiring the screen coordinates of the first trigger operation; for each frame of first image acquired after the first trigger operation, obtaining the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation; and performing special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect. The above method can process images in a video into images with a magnifying glass effect, which increases the diversity of special effects and improves the user experience.

Description

Image Processing Method and Apparatus
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 2021108755743, filed on July 30, 2021 and entitled "Image Processing Method and Apparatus", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of image processing, and in particular to an image processing method and apparatus.
BACKGROUND
With the development of software technology, more and more kinds of applications (apps) are available on mobile terminals, among which video apps are very popular. Users can not only browse videos through video apps but also produce and publish their own videos, and can add special effects to their videos, which enhances the sense of participation. However, the special effects currently available are not diverse enough, and users' needs are not satisfied.
SUMMARY
The present disclosure provides an image processing method and apparatus that can process images in a video into images with a magnifying glass effect, increasing the diversity of special effects and improving the user experience.
In a first aspect, the present disclosure provides an image processing method: in response to a first trigger operation of a user, acquiring the screen coordinates of the first trigger operation; for each frame of first image acquired after the first trigger operation, obtaining the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation; and performing special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect.
Optionally, performing special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect includes: acquiring the processing parameters corresponding to the first image according to the frame number of the first image and a first mapping relationship, the processing parameters including at least one of: a chromatic aberration intensity coefficient, a distortion coefficient, a scaling coefficient and a blur coefficient, the first mapping relationship being used to indicate the correspondence between frame numbers and processing parameters; and performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image to obtain the image with the magnifying glass effect; wherein the special effect processing corresponds to the processing parameters and includes at least one of: radial chromatic aberration processing, distortion processing, scaling processing or radial blur processing.
Optionally, the processing parameters include: a chromatic aberration intensity coefficient, and the special effect processing includes: radial chromatic aberration processing; performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: for each pixel on the first image, obtaining the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the intensity coefficient corresponding to each color channel, the weight coefficient, the first image texture and the chromatic aberration intensity coefficient; and determining the color value of the pixel in each color channel according to the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel.
Optionally, obtaining the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the intensity coefficient corresponding to each color channel, the weight coefficient, the first image texture and the chromatic aberration intensity coefficient includes: determining the direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel; determining the sampling step according to the direction from the trigger pixel to the pixel, the step coefficient and the number of sampling points; for each color channel of the RGB channels, determining the offset corresponding to the color channel according to the direction from the trigger pixel to the pixel, the chromatic aberration intensity coefficient and the intensity coefficient corresponding to the color channel; and, for each color channel of the RGB channels, determining the sum of the color values, in the color channel, of the multiple sampling points corresponding to the pixel according to the first image texture, the coordinates of the pixel, the offset corresponding to the color channel, the sampling step, the number of sampling points and the weight coefficient.
Optionally, determining the color value of the pixel in each color channel according to the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel includes: for each color channel of the RGB channels, dividing the sum of the color values, in the color channel, of the multiple sampling points corresponding to the pixel by the number of sampling points to obtain the color value of the pixel in the color channel.
Optionally, the processing parameters include: a distortion coefficient, and the special effect processing includes: distortion processing;
performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: obtaining a distortion function according to the distortion coefficient; for each pixel on the first image, determining the pre-distortion pixel corresponding to the pixel on the first image according to the coordinates of the trigger pixel, the coordinates of the pixel, the distance from the trigger pixel to the pixel and the distortion function; and taking the color value of the pre-distortion pixel as the color value of the pixel.
Optionally, the processing parameters include: a scaling coefficient, and the special effect processing includes: scaling processing; performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: determining the scaled vertex coordinates according to the coordinates of the trigger pixel, the current vertex coordinates of a quadrilateral model and the scaling coefficient, the quadrilateral model being used to change the display size of the image; updating the vertex coordinates of the quadrilateral model to the scaled vertex coordinates; and mapping the first image onto the quadrilateral model to obtain the image with the magnifying glass effect.
Optionally, the processing parameters include: a blur coefficient, and the special effect processing includes: radial blur processing; performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: for each pixel on the first image, obtaining the sum of the color values of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the first image texture and the blur coefficient; and obtaining the color value of the pixel according to the sum of the color values of the multiple sampling points corresponding to the pixel.
Optionally, obtaining the sum of the color values of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the first image texture and the blur coefficient includes: determining the direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel; and determining the sum of the color values of the multiple sampling points corresponding to the pixel according to the coordinates of the pixel, the number of sampling points, the blur coefficient, the first image texture and the direction from the trigger pixel to the pixel.
Optionally, obtaining the color value of the pixel according to the sum of the color values of the multiple sampling points corresponding to the pixel includes: dividing the sum of the color values of the multiple sampling points corresponding to the pixel by the number of sampling points to obtain the color value of the pixel.
Optionally, performing special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image includes: performing, in sequence, the radial chromatic aberration processing, the distortion processing, the scaling processing and the radial blur processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image.
Optionally, in the first mapping relationship, the frame number is positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient, and negatively correlated with the distortion coefficient.
Optionally, the method further includes: in response to a second trigger operation of the user, acquiring the screen coordinates of the second trigger operation; for each frame of second image acquired after the second trigger operation, obtaining the coordinates of the trigger pixel on the second image according to the screen coordinates of the second trigger operation; acquiring the processing parameters corresponding to the second image according to the frame number of the second image and a second mapping relationship; and performing special effect processing on the second image according to the coordinates of the trigger pixel on the second image and the processing parameters corresponding to the second image, wherein in the second mapping relationship the frame number is negatively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient, and positively correlated with the distortion coefficient.
In a second aspect, the present disclosure provides a terminal device, including: an acquisition module configured to acquire, in response to a first trigger operation of a user, the screen coordinates of the first trigger operation; and a special effect processing module configured to obtain, for each frame of first image acquired after the first trigger operation, the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation, and to perform special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect.
In a third aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the processor implements the method of the first aspect.
In a fourth aspect, the present disclosure provides a terminal device, including: a processor; and a memory for storing executable instructions of the processor, wherein the processor is configured to implement the method of the first aspect by executing the executable instructions.
In a fifth aspect, the present disclosure provides a computer program product including a computer program stored in a computer-readable storage medium; at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the method of the first aspect is implemented.
In a sixth aspect, the present disclosure provides a computer program which, when executed by a processor, causes the processor to implement the method of the first aspect.
The image processing method and apparatus provided by the present disclosure acquire, in response to a first trigger operation of a user, the screen coordinates of the first trigger operation; for each frame of first image acquired after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation; and perform special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect. The above method can process images in a video into images with a magnifying glass effect, increasing the diversity of special effects and improving the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of Embodiment 1 of the image processing method provided by the present disclosure;
FIG. 2 is a user interface diagram provided by the present disclosure;
FIG. 3 is a first schematic flowchart of Embodiment 2 of the image processing method provided by the present disclosure;
FIG. 4 is a second schematic flowchart of Embodiment 2 of the image processing method provided by the present disclosure;
FIG. 5 is a schematic diagram of the principle of radial chromatic aberration processing provided by the present disclosure;
FIG. 6 is a schematic flowchart of Embodiment 3 of the image processing method provided by the present disclosure;
FIG. 7 is a schematic diagram of the principle of distortion processing provided by the present disclosure;
FIG. 8 is a schematic flowchart of Embodiment 4 of the image processing method provided by the present disclosure;
FIG. 9 is a schematic diagram of the principle of scaling processing provided by the present disclosure;
FIG. 10 is a schematic flowchart of Embodiment 5 of the image processing method provided by the present disclosure;
FIG. 11 is a schematic structural diagram of the image processing apparatus provided by the present disclosure;
FIG. 12 is a schematic diagram of the hardware structure of the terminal device provided by the present disclosure.
DETAILED DESCRIPTION
To make the purposes, technical solutions and advantages of the present disclosure clearer, the technical solutions in the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
In the present disclosure, it should be explained that the terms "first" and "second" are used for description purposes only and shall not be understood as indicating or implying relative importance. In addition, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or similar expressions refer to any combination of these items, including a single item or any combination of plural items. For example, "at least one of a, b, or c" may mean: a alone, b alone, c alone, a combination of a and b, a combination of a and c, a combination of b and c, or a combination of a, b and c, where each of a, b and c may be singular or plural.
The present disclosure provides an image processing method that can process images in a video into images with a magnifying glass effect, enriching the special effects applied to images and improving the user experience. Observing the magnification of a real magnifying glass, phenomena such as radial chromatic aberration, distortion, scaling and radial blur can be seen. The present disclosure simulates these phenomena and processes the images in a video accordingly, so that the processed images come close to the magnification effect of a real magnifying glass and the special effect has a strong sense of realism.
The image processing method provided by the present disclosure may be executed by a terminal device. The form of the terminal device includes but is not limited to: a smartphone, a tablet computer, a laptop, a wearable electronic device, or a smart home device such as a smart TV; the present disclosure does not limit the form of the terminal device.
The technical solution of the present disclosure, and how it solves the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure are described below with reference to the accompanying drawings.
Embodiment 1
FIG. 1 is a schematic flowchart of Embodiment 1 of the image processing method provided by the present disclosure. As shown in FIG. 1, the image processing method provided by the present disclosure includes:
S101: In response to a user's first trigger operation, acquire the screen coordinates of the first trigger operation.
Exemplarily, referring to FIG. 2, the first trigger operation may include a touch operation by the user on the screen, such as a tap, a double tap or a swipe; the first trigger operation may also include a facial-expression trigger operation, etc. This application does not limit the specific form of the first trigger operation. The screen coordinates of the first trigger operation refer to the coordinates of the first trigger operation on the screen of the terminal device.
S102: For each frame of first image acquired after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation.
Exemplarily, the first image may be an image in a video captured in real time, an image in a locally stored image or video uploaded by the user, or an image in an image or video sent by another device.
Exemplarily, after the screen coordinates of the first trigger operation are obtained, they may be matched against the coordinates of each pixel on the first image; the successfully matched pixel is taken as the trigger pixel, and the coordinates of the successfully matched pixel are taken as the coordinates of the trigger pixel.
S103: Perform special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect.
In one possible implementation, the same processing parameters are used for every frame of first image; in this case, the magnification of the processed images is the same.
In another possible implementation, a correspondence between frame numbers and processing parameters, referred to in the present disclosure as the first mapping relationship, may be established in advance. When processing a first image, the processing parameters corresponding to the first image are first determined according to its frame number and the first mapping relationship, and special effect processing is then performed on the first image using these parameters according to the coordinates of the trigger pixel. In this case, since the processing parameters differ between frames, the magnification of the processed images changes.
Optionally, the above processing parameters may include at least one of: a chromatic aberration intensity coefficient, a distortion coefficient, a scaling coefficient and a blur coefficient; the special effect processing includes at least one of: radial chromatic aberration processing, distortion processing, scaling processing or radial blur processing. The processing steps included in the special effect processing correspond to the processing parameters; for example, if the processing parameters include a chromatic aberration intensity coefficient and a distortion coefficient, the special effect processing includes radial chromatic aberration processing and distortion processing.
Exemplarily, to make the magnification increase frame by frame, in the first mapping relationship the frame number may be positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient, and negatively correlated with the distortion coefficient, so that the magnification of the image increases frame by frame.
The image processing method provided by the present disclosure acquires, in response to a user's first trigger operation, the screen coordinates of the first trigger operation; for each frame of first image acquired after the first trigger operation, obtains the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation; and performs special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect. The above method can process images in a video into images with a magnifying glass effect, increasing the diversity of special effects and improving the user experience.
Embodiment 2
FIG. 3 is a schematic flowchart of Embodiment 2 of the image processing method provided by the present disclosure. As described above, the special effect processing in the present disclosure may include radial chromatic aberration processing; this embodiment describes the radial chromatic aberration process. As shown in FIG. 3, the image processing method provided by this embodiment includes:
S301: In response to a user's first trigger operation, acquire the screen coordinates of the first trigger operation.
S302: For each frame of first image acquired after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation.
For the implementation of S301 and S302, refer to the description above; it is not repeated here.
S303: Acquire the chromatic aberration intensity coefficient corresponding to the first image.
As described above, a first mapping relationship may be established in advance, the first mapping relationship being used to indicate the correspondence between frame numbers and chromatic aberration intensity coefficients. When processing the first image, the chromatic aberration intensity coefficient corresponding to the first image is first determined according to its frame number and the first mapping relationship, and special effect processing is then performed on the first image using this coefficient according to the coordinates of the trigger pixel.
It should be noted that the essence of radial chromatic aberration processing is to recalculate the color value of each pixel on the first image; after each pixel is assigned a new color value, an image with a radial chromatic aberration effect is obtained. Specifically, the radial chromatic aberration process may include S304-S305.
S304: For each pixel on the first image, obtain the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel, according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the intensity coefficient corresponding to each color channel, the weight coefficient, the first image texture and the chromatic aberration intensity coefficient.
In one possible implementation, the sum of the color values, in each color channel, of the multiple sampling points corresponding to each pixel can be obtained through the following steps. Taking any pixel on the first image as an example and, for convenience, calling it the current pixel, and referring to FIG. 4, the steps specifically include:
S304-A: Determine the direction from the trigger pixel to the current pixel according to the coordinates of the trigger pixel and the coordinates of the current pixel.
S304-B: Determine the sampling step according to the direction from the trigger pixel to the current pixel, the step coefficient and the number of sampling points.
Specifically, the sampling step can be determined by the following formula:
step=dir*radiusStrength*u_Sample
where step is the sampling step, dir is the direction from the trigger pixel to the current pixel, radiusStrength is the step coefficient and u_Sample is the number of sampling points; the step coefficient and the number of sampling points may be preset values.
S304-C: For each color channel of the RGB channels, determine the offset corresponding to the color channel according to the direction from the trigger pixel to the current pixel, the chromatic aberration intensity coefficient and the intensity coefficient corresponding to the color channel.
Specifically, the offset corresponding to the red channel can be determined by the following formula:
redOffset=dir*u_Strength*u_RedStrength
where redOffset is the offset corresponding to the red channel, dir is the direction from the trigger pixel to the current pixel, u_Strength is the chromatic aberration intensity coefficient obtained in S303, and u_RedStrength is the intensity coefficient of the red channel, which may be a preset value.
The offset corresponding to the green channel can be determined by the following formula:
greenOffset=dir*u_Strength*u_GreenStrength
where greenOffset is the offset corresponding to the green channel, dir is the direction from the trigger pixel to the current pixel, u_Strength is the chromatic aberration intensity coefficient obtained in S303, and u_GreenStrength is the intensity coefficient of the green channel, which may be a preset value.
The offset corresponding to the blue channel can be determined by the following formula:
blueOffset=dir*u_Strength*u_BlueStrength
where blueOffset is the offset corresponding to the blue channel, dir is the direction from the trigger pixel to the current pixel, u_Strength is the chromatic aberration intensity coefficient obtained in S303, and u_BlueStrength is the intensity coefficient of the blue channel, which may be a preset value.
S304-D: For each color channel of the RGB channels, determine the sum of the color values, in the color channel, of the multiple sampling points corresponding to the current pixel, according to the first image texture, the coordinates of the current pixel, the offset corresponding to the color channel, the sampling step, the number of sampling points and the weight coefficient.
Specifically, for any color channel, a loop statement can be used to determine the sum of the color values, in the color channel, of the multiple sampling points corresponding to the current pixel. Taking the red channel as an example, the formula in the loop is:
R += texture2D(InputTexture, uv+redOffset).r * weight
Each time the loop completes one iteration, the offset of the red channel is reduced by one sampling step; the number of iterations equals the number of sampling points. R is the sum of the color values of the multiple sampling points in the red channel, InputTexture is the first image texture, uv is the coordinates of the current pixel, redOffset is the offset corresponding to the red channel, weight is the weight coefficient, which may be a preset value, and .r denotes the red channel.
S305: Determine the color value of the current pixel in each color channel according to the sum of the color values, in each color channel, of the multiple sampling points corresponding to the current pixel.
For each channel, the following formula can be used to obtain the color value of the current pixel in the channel, taking the red channel as an example: R/=u_Sample, where R is the sum of the color values, in the red channel, of the multiple sampling points corresponding to the current pixel and u_Sample is the number of sampling points.
An example follows:
Referring to FIG. 5, suppose pixel O is the trigger pixel, pixel M is the current pixel and the sampling points corresponding to the current pixel are M1, M2 and M3. If, using the above method, the sum of the color values of M1, M2 and M3 in the R channel is R1+R2+R3, the sum in the G channel is G1+G2+G3 and the sum in the B channel is B1+B2+B3, the RGB value of M can be determined as: (R1+R2+R3)/3, (G1+G2+G3)/3, (B1+B2+B3)/3.
Processing all pixels on the first image as in S304-S305 yields the RGB values of all pixels; assigning the calculated RGB values to the corresponding pixels yields the image after radial chromatic aberration processing.
The image processing method provided by this embodiment provides a radial chromatic aberration method; an image processed with it shows the radial chromatic aberration of a real magnifying glass, giving the special effect a strong sense of realism.
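The per-pixel sampling of S304-S305 can be sketched on the CPU as below. This is a minimal illustration, not the disclosure's shader: the sampling callback `sample_rgb(x, y)` stands in for `texture2D(InputTexture, ...)`, and all parameter values are caller-chosen assumptions.

```python
def radial_aberration_pixel(sample_rgb, center, uv, n_samples,
                            radius_strength, strength,
                            channel_strengths, weight):
    """Compute one pixel's RGB after radial chromatic aberration.

    sample_rgb(x, y) -> (r, g, b) plays the role of texture2D.
    Mirrors S304-A..D and S305 for the R, G and B channels.
    """
    # S304-A: direction from the trigger pixel to the current pixel
    dx, dy = uv[0] - center[0], uv[1] - center[1]
    # S304-B: sampling step, step = dir * radiusStrength * u_Sample
    step = (dx * radius_strength * n_samples, dy * radius_strength * n_samples)
    out = []
    for ch, ch_strength in enumerate(channel_strengths):
        # S304-C: per-channel offset, offset = dir * u_Strength * channelStrength
        ox, oy = dx * strength * ch_strength, dy * strength * ch_strength
        total = 0.0
        # S304-D: accumulate n_samples weighted samples, the offset
        # shrinking by one sampling step per loop iteration
        for _ in range(n_samples):
            total += sample_rgb(uv[0] + ox, uv[1] + oy)[ch] * weight
            ox, oy = ox - step[0], oy - step[1]
        # S305: divide by the number of sampling points
        out.append(total / n_samples)
    return tuple(out)
```

On a constant-color image the averaging leaves the color unchanged; on real content, the per-channel offsets fringe the colors radially away from the trigger pixel.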
Embodiment 3
FIG. 6 is a schematic flowchart of Embodiment 3 of the image processing method provided by the present disclosure. As described above, the special effect processing in the present disclosure may include distortion processing; this embodiment describes the distortion process. As shown in FIG. 6, the image processing method provided by this embodiment includes:
S601: In response to a user's first trigger operation, acquire the screen coordinates of the first trigger operation.
S602: For each frame of first image acquired after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation.
For the implementation of S601 and S602, refer to the description above; it is not repeated here.
S603: Acquire the distortion coefficient corresponding to the first image.
As described above, a first mapping relationship may be established in advance, the first mapping relationship being used to indicate the correspondence between frame numbers and distortion coefficients. When processing the first image, the distortion coefficient corresponding to the first image is first determined according to its frame number and the first mapping relationship, and special effect processing is then performed on the first image using this coefficient according to the coordinates of the trigger pixel.
The distortion process is described below. It should be noted that, as described above, radial chromatic aberration processing essentially recalculates the color value of each pixel on the first image; likewise, the essence of distortion processing is to recalculate the color value of each pixel on the first image. After each pixel is assigned a new color value, an image with a distortion effect is obtained. Specifically, this includes S604-S606.
S604: Obtain the distortion function according to the distortion coefficient.
In one possible implementation, given f(x)=(k-1)x^2+x, any such function whose k satisfies 0.5<=k<=1.0 can serve as the distortion function, with k as the distortion coefficient. Supposing the k obtained in S603 is 0.75, the distortion function is f(x)=(0.75-1)x^2+x=-0.25x^2+x.
S605: For each pixel on the first image, determine the pre-distortion pixel corresponding to the pixel on the first image according to the coordinates of the trigger pixel, the coordinates of the pixel, the distance from the trigger pixel to the pixel and the distortion function.
Specifically, taking any pixel on the first image as an example and, for convenience, calling it the current pixel, the pre-distortion pixel corresponding to the current pixel can be determined by the following formula (shown in the published application only as image PCTCN2022093171-appb-000001):
Figure PCTCN2022093171-appb-000001
where uv is the coordinates of the pre-distortion pixel corresponding to the current pixel, textureCoordinate is the coordinates of the current pixel, center is the coordinates of the trigger pixel, dis is the distance from the trigger pixel to the current pixel, and f is the distortion function.
S606: Take the color value of the pre-distortion pixel as the color value of the current pixel.
After the coordinates of the pre-distortion pixel corresponding to the current pixel are determined through S605, the pre-distortion pixel is located on the first image and its color value is taken as the color value of the current pixel.
An example follows:
FIG. 7 shows the image after radial chromatic aberration processing. Suppose pixel O is the trigger pixel and pixel M is the current pixel. If the pre-distortion pixel corresponding to pixel M obtained through S605 is M1, and the RGB value of M1 is R1, G1, B1, the RGB value of pixel M can be determined as: R1, G1, B1.
Processing all pixels on the first image as in S605-S606 yields the color values of all pixels; assigning the calculated color values to the pixels yields the distorted image.
The image processing method provided by this embodiment provides a distortion method; an image processed with it shows the distortion effect of a real magnifying glass, giving the special effect a strong sense of realism.
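The remap in S605 can be sketched as follows. The exact remap formula appears only as an image in the publication, so the form below, which moves the sample point along the radial direction by the factor f(dis)/dis, is an assumption consistent with the stated inputs (trigger pixel coordinates, pixel coordinates, distance, and the distortion function f(x)=(k-1)x^2+x), not the disclosure's verbatim formula.

```python
import math

def distortion_fn(k):
    """f(x) = (k - 1) * x**2 + x with 0.5 <= k <= 1.0 (cf. S604)."""
    return lambda x: (k - 1.0) * x * x + x

def pre_distortion_uv(texture_coordinate, center, f):
    """Assumed remap for S605: scale the radial offset from the trigger
    pixel by f(dis) / dis, pulling samples toward the trigger pixel."""
    dx = texture_coordinate[0] - center[0]
    dy = texture_coordinate[1] - center[1]
    dis = math.hypot(dx, dy)
    if dis == 0.0:
        return center  # the trigger pixel maps to itself
    s = f(dis) / dis
    return (center[0] + dx * s, center[1] + dy * s)

f = distortion_fn(0.75)                    # f(x) = -0.25*x**2 + x
uv = pre_distortion_uv((0.9, 0.5), (0.5, 0.5), f)
```

With k < 1, f(x) < x for x > 0, so samples are pulled inward and the region around the trigger pixel appears magnified.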
Embodiment 4
FIG. 8 is a schematic flowchart of Embodiment 4 of the image processing method provided by the present disclosure. As described above, the special effect processing in the present disclosure may include scaling processing; this embodiment describes the scaling process. As shown in FIG. 8, the image processing method provided by this embodiment includes:
S801: In response to a user's first trigger operation, acquire the screen coordinates of the first trigger operation.
S802: For each frame of first image acquired after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation.
For the implementation of S801 and S802, refer to the description above; it is not repeated here.
S803: Acquire the scaling coefficient corresponding to the first image.
As described above, a first mapping relationship may be established in advance, the first mapping relationship being used to indicate the correspondence between frame numbers and scaling coefficients. When processing the first image, the scaling coefficient corresponding to the first image is first determined according to its frame number and the above first mapping relationship, and special effect processing is then performed on the first image using this coefficient according to the coordinates of the trigger pixel.
The scaling process is described below; it specifically includes S804-S806.
S804: Determine the scaled vertex coordinates according to the coordinates of the trigger pixel, the current vertex coordinates of the quadrilateral model 10 and the scaling coefficient, the quadrilateral model 10 being used to change the display size of the image.
Specifically, the scaled vertex coordinates can be calculated by the following formula:
pos1=(pos-center)*scale+center
where pos is the current vertex coordinates of the quadrilateral model 10, center is the coordinates of the trigger pixel, scale is the scaling coefficient, and pos1 is the scaled vertex coordinates.
S805: Update the vertex coordinates of the quadrilateral model 10 to the scaled vertex coordinates.
An example follows:
Referring to FIG. 9, suppose the trigger pixel is point O and the current vertices of the quadrilateral model 10 are A, B, C and D. If the scaling coefficient obtained in S803 is 1, it can be determined that the quadrilateral model 10 needs to be doubled in size about point O; the scaled vertices are indicated by A', B', C' and D'.
S806: Map the first image onto the quadrilateral model 10 to obtain the image with the magnifying glass effect.
The image processing method provided by this embodiment provides a scaling method; an image processed with it shows the zoom effect of a real magnifying glass, giving the special effect a strong sense of realism.
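The vertex update of S804-S805 is a plain scale about the trigger point. A minimal sketch (the quad coordinates and the factor 2.0 below are illustrative values, not from the disclosure):

```python
def scale_vertex(pos, center, scale):
    """pos1 = (pos - center) * scale + center (the formula in S804)."""
    return tuple(c + (p - c) * scale for p, c in zip(pos, center))

# Scaling a unit quad A, B, C, D about O = (0.5, 0.5):
# a factor of 2.0 doubles each vertex's offset from O.
quad = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
scaled = [scale_vertex(v, (0.5, 0.5), 2.0) for v in quad]
```

The trigger point itself is a fixed point of the transform, so the magnified content stays centered under the user's touch.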
Embodiment 5
FIG. 10 is a schematic flowchart of Embodiment 5 of the image processing method provided by the present disclosure. As described above, the special effect processing in the present disclosure may include radial blur processing; this embodiment describes the radial blur process. As shown in FIG. 10, the image processing method provided by this embodiment includes:
S1001: In response to a user's first trigger operation, acquire the screen coordinates of the first trigger operation.
S1002: For each frame of first image acquired after the first trigger operation, obtain the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation.
For the implementation of S1001 and S1002, refer to the description above; it is not repeated here.
S1003: Acquire the blur coefficient corresponding to the first image.
As described above, a first mapping relationship may be established in advance, the first mapping relationship being used to indicate the correspondence between frame numbers and blur coefficients. When processing the first image, the blur coefficient corresponding to the first image is first determined according to its frame number and the above correspondence, and special effect processing is then performed on the first image using this coefficient according to the coordinates of the trigger pixel.
As with radial chromatic aberration processing and distortion processing, the essence of radial blur processing is to recalculate the color value of each pixel on the first image; after each pixel is assigned a new color value, an image with a radial blur effect is obtained. Specifically, this includes S1004-S1005.
S1004: For each pixel on the first image, obtain the sum of the color values of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the first image texture and the blur coefficient.
Taking any pixel on the first image as an example and, for convenience, calling it the current pixel, the sum of the color values of the multiple sampling points corresponding to the current pixel can be obtained as follows:
First, determine the direction from the trigger pixel to the current pixel according to the coordinates of the trigger pixel and the coordinates of the current pixel; the specific process is described in the above embodiments and is not repeated here. Then determine the sum of the color values of the multiple sampling points corresponding to the current pixel according to the coordinates of the current pixel, the number of sampling points, the blur coefficient, the first image texture and the direction from the trigger pixel to the current pixel.
Specifically, a loop statement can be used to determine the sum of the color values of the multiple sampling points corresponding to the current pixel; the formulas in the loop are:
vec2 uv = uv + blurfactor * dir * i;
outColor += texture2D(InputTexture, uv)
where i is the loop variable, the number of iterations equals the number of sampling points, uv is the coordinates of the current pixel, blurfactor is the blur coefficient, dir is the direction from the trigger pixel to the current pixel, InputTexture is the first image texture, and outColor is the sum of the color values of the multiple sampling points; vec2 indicates that the coordinates of the current pixel, uv, are a two-dimensional vector.
S1005: Obtain the color value of the current pixel according to the sum of the color values of the multiple sampling points corresponding to the current pixel.
Specifically, the color value of the current pixel can be obtained by the following formula: outColor/=u_Sample, where outColor is the sum of the color values of the multiple sampling points corresponding to the current pixel and u_Sample is the number of sampling points.
Processing all pixels on the first image as in S1004-S1005 yields the color values of all pixels; assigning the calculated color values to the corresponding pixels yields the radially blurred image.
The image processing method provided by this embodiment provides a radial blur method; an image processed with it shows the radial blur effect of a real magnifying glass, giving the special effect a strong sense of realism.
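The loop of S1004-S1005 can be emulated as below. This is a sketch: the sampling callback `sample(x, y)` stands in for `texture2D(InputTexture, ...)` on a single channel, and the parameter values are assumptions for illustration.

```python
def radial_blur_pixel(sample, uv, center, n_samples, blur_factor):
    """Average n_samples samples stepped outward from uv along the radial
    direction, mirroring 'uv = uv + blurfactor * dir * i' followed by
    'outColor /= u_Sample' in S1004-S1005."""
    # direction from the trigger pixel to the current pixel
    dx, dy = uv[0] - center[0], uv[1] - center[1]
    out = 0.0
    for i in range(n_samples):
        sx = uv[0] + blur_factor * dx * i
        sy = uv[1] + blur_factor * dy * i
        out += sample(sx, sy)
    return out / n_samples
```

Pixels farther from the trigger point take longer sampling strides, which produces the streaking that grows toward the edge of the magnified region.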
In one possible implementation, the first image may be subjected, in sequence, to radial chromatic aberration processing, distortion processing, scaling processing and radial blur processing according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image. In this implementation, the result of the radial chromatic aberration processing is the input to the distortion processing, that is, the first image in the distortion process described with FIG. 6 is the image after radial chromatic aberration processing; the result of the distortion processing is the input to the scaling processing, that is, the first image in the scaling process described with FIG. 8 is the distorted image; and the result of the scaling processing is the input to the radial blur processing, that is, the first image in the radial blur process described with FIG. 10 is the scaled image. The image obtained with this processing order is closer to the effect of a real magnifying glass.
Exemplarily, when, in the first mapping relationship, the frame number is positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient and negatively correlated with the distortion coefficient, the magnification of the image obtained through the above image processing increases frame by frame. In one possible implementation, after triggering the first trigger operation on the shooting interface shown in FIG. 2, the user may also trigger a second trigger operation; in response to the second trigger operation, the screen coordinates of the second trigger operation are acquired. For each frame of second image acquired after the second trigger operation, the coordinates of the trigger pixel on the second image are obtained according to the screen coordinates of the second trigger operation; the processing parameters corresponding to the second image are acquired according to the frame number of the second image and a second mapping relationship; and special effect processing is performed on the second image according to the coordinates of the trigger pixel on the second image and the processing parameters corresponding to the second image. In the second mapping relationship, the frame number may be negatively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient, and positively correlated with the distortion coefficient. Thus, after the terminal device receives the second trigger operation, the magnification of the image obtained through the above image processing decreases frame by frame, so that the change of image magnification after the first trigger operation and the change after the second trigger operation are two opposite processes: after the first trigger operation the magnification increases frame by frame, and after the second trigger operation it decreases frame by frame, which makes producing videos more fun for the user.
It is understood that, alternatively, in the first mapping relationship the frame number may be set negatively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient and positively correlated with the distortion coefficient, while in the second mapping relationship the frame number is set positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient and negatively correlated with the distortion coefficient. With this setting, the magnification decreases frame by frame after the first trigger operation and increases frame by frame after the second trigger operation.
FIG. 11 is a schematic structural diagram of the image processing apparatus provided by the present disclosure. As shown in FIG. 11, the image processing apparatus provided by the present disclosure includes:
an acquisition module 1101, configured to acquire, in response to a user's first trigger operation, the screen coordinates of the first trigger operation;
a special effect processing module 1102, configured to obtain, for each frame of first image acquired after the first trigger operation, the coordinates of the trigger pixel on the first image according to the screen coordinates of the first trigger operation, and to perform special effect processing on the first image according to the coordinates of the trigger pixel to obtain an image with a magnifying glass effect.
Optionally, the special effect processing module 1102 is specifically configured to:
acquire the processing parameters corresponding to the first image according to the frame number of the first image and a first mapping relationship, the processing parameters including at least one of: a chromatic aberration intensity coefficient, a distortion coefficient, a scaling coefficient and a blur coefficient, the first mapping relationship being used to indicate the correspondence between frame numbers and processing parameters;
perform special effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image to obtain the image with the magnifying glass effect;
wherein the special effect processing corresponds to the processing parameters and includes at least one of: radial chromatic aberration processing, distortion processing, scaling processing or radial blur processing.
Optionally, the processing parameters include: a chromatic aberration intensity coefficient, and the special effect processing includes: radial chromatic aberration processing; the special effect processing module 1102 is specifically configured to:
for each pixel on the first image, obtain the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the intensity coefficient corresponding to each color channel, the weight coefficient, the first image texture and the chromatic aberration intensity coefficient; and determine the color value of the pixel in each color channel according to the sum of the color values, in each color channel, of the multiple sampling points corresponding to the pixel.
Optionally, the special effect processing module 1102 is specifically configured to:
determine the direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel;
determine the sampling step according to the direction from the trigger pixel to the pixel, the step coefficient and the number of sampling points;
for each color channel of the RGB channels, determine the offset corresponding to the color channel according to the direction from the trigger pixel to the pixel, the chromatic aberration intensity coefficient and the intensity coefficient corresponding to the color channel;
for each color channel of the RGB channels, determine the sum of the color values, in the color channel, of the multiple sampling points corresponding to the pixel according to the first image texture, the coordinates of the pixel, the offset corresponding to the color channel, the sampling step, the number of sampling points and the weight coefficient.
Optionally, the special effect processing module 1102 is specifically configured to:
for each color channel of the RGB channels, divide the sum of the color values, in the color channel, of the multiple sampling points corresponding to the pixel by the number of sampling points to obtain the color value of the pixel in the color channel.
Optionally, the processing parameters include: a distortion coefficient, and the special effect processing includes: distortion processing; the special effect processing module 1102 is specifically configured to:
obtain the distortion function according to the distortion coefficient;
for each pixel on the first image, determine the pre-distortion pixel corresponding to the pixel on the first image according to the coordinates of the trigger pixel, the coordinates of the pixel, the distance from the trigger pixel to the pixel and the distortion function; take the color value of the pre-distortion pixel as the color value of the pixel.
Optionally, the processing parameters include: a scaling coefficient, and the special effect processing includes: scaling processing; the special effect processing module 1102 is specifically configured to:
determine the scaled vertex coordinates according to the coordinates of the trigger pixel, the current vertex coordinates of the quadrilateral model and the scaling coefficient, the quadrilateral model being used to change the display size of the image;
update the vertex coordinates of the quadrilateral model to the scaled vertex coordinates;
map the first image onto the quadrilateral model to obtain the image with the magnifying glass effect.
Optionally, the processing parameters include: a blur coefficient, and the special effect processing includes: radial blur processing; the special effect processing module 1102 is specifically configured to:
for each pixel on the first image, obtain the sum of the color values of the multiple sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the first image texture and the blur coefficient; obtain the color value of the pixel according to the sum of the color values of the multiple sampling points corresponding to the pixel.
Optionally, the special effect processing module 1102 is specifically configured to:
determine the direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel;
determine the sum of the color values of the multiple sampling points corresponding to the pixel according to the coordinates of the pixel, the number of sampling points, the blur coefficient, the first image texture and the direction from the trigger pixel to the pixel.
Optionally, the special effect processing module 1102 is specifically configured to:
divide the sum of the color values of the multiple sampling points corresponding to the pixel by the number of sampling points to obtain the color value of the pixel.
Optionally, the special effect processing module 1102 is specifically configured to:
perform, in sequence, the radial chromatic aberration processing, the distortion processing, the scaling processing and the radial blur processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image.
Optionally, in the first mapping relationship, the frame number is positively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient, and negatively correlated with the distortion coefficient.
Optionally, the acquisition module 1101 is further configured to:
acquire, in response to a user's second trigger operation, the screen coordinates of the second trigger operation;
the special effect processing module 1102 is further configured to:
for each frame of second image acquired after the second trigger operation, obtain the coordinates of the trigger pixel on the second image according to the screen coordinates of the second trigger operation; acquire the processing parameters corresponding to the second image according to the frame number of the second image and a second mapping relationship; and perform special effect processing on the second image according to the coordinates of the trigger pixel on the second image and the processing parameters corresponding to the second image; in the second mapping relationship, the frame number is negatively correlated with the chromatic aberration intensity coefficient, the scaling coefficient and the blur coefficient, and positively correlated with the distortion coefficient.
The image processing apparatus shown in FIG. 11 can be used to execute the steps in any of the above method embodiments. Its implementation principle and technical effect are similar and are not repeated here.
FIG. 12 is a schematic diagram of the hardware structure of the terminal device provided by the present disclosure. As shown in FIG. 12, the terminal device of this embodiment may include:
a processor 1201; and
a memory 1202 for storing executable instructions of the processor;
wherein the processor 1201 is configured to implement the steps of any of the above method embodiments by executing the executable instructions. Its implementation principle and technical effect are similar and are not repeated here.
The present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the processor implements the steps of any of the above method embodiments. Its implementation principle and technical effect are similar and are not repeated here.
The present disclosure also provides a computer program product including a computer program stored in a computer-readable storage medium; at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the steps of any of the above method embodiments are implemented. Its implementation principle and technical effect are similar and are not repeated here.
The present disclosure also provides a computer program which, when executed by a processor, causes the processor to implement the steps of any of the above method embodiments. Its implementation principle and technical effect are similar and are not repeated here.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; the division into units is only a division by logical function, and in actual implementation there may be other ways of division, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be realized through some interfaces; the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
It should be understood that the processor described in the present disclosure may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the present disclosure may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure and not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements for some or all of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (18)

  1. 一种图像处理方法,其特征在于,包括:
    响应于用户的第一触发操作,获取所述第一触发操作的屏幕坐标;
    针对所述第一触发操作后获取到的每帧第一图像,根据所述第一触发操作的屏幕坐标,获取所述第一图像上触发像素点的坐标;
    根据所述触发像素点的坐标,对所述第一图像进行特效处理,得到具有放大镜特效的图像。
  2. The method according to claim 1, wherein performing special-effect processing on the first image according to the coordinates of the trigger pixel to obtain the image with the magnifying glass effect comprises:
    acquiring processing parameters corresponding to the first image according to a frame index of the first image and a first mapping relationship, the processing parameters comprising at least one of: a chromatic aberration strength coefficient, a distortion coefficient, a scaling coefficient, or a blur coefficient, the first mapping relationship indicating a correspondence between frame indices and processing parameters;
    performing special-effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image, to obtain the image with the magnifying glass effect;
    wherein the special-effect processing corresponds to the processing parameters and comprises at least one of: radial chromatic aberration processing, distortion processing, scaling processing, or radial blur processing.
  3. The method according to claim 2, wherein the processing parameters comprise a chromatic aberration strength coefficient and the special-effect processing comprises radial chromatic aberration processing;
    performing special-effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image comprises:
    for each pixel on the first image, acquiring a sum of color values, in each color channel, of a plurality of sampling points corresponding to the pixel, according to the coordinates of the trigger pixel, the coordinates of the pixel, a number of sampling points, a step coefficient, a strength coefficient corresponding to each color channel, a weight coefficient, the first image texture, and the chromatic aberration strength coefficient;
    determining the color value of the pixel in each color channel according to the sum of color values, in each color channel, of the plurality of sampling points corresponding to the pixel.
  4. The method according to claim 3, wherein acquiring the sum of color values, in each color channel, of the plurality of sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the step coefficient, the strength coefficient corresponding to each color channel, the weight coefficient, the first image texture, and the chromatic aberration strength coefficient comprises:
    determining a direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel;
    determining a sampling step according to the direction from the trigger pixel to the pixel, the step coefficient, and the number of sampling points;
    for each color channel of the RGB channels, determining an offset corresponding to the color channel according to the direction from the trigger pixel to the pixel, the chromatic aberration strength coefficient, and the strength coefficient corresponding to the color channel;
    for each color channel of the RGB channels, determining the sum of color values, in the color channel, of the plurality of sampling points corresponding to the pixel, according to the first image texture, the coordinates of the pixel, the offset corresponding to the color channel, the sampling step, the number of sampling points, and the weight coefficient.
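The radial chromatic aberration sampling of claims 3 and 4 can be sketched as follows (an illustrative aid, not the claimed implementation). The image is a nested list of RGB tuples, sampling is nearest-neighbour with edge clamping, and every coefficient default is an assumed value:

```python
import math

def radial_chromatic_sum(image, trigger, pixel, n_samples=4,
                         step_coef=0.01, strength=1.0,
                         channel_coefs=(1.0, 0.0, -1.0), weight=1.0):
    """Per-pixel radial chromatic aberration: sum weighted samples taken
    along the trigger->pixel direction, with a per-channel offset, then
    average over the sample count (claim 5)."""
    h, w = len(image), len(image[0])
    dx, dy = pixel[0] - trigger[0], pixel[1] - trigger[1]
    dist = math.hypot(dx, dy) or 1.0
    ux, uy = dx / dist, dy / dist           # direction trigger -> pixel
    step = step_coef * dist / n_samples     # sampling step (claim 4)

    def sample(x, y, c):
        # Nearest-neighbour texture lookup with clamp-to-edge.
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        return image[yi][xi][c]

    sums = [0.0, 0.0, 0.0]
    for c, ccoef in enumerate(channel_coefs):
        # Per-channel offset along the radial direction (claim 4).
        ox, oy = ux * strength * ccoef, uy * strength * ccoef
        for i in range(n_samples):
            sums[c] += weight * sample(pixel[0] + ox + i * step * ux,
                                       pixel[1] + oy + i * step * uy, c)
    # Claim 5: divide each channel sum by the number of sampling points.
    return tuple(s / n_samples for s in sums)
```

On a uniform image the averaging leaves colors unchanged; on real content the opposing red/blue offsets produce the color fringing characteristic of radial chromatic aberration.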
  5. The method according to claim 3, wherein determining the color value of the pixel in each color channel according to the sum of color values, in each color channel, of the plurality of sampling points corresponding to the pixel comprises:
    for each color channel of the RGB channels, dividing the sum of color values, in the color channel, of the plurality of sampling points corresponding to the pixel by the number of sampling points, to obtain the color value of the pixel in the color channel.
  6. The method according to claim 2, wherein the processing parameters comprise a distortion coefficient and the special-effect processing comprises distortion processing;
    performing special-effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image comprises:
    acquiring a distortion function according to the distortion coefficient;
    for each pixel on the first image, determining a pre-distortion pixel corresponding to the pixel on the first image, according to the coordinates of the trigger pixel, the coordinates of the pixel, the distance from the trigger pixel to the pixel, and the distortion function;
    taking the color value of the pre-distortion pixel as the color value of the pixel.
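A hedged sketch of claim 6's inverse lookup (not the claimed implementation): each output pixel finds its "pre-distortion" source by scaling its offset from the trigger pixel with a distortion function of the distance. The specific barrel-style function 1 / (1 + k·r²) is an assumption; the claim only requires a function derived from the distortion coefficient:

```python
import math

def distort_source(trigger, pixel, k):
    """Return the pre-distortion pixel whose color the output pixel copies,
    using an assumed distortion function of distance r."""
    dx, dy = pixel[0] - trigger[0], pixel[1] - trigger[1]
    r = math.hypot(dx, dy)
    factor = 1.0 / (1.0 + k * r * r)   # assumed distortion function f(r; k)
    return (trigger[0] + dx * factor, trigger[1] + dy * factor)
```

With k = 0 the image is unchanged; with k > 0 source pixels are pulled toward the trigger point, magnifying the region around it.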
  7. The method according to claim 2, wherein the processing parameters comprise a scaling coefficient and the special-effect processing comprises scaling processing;
    performing special-effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image comprises:
    determining scaled vertex coordinates according to the coordinates of the trigger pixel, current vertex coordinates of a quadrilateral model, and the scaling coefficient, the quadrilateral model being used to change the display size of an image;
    updating the vertex coordinates of the quadrilateral model to the scaled vertex coordinates;
    mapping the first image onto the quadrilateral model to obtain the image with the magnifying glass effect.
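The vertex step of claim 7 can be illustrated with a short sketch (illustrative only). Scaling each quad vertex linearly about the trigger point is one assumed way of "determining the scaled vertex coordinates"; the claim itself does not fix the formula:

```python
def scale_quad(vertices, trigger, scale):
    """Scale quadrilateral-model vertices about the trigger point,
    so the zoom appears centred on the user's touch."""
    tx, ty = trigger
    return [(tx + (x - tx) * scale, ty + (y - ty) * scale)
            for x, y in vertices]
```

The trigger point itself is a fixed point of this transform, which is why the magnified content stays anchored under the user's finger.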
  8. The method according to claim 2, wherein the processing parameters comprise a blur coefficient and the special-effect processing comprises radial blur processing;
    performing special-effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image comprises:
    for each pixel on the first image, acquiring a sum of color values of a plurality of sampling points corresponding to the pixel, according to the coordinates of the trigger pixel, the coordinates of the pixel, a number of sampling points, the first image texture, and the blur coefficient;
    acquiring the color value of the pixel according to the sum of color values of the plurality of sampling points corresponding to the pixel.
  9. The method according to claim 8, wherein acquiring the sum of color values of the plurality of sampling points corresponding to the pixel according to the coordinates of the trigger pixel, the coordinates of the pixel, the number of sampling points, the first image texture, and the blur coefficient comprises:
    determining a direction from the trigger pixel to the pixel according to the coordinates of the trigger pixel and the coordinates of the pixel;
    determining the sum of color values of the plurality of sampling points corresponding to the pixel according to the coordinates of the pixel, the number of sampling points, the blur coefficient, the first image texture, and the direction from the trigger pixel to the pixel.
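The radial blur of claims 8–10 can be sketched as follows (illustrative, single grayscale channel for brevity; the step fractions and defaults are assumptions). Each pixel averages several samples taken along the line from the trigger pixel through the pixel, with the blur coefficient controlling the spread:

```python
def radial_blur_pixel(image, trigger, pixel, n_samples=4, blur=0.1):
    """Average n_samples lookups stepped back toward the trigger point;
    claim 10 divides the sample sum by the sample count."""
    h, w = len(image), len(image[0])
    dx, dy = pixel[0] - trigger[0], pixel[1] - trigger[1]

    def sample(x, y):
        # Nearest-neighbour lookup with clamp-to-edge.
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        return image[yi][xi]

    total = 0.0
    for i in range(n_samples):
        t = blur * i / n_samples   # blur-scaled fraction of the radius
        total += sample(pixel[0] - dx * t, pixel[1] - dy * t)
    return total / n_samples
```

Pixels far from the trigger point are smeared along longer radial streaks, producing the "zoom blur" look; on a uniform image the average is the identity.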
  10. The method according to claim 8, wherein acquiring the color value of the pixel according to the sum of color values of the plurality of sampling points corresponding to the pixel comprises:
    dividing the sum of color values of the plurality of sampling points corresponding to the pixel by the number of sampling points, to obtain the color value of the pixel.
  11. The method according to any one of claims 2-10, wherein performing special-effect processing on the first image according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image comprises:
    performing, in sequence, the radial chromatic aberration processing, the distortion processing, the scaling processing, and the radial blur processing on the first image, according to the coordinates of the trigger pixel and the processing parameters corresponding to the first image.
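The fixed ordering of claim 11 amounts to composing four image-to-image stages. A minimal sketch (the stage functions are hypothetical placeholders, not the patent's shaders):

```python
def apply_magnifier_effects(image, trigger, params, stages):
    """Apply effect stages in the order required by claim 11:
    aberration -> distortion -> scaling -> radial blur."""
    for stage in stages:
        image = stage(image, trigger, params)
    return image
```

Each stage consumes the previous stage's output, so e.g. the radial blur operates on the already zoomed and distorted frame.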
  12. The method according to any one of claims 2-11, wherein, in the first mapping relationship, the frame index is positively correlated with the chromatic aberration strength coefficient, the scaling coefficient, and the blur coefficient, and negatively correlated with the distortion coefficient.
  13. The method according to claim 12, further comprising:
    in response to a second trigger operation of the user, acquiring screen coordinates of the second trigger operation;
    for each frame of a second image acquired after the second trigger operation, acquiring coordinates of a trigger pixel on the second image according to the screen coordinates of the second trigger operation;
    acquiring processing parameters corresponding to the second image according to a frame index of the second image and a second mapping relationship;
    performing special-effect processing on the second image according to the coordinates of the trigger pixel on the second image and the processing parameters corresponding to the second image,
    wherein, in the second mapping relationship, the frame index is negatively correlated with the chromatic aberration strength coefficient, the scaling coefficient, and the blur coefficient, and positively correlated with the distortion coefficient.
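The two mappings of claims 12 and 13 describe an enter/exit animation driven by the frame index. A hedged sketch assuming linear ramps over a fixed animation length (all names, ranges, and the linear form are assumptions; the claims only require the stated correlations):

```python
def enter_params(frame, n_frames=30):
    """First mapping: aberration, zoom and blur grow with the frame index,
    distortion shrinks (claim 12)."""
    t = min(frame / n_frames, 1.0)
    return {"aberration": t, "zoom": 1.0 + t, "blur": t,
            "distortion": 1.0 - t}

def exit_params(frame, n_frames=30):
    """Second mapping: the correlations are reversed (claim 13)."""
    t = min(frame / n_frames, 1.0)
    return {"aberration": 1.0 - t, "zoom": 2.0 - t, "blur": 1.0 - t,
            "distortion": t}
```

Looking the parameters up per frame makes the magnifier animate in on the first trigger and animate back out on the second.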
  14. A terminal device, comprising:
    an acquisition module configured to acquire, in response to a first trigger operation of a user, screen coordinates of the first trigger operation;
    a special-effect processing module configured to: for each frame of a first image acquired after the first trigger operation, acquire coordinates of a trigger pixel on the first image according to the screen coordinates of the first trigger operation; and perform special-effect processing on the first image according to the coordinates of the trigger pixel, to obtain an image with a magnifying glass effect.
  15. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the processor implements the method according to any one of claims 1-13.
  16. A terminal device, comprising:
    a processor; and
    a memory for storing executable instructions of the processor;
    wherein the processor is configured to implement the method according to any one of claims 1-13 by executing the executable instructions.
  17. A computer program product comprising a computer program stored in a computer-readable storage medium, wherein at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the method according to any one of claims 1-13 is implemented.
  18. A computer program which, when executed by a processor, implements the method according to any one of claims 1-13.
PCT/CN2022/093171 2021-07-30 2022-05-16 Image processing method and apparatus WO2023005359A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110875574.3 2021-07-30
CN202110875574.3A CN115695681A (zh) 2021-07-30 2021-07-30 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2023005359A1 true WO2023005359A1 (zh) 2023-02-02

Family

ID=85060021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093171 WO2023005359A1 (zh) 2022-05-16 Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN115695681A (zh)
WO (1) WO2023005359A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070033542A1 (en) * 2005-08-04 2007-02-08 Microsoft Corporation Virtual magnifying glass system architecture
CN104898919A (zh) 2014-03-07 2015-09-09 Samsung Electronics Portable terminal and method for magnifying and displaying content
CN107637089A (zh) 2015-05-18 2018-01-26 LG Electronics Display device and control method thereof
CN108062760A (zh) 2017-12-08 2018-05-22 Guangzhou Baiguoyuan Information Technology Video editing method and apparatus, and intelligent mobile terminal
CN108648139A (zh) 2018-04-10 2018-10-12 Guangrui Hengyu (Beijing) Technology Image processing method and apparatus
CN112965780A (zh) 2021-03-30 2021-06-15 Beijing Zitiao Network Technology Image display method, apparatus, device, and medium


Also Published As

Publication number Publication date
CN115695681A (zh) 2023-02-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22847972

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE