CN112634165A - Method and device for image adaptation VI environment - Google Patents

Method and device for image adaptation VI environment

Info

Publication number
CN112634165A
CN112634165A (application CN202011596859.5A)
Authority
CN
China
Prior art keywords
image
processed
replacement
color
mixing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011596859.5A
Other languages
Chinese (zh)
Other versions
CN112634165B (en)
Inventor
林青山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guangzhuiyuan Information Technology Co ltd
Original Assignee
Guangzhou Guangzhuiyuan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guangzhuiyuan Information Technology Co ltd filed Critical Guangzhou Guangzhuiyuan Information Technology Co ltd
Priority to CN202011596859.5A priority Critical patent/CN112634165B/en
Publication of CN112634165A publication Critical patent/CN112634165A/en
Application granted granted Critical
Publication of CN112634165B publication Critical patent/CN112634165B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Abstract

The invention relates to a method and a device for adapting an image to a VI environment. The method comprises: determining an image to be processed from an original image and performing gray-scale processing to obtain a first image; adjusting the contrast and color curve of the first image so that its color approaches white, obtaining a second image; acquiring a target image and performing perspective deformation on it according to the image to be processed, obtaining a third image; performing distortion replacement on the third image with the first image as the replacement map, obtaining a fourth image; mixing the fourth image with the second image to obtain a fifth image; and mixing the fifth image as the upper layer with the original image as the lower layer to output the final image. By intercepting a specific region and applying gray-scale, saturation, and similar processing for different VI environments, the invention lets a picture attach naturally and smoothly to the target region while retaining the wrinkles, perspective, shadows, and other characteristics of the VI environment, improving the overall realism of the picture.

Description

Method and device for image adaptation VI environment
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for adapting an image to a VI (visual identity) environment.
Background
As mobile terminal devices become more widespread and as mobile computing power grows and mobile terminal technology matures, users increasingly need to edit images on mobile devices. However, in the mainstream picture-editing applications currently on the market, stacking several independent pictures as vertical layers easily produces problems such as mutual occlusion between pictures, abrupt boundaries, unnatural fusion between layers, and loss of the pictures' original textures, wrinkles, and lighting.
In related picture-editing applications on the market, a user can import a custom picture and stack several pictures in different VI environments, or combine them in a certain mixed mode, merging several pictures into one for a richer expressive effect. However, some existing picture-editing applications only implement plain superposition, in which the upper picture occludes the lower one; pictures exported this way have hard edges and a very strong sense of boundary, and the upper picture is not fused into the VI environment of the lower picture at all, so the overall effect is unnatural. Others implement mixed modes between pictures to achieve mutual fusion and improve the overall look, but a single mixed mode causes color loss and falls short in handling details such as texture direction and shadow brightness. In other words, the prior art cannot satisfy a user's need to superimpose and mix different pictures so that they fuse naturally with each other.
Disclosure of Invention
In view of the above, the present invention provides a method and a device for adapting an image to a VI environment, to overcome the shortcoming that the prior art cannot satisfy a user's need to superimpose and mix different pictures so that they fuse naturally with each other.
To achieve this purpose, the invention adopts the following technical scheme: a method for image adaptation in a VI environment, comprising:
acquiring an original image, determining an image to be processed according to the original image, and performing gray processing on the image to be processed to obtain a first image;
adjusting the contrast and the color curve of the first image to make the color of the first image approach to white, so as to obtain a second image;
acquiring a target image, and performing perspective deformation processing on the target image according to the image to be processed to obtain a third image;
performing distortion replacement on the third image by taking the first image as a replacement graph to obtain a fourth image;
taking the fourth image as input, and mixing the fourth image with the second image to obtain a fifth image;
and mixing the fifth image as an upper layer image and the original image as a lower layer image, and outputting a final image.
Further, the determining of an image to be processed from the original image includes:
marking the region to be processed with a closed curve, cutting (matting) that region out, and taking the extracted image as the image to be processed.
Further, the performing perspective transformation processing on the target image according to the image to be processed to obtain a third image includes:
selecting 4 vertices for the deformation in the image to be processed, together with the circumscribed rectangle of the image to be processed, and determining the coordinates of the 4 vertices and the width and height of the circumscribed rectangle;
performing the deformation calculation from the vertex coordinates and the width and height of the circumscribed rectangle to obtain a transformation matrix;
and inputting the transformation matrix into a fragment shader as a deformation parameter, performing 3D matrix transformation on a target image in the fragment shader, and outputting a third image.
Further, the obtaining of the transformation matrix by performing the deformation calculation from the vertex coordinates and the width and height of the circumscribed rectangle includes:
dividing the coordinates of the 4 vertices by the width and the height of the circumscribed rectangle respectively, and converting the result into normalized coordinates in the range 0-1;
and calculating a transformation matrix according to the normalized coordinates.
Further, the performing a warping replacement on the third image with the first image as a replacement map to obtain a fourth image includes:
inputting the texture of the third image and the texture of the first image into a fragment shader;
traversing each pixel of the replacement map, using the R component and the G component to calculate the offsets in the horizontal and vertical directions, multiplying the offsets by the replacement ratio, and converting the result into a value in 0-1;
summing the current texture coordinate with the 0-1 offset to obtain a target coordinate point, and taking the color at the target coordinate point as the color of the current coordinate point.
Further, the performing a warping replacement on the third image with the first image as a replacement map to obtain a fourth image further includes:
if the replacement map has only one color channel, that channel controls the replacement in both the horizontal and vertical directions; if the replacement map has multiple color channels, the red channel controls the horizontal replacement and the green channel controls the vertical replacement;
in the color channel of each pixel of the replacement map, a gray value greater than 128 causes the corresponding pixel of the third image to be replaced by a pixel horizontally to the right and vertically below; a gray value less than 128 causes it to be replaced by a pixel horizontally to the left and vertically above; at a gray value of exactly 128, no pixel replacement occurs.
Further, the mixing the fourth image with the second image to obtain a fifth image includes:
inputting the fourth image and the second image into a fragment shader;
and multiplying and mixing the RGB value in the fourth image with the RGB value in the second image, and outputting a mixed image, namely a fifth image.
Further, the mixing the fifth image as an upper layer image and the original image as a lower layer image, and outputting a final image includes:
and after determining the incoming coordinates, angles and mask layers, directly placing the fifth image to the upper layer of the image to be processed and outputting a final image.
Further, the pixel replacement distance is the product of (gray value - 128) and the replacement ratio.
The embodiment of the present application provides an apparatus for image adaptation VI environment, including:
the acquisition module is used for acquiring an original image, determining an image to be processed according to the original image, and performing gray processing on the image to be processed to obtain a first image;
the adjusting module is used for adjusting the contrast and the color curve of the first image to enable the color of the first image to approach to white, and a second image is obtained;
the deformation module is used for acquiring a target image and carrying out perspective deformation processing on the target image according to the image to be processed to obtain a third image;
the displacement module is used for carrying out distortion displacement on the third image by taking the first image as a displacement graph to obtain a fourth image;
the first mixing module is used for taking the fourth image as input and mixing the fourth image with the second image to obtain a fifth image;
and the second mixing module is used for mixing the fifth image as an upper-layer image and the original image as a lower-layer image, and outputting a final image.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
the invention provides a method and a device for image adaptation VI environment, which comprises the steps of obtaining an original image, determining an image to be processed according to the original image, and carrying out gray level processing on the image to be processed to obtain a first image; adjusting the contrast and the color curve of the first image to make the color of the first image approach to white, so as to obtain a second image; acquiring a target image, and performing perspective deformation processing on the target image according to the image to be processed to obtain a third image; performing distortion replacement on the third image by taking the first image as a replacement graph to obtain a fourth image; taking the fourth image as input, and mixing the fourth image with the second image to obtain a fifth image; and mixing the fifth image as an upper image and the original image as a lower image, outputting a final image to different VI environments, intercepting a specific area according to factors such as the overall outline, color, transparency, material and shadow, performing processing such as gray scale and saturation, and performing methods such as distortion replacement, 3D deformation, masking and mixed mode mapping on the image layer by using OpenGL, so that the user-defined image can be naturally and smoothly attached to a target area, the characteristics such as folds, perspective and shadow in the VI environment are reserved, the integrated effect is achieved, and the overall and true impression of the image is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic diagram of the steps of the method for image adaptation in a VI environment of the present invention;
FIG. 2 is a schematic diagram of an original image provided by the present invention;
FIG. 3 is a schematic diagram of a first image provided by the present invention;
FIG. 4 is a schematic diagram of a second image provided by the present invention;
FIG. 5 is a schematic view of a target image provided by the present invention;
FIG. 6 is a schematic diagram of a third image provided by the present invention;
FIG. 7 is a schematic diagram of a fourth image provided by the present invention;
FIG. 8 is a schematic diagram of a fifth image provided by the present invention;
FIG. 9 is a schematic diagram of a final image provided by the present invention;
FIG. 10 is a schematic diagram of a method for computing a transformation matrix according to the present invention;
FIG. 11 is a flowchart illustrating a method for image adaptation to a VI environment according to the present invention;
fig. 12 is a schematic structural diagram of the apparatus for image adaptation VI environment according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific method for image adaptation to VI environment provided in the embodiments of the present application is described below with reference to the accompanying drawings.
As shown in fig. 1, the method for image adaptation VI environment provided in the embodiment of the present application includes:
s101, acquiring an original image, determining an image to be processed according to the original image, and performing gray processing on the image to be processed to obtain a first image;
preferably, the determining the image to be processed according to the original image includes:
and marking the area to be processed by using a closed curve, scratching the area to be processed, and determining the scratched image as an image to be processed.
The original image is shown in fig. 2. The selected region is cut out; marking the target region with a closed curve gives a better result. The specific matting implementation may adopt any suitable scheme or existing technology as required; this application places no restriction here, and the edge precision can likewise be set flexibly as needed. Gray-scale processing of the extracted picture yields the first image, as shown in fig. 3.
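The gray-scale step can be sketched as follows. The patent does not name a specific conversion formula, so the standard ITU-R BT.601 luminance weights and numpy arrays are assumptions here.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 uint8 RGB image to single-channel grayscale
    using ITU-R BT.601 luminance weights (an assumption; the patent does
    not name a formula)."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# A 1 x 2 image: pure white and pure red.
img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=np.uint8)
print(to_grayscale(img))  # white -> 255, red -> 76
```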
S102, adjusting the contrast and the color curve of the first image to enable the color of the first image to approach to white, and obtaining a second image;
specifically, the contrast and the color curve are properly adjusted to make the picture closer to white, and a second image is obtained, as shown in fig. 4.
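The "approach white" adjustment is not specified beyond contrast and a color curve; the sketch below is one plausible interpretation, and the `contrast` and `lift` parameter values are invented for illustration.

```python
import numpy as np

def push_toward_white(gray, contrast=1.2, lift=0.35):
    """One plausible contrast/curve adjustment: stretch contrast about
    mid-gray, then lift every value toward 255. Parameter values are
    assumptions, not taken from the patent."""
    g = gray.astype(np.float64)
    g = (g - 128.0) * contrast + 128.0   # contrast about mid-gray
    g = g + (255.0 - g) * lift           # lift toward white
    return np.clip(np.rint(g), 0, 255).astype(np.uint8)

gray = np.array([[0, 128, 200]], dtype=np.uint8)
out = push_toward_white(gray)
print(out)  # every value moves toward white
```

Any monotone curve that brightens the picture while keeping contrast would serve the same role.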
S103, acquiring a target image, and performing perspective deformation processing on the target image according to the image to be processed to obtain a third image;
preferably, the performing perspective transformation processing on the target image according to the image to be processed to obtain a third image includes:
selecting 4 vertices for the deformation in the image to be processed, together with the circumscribed rectangle of the image to be processed, and determining the coordinates of the 4 vertices and the width and height of the circumscribed rectangle;
performing the deformation calculation from the vertex coordinates and the width and height of the circumscribed rectangle to obtain a transformation matrix;
and inputting the transformation matrix into a fragment shader as a deformation parameter, performing 3D matrix transformation on a target image in the fragment shader, and outputting a third image.
Preferably, the obtaining of the transformation matrix by performing the deformation calculation from the vertex coordinates and the width and height of the circumscribed rectangle includes:
dividing the coordinates of the 4 vertices by the width and the height of the circumscribed rectangle respectively, and converting the result into normalized coordinates in the range 0-1;
and calculating a transformation matrix according to the normalized coordinates.
Specifically, perspective transformation is performed on the target image to be mixed, shown in fig. 5. The deformation parameters are calculated from the positions of the 4 vertices used for deformation in the image to be processed, and a 3D matrix transformation is applied to the input picture in the fragment shader, so that the picture shows the "near objects look large, far objects look small" perspective effect when mixed. The resulting third image is shown in fig. 6. The matrix transformation steps are as follows,
firstly, the coordinates of the 4 vertexes used for deformation in the original image are divided by the width and the height of the original image, the coordinates are converted into normalized coordinates of 0-1, and then the normalized coordinates are used for calculating a 4 x4 3D transformation matrix.
And inputting the target image into a fragment shader, introducing a 4 x4 3D transformation matrix as a parameter, expanding two-dimensional texture coordinates into a four-dimensional vector, multiplying the four-dimensional vector by the matrix, and intercepting a visible part to obtain a deformed picture, namely a third image.
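The shader step just described (expand the 2D texture coordinate to a 4D vector, multiply by the matrix, divide out the perspective term) can be mimicked on the CPU. This is a sketch of the operation, not the patent's shader code.

```python
import numpy as np

def warp_coord(m, u, v):
    """Mimic the fragment-shader step: expand a 2D texture coordinate to
    a 4D vector, multiply by the 4x4 matrix, then perform the perspective
    divide to get the warped coordinate."""
    p = m @ np.array([u, v, 0.0, 1.0])
    return p[0] / p[3], p[1] / p[3]

# The identity matrix leaves coordinates unchanged.
print(warp_coord(np.eye(4), 0.25, 0.75))
```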
S104, performing distortion replacement on the third image by taking the first image as a replacement image to obtain a fourth image;
preferably, the performing a warping substitution on the third image with the first image as a substitution map to obtain a fourth image includes:
inputting the texture of the third image and the texture of the first image into a fragment shader;
traversing each pixel point of the replacement graph, calculating the offset of the fourth image in the horizontal direction and the vertical direction by using the R component and the G component, multiplying the offset by the replacement proportion, and converting the obtained result into a value of 0-1;
summing the value of the current texture coordinate and the offset converted into 0-1 to obtain a target coordinate point; and acquiring the color of the target coordinate point as the color of the current coordinate point.
Specifically, the distorted third image is subjected to distortion replacement by using a gray scale image generated by preprocessing, namely the first image, as a replacement image. The steps are as follows,
the 2 textures are input to the fragment shader using the third image as the original and the preprocessed grayscale image, i.e., the first image, as the replacement image.
Traversing each pixel point of the replacement graph, calculating the offset (maximum offset of 128 pixels) of the original graph in the horizontal direction and the vertical direction by using the R component and the G component, multiplying the offset by the replacement proportion, and converting the offset into a value of 0-1.
And adding the offset of 0-1 to the value of the current texture coordinate to obtain a target coordinate point, and using the point for color sampling to serve as the color of the current point.
Wherein, still include:
if the permutation map has only one color channel, the color channel controls the permutation in the horizontal and vertical directions simultaneously; if the replacement graph has a plurality of color channels, the red channel controls the replacement in the horizontal direction, and the green channel controls the replacement in the vertical direction;
in the color channel of each pixel of the replacement map, if the gray value is greater than 128, the pixel corresponding to the pixel in the third image is replaced by the pixel horizontally to the right and vertically downwards; if the gray value is less than 128, the pixel corresponding to the third image is replaced by the pixel horizontally to the left and vertically to the top; when the gradation value is 128, no pixel replacement occurs. The resulting image is the fourth image, as shown in fig. 7.
Preferably, the pixel replacement formula is: replacement distance = (gray value - 128) × replacement ratio. When the replacement ratio is 100%, the maximum pixel displacement of 128 pixels is produced.
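The replacement rule above can be sketched in pixel space (the shader itself works in normalized texture coordinates). Nearest-neighbour sampling and edge clamping are assumptions made for the sketch.

```python
import numpy as np

def displace(src, disp_gray, ratio=1.0):
    """Gray-value displacement: each output pixel samples the source at
    an offset of (gray - 128) * ratio pixels in both x and y, so a
    single-channel map drives both directions, as the text describes.
    Nearest sampling and edge clamping are assumptions."""
    h, w = disp_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    off = np.rint((disp_gray.astype(np.int32) - 128) * ratio).astype(np.int32)
    sx = np.clip(xs + off, 0, w - 1)   # >128 samples to the right
    sy = np.clip(ys + off, 0, h - 1)   # >128 samples downward
    return src[sy, sx]

src = np.arange(16, dtype=np.uint8).reshape(4, 4)
disp = np.full((4, 4), 128, dtype=np.uint8)   # 128 => no displacement
disp[0, 0] = 129                              # offset of +1 px in x and y
out = displace(src, disp)
print(out[0, 0])  # pixel (0,0) now shows src[1,1], i.e. 5
```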
S105, taking the fourth image as input, and mixing the fourth image with the second image to obtain a fifth image;
preferably, the mixing the fourth image with the second image to obtain a fifth image includes:
inputting the fourth image and the second image into a fragment shader;
and multiplying and mixing the RGB value in the fourth image with the RGB value in the second image, and outputting a mixed image, namely a fifth image.
Specifically, the result of the distortion replacement, i.e., the fourth image, and the preprocessed second image extracted from the original are input into the fragment shader, and for every pixel the RGBA values of the two images are multiplied; the RGBA values of the whitened gray-scale image need to be compensated according to a certain rule, otherwise the mixing effect is poor when the fourth image is close to black, as in fig. 7. The result of the processing is shown in fig. 8. In order to retain characteristics such as texture, wrinkles, and shadows, a mixing mode that multiplies the color values of the 2 pictures (similar to the "Multiply" blend mode) is required. Although multiplying the RGB values causes some color loss, the second image has been preprocessed to keep its pixel values as close to white as possible, so the loss from multiplication is relatively small and the realism of the fourth image is largely retained.
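The multiply blend itself is standard; a minimal per-channel sketch shows why whitening the second image limits the loss: white (255) in either input leaves the other input unchanged.

```python
import numpy as np

def multiply_blend(a, b):
    """'Multiply' blending: per-channel product scaled back into 0-255.
    A 255 (white) value in either input passes the other through."""
    prod = a.astype(np.uint16) * b.astype(np.uint16)
    return (prod // 255).astype(np.uint8)

a = np.array([[200]], dtype=np.uint8)
print(multiply_blend(a, np.array([[255]], dtype=np.uint8)))  # [[200]]
print(multiply_blend(a, np.array([[128]], dtype=np.uint8)))  # [[100]]
```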
And S106, mixing the fifth image as an upper layer image and the original image as a lower layer image, and outputting a final image.
Preferably, the mixing the fifth image as an upper layer image and the original image as a lower layer image to output a final image includes:
and after determining the incoming coordinates, angles and mask layers, directly placing the fifth image to the upper layer of the image to be processed and outputting a final image.
Specifically, the result of the previous step, i.e., the fifth image, and the original image are input into the fragment shader and mixed in the "normal" mode: the fifth image is simply placed over the original image, which ensures that the color information of the fifth image is displayed completely and normally. The display range, rotation, and special shape of the fifth image can be determined by passing in coordinates, an angle, and a mask layer, which determines the display result of the final composited picture; the edges are also given a smooth transition to reduce jaggies. The final result is shown in fig. 9.
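The "normal" mixing with a mask layer can be sketched as plain alpha compositing; treating the mask as an alpha weight in [0, 1] is an assumption consistent with the smooth-edge step described above.

```python
import numpy as np

def normal_blend(fg, bg, alpha):
    """'Normal' mode: the foreground simply covers the background,
    weighted by a mask/alpha value in [0, 1] so edges can transition
    smoothly instead of cutting off hard."""
    out = fg.astype(np.float64) * alpha + bg.astype(np.float64) * (1.0 - alpha)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

fg = np.array([[200]], dtype=np.uint8)
bg = np.array([[50]], dtype=np.uint8)
print(normal_blend(fg, bg, 1.0))   # fully opaque interior pixel
print(normal_blend(fg, bg, 0.5))   # half-mixed edge pixel
```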
It can be understood that the present application can be implemented by using a mobile terminal, and the mobile terminal can take a picture through a camera to obtain an original image, or obtain the original image in a memory of the mobile terminal.
It can be understood that the specific steps of calculating the transformation matrix are:
as shown in fig. 10, for example, to paste a figure into an inside trapezoid, it contains 4 vertices and a circumscribed rectangle; coordinates of the trapezoids are expressed by (x, y), relative coordinates of the trapezoids in the circumscribed rectangle are calculated, the coordinates at the upper left corner are (x0, y0), the size of the circumscribed rectangle is w x h, and then the relative coordinates are ((x-x0)/w, (y-y 0)/h);
coordinates of the upper left corner of the rectangle are (X, Y), and the size is W X H; the 4 vertex coordinates of the trapezoid are (x1, y1), (x2, y2), (x3, y3), (x4, y 4);
after normalization, X is 0, Y is 0, W is 1, and H is 1;
wherein x1, x2, x3, x4, y1, y2, y3 and y4 are all between 0 and 1;
firstly, calculating the difference of the vertical coordinates of each vertex, specifically as follows:
y14=y1-y4;
y21=y2-y1;
y31=y3-y1;
y32=y3-y2;
y42=y4-y2;
y43=y4-y3;
calculating a deformation matrix
a=-H*(x2*x3*y14+x2*x4*y31-x1*x4*y32+x1*x3*y42);
b=W*(x2*x3*y14+x3*x4*y21+x1*x4*y32+x1*x2*y43);
c=H*X*(x2*x3*y14+x2*x4*y31-x1*x4*y32+x1*x3*y42)-H*W*x1*(x4*y32-x3*y42+x2*y43)-W*Y*(x2*x3*y14+x3*x4*y21+x1*x4*y32+x1*x2*y43);
d=H*(-x4*y21*y3+x2*y1*y43-x1*y2*y43-x3*y1*y4+x3*y2*y4);
e=W*(x4*y2*y31-x3*y1*y42-x2*y31*y4+x1*y3*y42);
f=-(W*(x4*(Y*y2*y31+H*y1*y32)-x3*(H+Y)*y1*y42+H*x2*y1*y43+x2*Y*(y1-y3)*y4+x1*Y*y3*(-y2+y4))-H*X*(x4*y21*y3-x2*y1*y43+x3*(y1-y2)*y4+x1*y2*(-y3+y4)));
g=H*(x3*y21-x4*y21+(-x1+x2)*y43);
h=W*(-x2*y31+x4*y31+(x1-x3)*y42);
i=W*Y*(x2*y31-x4*y31-x1*y42+x3*y42)+H*(X*(-(x3*y21)+x4*y21+x1*y43-x2*y43)+W*(-(x3*y2)+x4*y2+x2*y3-x4*y3-x2*y4+x3*y4));
After the formula for i is evaluated, a threshold k = 0.0001 is preset and the absolute value of i is compared with k:
if the absolute value of i is less than k, then i is set to k when i is greater than 0, and to -k otherwise;
if the absolute value of i is greater than or equal to k, i is used directly.
The final distortion matrix is assembled from the coefficients above (the 3 × 3 form below is expanded to the 4 × 4 matrix used by the shader):

    | a  b  c |
    | d  e  f |
    | g  h  i |
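The coefficient formulas above can be transcribed directly. In the sketch below the vertex order (top-left, top-right, bottom-left, bottom-right) is inferred by checking that the unit square produces an identity mapping; the text itself does not state the order.

```python
import numpy as np

def quad_homography(pts, X=0.0, Y=0.0, W=1.0, H=1.0, k=1e-4):
    """Transcription of the coefficient formulas in the text, for
    coordinates already normalized to 0-1. The assumed vertex order is
    top-left, top-right, bottom-left, bottom-right."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = pts
    y14, y21, y31 = y1 - y4, y2 - y1, y3 - y1
    y32, y42, y43 = y3 - y2, y4 - y2, y4 - y3
    a = -H * (x2*x3*y14 + x2*x4*y31 - x1*x4*y32 + x1*x3*y42)
    b = W * (x2*x3*y14 + x3*x4*y21 + x1*x4*y32 + x1*x2*y43)
    c = (H*X*(x2*x3*y14 + x2*x4*y31 - x1*x4*y32 + x1*x3*y42)
         - H*W*x1*(x4*y32 - x3*y42 + x2*y43)
         - W*Y*(x2*x3*y14 + x3*x4*y21 + x1*x4*y32 + x1*x2*y43))
    d = H * (-x4*y21*y3 + x2*y1*y43 - x1*y2*y43 - x3*y1*y4 + x3*y2*y4)
    e = W * (x4*y2*y31 - x3*y1*y42 - x2*y31*y4 + x1*y3*y42)
    f = -(W*(x4*(Y*y2*y31 + H*y1*y32) - x3*(H + Y)*y1*y42 + H*x2*y1*y43
             + x2*Y*(y1 - y3)*y4 + x1*Y*y3*(-y2 + y4))
          - H*X*(x4*y21*y3 - x2*y1*y43 + x3*(y1 - y2)*y4 + x1*y2*(-y3 + y4)))
    g = H * (x3*y21 - x4*y21 + (-x1 + x2)*y43)
    h = W * (-x2*y31 + x4*y31 + (x1 - x3)*y42)
    i = (W*Y*(x2*y31 - x4*y31 - x1*y42 + x3*y42)
         + H*(X*(-(x3*y21) + x4*y21 + x1*y43 - x2*y43)
              + W*(-(x3*y2) + x4*y2 + x2*y3 - x4*y3 - x2*y4 + x3*y4)))
    if abs(i) < k:                 # the threshold guard from the text
        i = k if i > 0 else -k
    return np.array([[a, b, c], [d, e, f], [g, h, i]])

def apply_h(m, u, v):
    """Apply the homography to a normalized point, with perspective divide."""
    p = m @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Sanity check: mapping the unit square onto itself yields (a multiple of)
# the identity, so every point maps to itself.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
m = quad_homography(square)
print(apply_h(m, 0.3, 0.6))
```

Mapping the rectangle corner (0, 0) through a matrix built for a shifted quad lands on that quad's first vertex, which is how the sanity check below exercises the formulas.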
then, inputting the target image into a fragment shader, introducing the 4 x4 3D transformation matrix as a parameter, expanding two-dimensional texture coordinates into four-dimensional vectors, multiplying the four-dimensional vectors by the matrix, and intercepting a visible part to obtain a deformed picture, namely a third image.
The working principle of the method for image adaptation VI environment provided by the application is as follows: referring to fig. 11, first, an original image as shown in fig. 2 is obtained, an image to be processed is determined according to the original image, and a gray scale process is performed on the image to be processed to obtain a first image as shown in fig. 3; adjusting the contrast and color curve of the first image to make the color of the first image approach to white, so as to obtain a second image shown in fig. 4; acquiring a target image shown in fig. 5, and performing perspective deformation processing on the target image according to the image to be processed to obtain a third image shown in fig. 6; performing distortion replacement on the third image by taking the first image as a replacement graph to obtain a fourth image shown in fig. 7; taking the fourth image as an input, and performing mixing processing on the fourth image and the second image to obtain a fifth image shown in fig. 8; and mixing the fifth image as an upper layer image and the original image as a lower layer image, and outputting a final image as shown in fig. 9.
As shown in fig. 12, an embodiment of the present application provides an apparatus for image adaptation VI environment, including:
an obtaining module 201, configured to obtain an original image, determine an image to be processed according to the original image, and perform gray processing on the image to be processed to obtain a first image;
an adjusting module 202, configured to adjust a contrast and a color curve of the first image, so that a color of the first image approaches white, and a second image is obtained;
the deformation module 203 is configured to acquire a target image, and perform perspective deformation processing on the target image according to the image to be processed to obtain a third image;
a replacement module 204, configured to perform distortion replacement on the third image by using the first image as a replacement map to obtain a fourth image;
a first mixing module 205, configured to take the fourth image as an input, and perform mixing processing on the fourth image and the second image to obtain a fifth image;
and a second blending module 206, configured to blend the fifth image as an upper layer image and the original image as a lower layer image, and output a final image.
The working principle of the device for image adaptation VI environment provided by the application is that an acquisition module 201 acquires an original image, determines an image to be processed according to the original image, and performs gray processing on the image to be processed to obtain a first image; the adjusting module 202 adjusts the contrast and the color curve of the first image, so that the color of the first image approaches to white, and a second image is obtained; the deformation module 203 acquires a target image, and performs perspective deformation processing on the target image according to the image to be processed to obtain a third image; the replacement module 204 performs distortion replacement on the third image by taking the first image as a replacement graph to obtain a fourth image; the first mixing module 205 takes the fourth image as input, and performs mixing processing on the fourth image and the second image to obtain a fifth image; the second blending module 206 blends the fifth image as an upper layer image and the original image as a lower layer image, and outputs a final image.
An embodiment of the present application provides a computer device comprising a processor and a memory connected to the processor;
the memory is used to store a computer program for performing the method for adapting an image to a VI environment provided by any of the above embodiments;
the processor is used to call and execute the computer program in the memory.
In summary, the present invention provides a method and an apparatus for adapting an image to a VI environment. The method includes: acquiring an original image, determining an image to be processed according to the original image, and performing grayscale processing on the image to be processed to obtain a first image; adjusting the contrast and color curve of the first image so that its color approaches white, obtaining a second image; acquiring a target image and performing perspective deformation processing on it according to the image to be processed to obtain a third image; performing distortion replacement on the third image with the first image as a replacement map to obtain a fourth image; mixing the fourth image with the second image to obtain a fifth image; and mixing the fifth image as the upper-layer image with the original image as the lower-layer image to output the final image. For different VI environments, a specific area is cut out according to factors such as the overall outline, color, transparency, material and shadow, processed for grayscale and saturation, and then distortion replacement, 3D deformation, masking and blend-mode mapping are applied to the layers with OpenGL. As a result, a user-defined image can be attached naturally and smoothly to the target area while the folds, perspective, shadows and other characteristics of the VI environment are preserved, achieving an integrated effect and improving the overall realism of the image.
It is to be understood that the embodiments of the method provided above correspond to the embodiments of the apparatus described above, and the corresponding specific contents may be referred to each other, which is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method for image adaptation to a VI environment, comprising:
acquiring an original image, determining an image to be processed according to the original image, and performing grayscale processing on the image to be processed to obtain a first image;
adjusting the contrast and the color curve of the first image so that the color of the first image approaches white, obtaining a second image;
acquiring a target image, and performing perspective deformation processing on the target image according to the image to be processed to obtain a third image;
performing distortion replacement on the third image with the first image as a replacement map to obtain a fourth image;
taking the fourth image as input, and mixing the fourth image with the second image to obtain a fifth image;
and mixing the fifth image as an upper layer image and the original image as a lower layer image, and outputting a final image.
2. The method of claim 1, wherein determining the image to be processed from the original image comprises:
marking the area to be processed with a closed curve, cutting out the area to be processed, and determining the cut-out image as the image to be processed.
3. The method of claim 1, wherein performing the perspective deformation processing on the target image according to the image to be processed to obtain the third image comprises:
selecting, in the image to be processed, 4 vertices to be used for the deformation together with the circumscribed rectangle of the image to be processed, and determining the coordinates of the 4 vertices and the width and height of the circumscribed rectangle;
performing a deformation calculation from the vertex coordinates and the width and height of the circumscribed rectangle to obtain a transformation matrix;
and inputting the transformation matrix into a fragment shader as a deformation parameter, performing a 3D matrix transformation on the target image in the fragment shader, and outputting the third image.
4. The method of claim 3, wherein performing the deformation calculation from the coordinates of the vertices and the width and height of the circumscribed rectangle to obtain the transformation matrix comprises:
dividing the coordinates of the 4 vertices by the width and the height of the circumscribed rectangle, respectively, and converting the results into normalized coordinates in the range 0-1;
and calculating the transformation matrix from the normalized coordinates.
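The normalization step of claim 4 can be sketched as follows. This illustrative Python function is an assumption about the claim's wording; the perspective-matrix solver that consumes the normalized quad is not shown, and the function name is hypothetical.

```python
# Illustrative sketch of claim 4's normalization: divide each of the
# 4 deformation vertices by the circumscribed rectangle's width and
# height so the coordinates fall in [0, 1].

def normalize_vertices(vertices, width, height):
    """vertices: list of four (x, y) pixel coordinates.
    Returns the coordinates normalized to the 0-1 range."""
    return [(x / width, y / height) for x, y in vertices]
```

Normalizing first keeps the downstream transformation matrix independent of the image resolution, matching the 0-1 texture-coordinate convention of the fragment shader.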
5. The method of claim 1, wherein performing the distortion replacement on the third image with the first image as the replacement map to obtain the fourth image comprises:
inputting the texture of the third image and the texture of the first image into a fragment shader;
traversing each pixel of the replacement map, calculating the offsets of the fourth image in the horizontal and vertical directions from the R component and the G component, multiplying the offsets by the replacement ratio, and converting the results into values in the range 0-1;
summing the current texture coordinate and the converted offset to obtain a target coordinate point, and taking the color of the target coordinate point as the color of the current coordinate point.
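A CPU-side sketch of the per-pixel displacement described in claim 5 follows. The patent performs this in an OpenGL fragment shader on 0-1 texture coordinates; this pure-Python version with pixel-unit offsets, border clamping, and the sign convention of claim 6 is an illustrative assumption, not the claimed implementation.

```python
# Illustrative sketch: for each output pixel, the replacement map's
# R and G components (offset from the 128 midpoint, scaled by a
# replacement ratio) shift the coordinate used to sample the third image.

def displace(third, replacement_map, ratio):
    """third, replacement_map: equal-sized 2D lists of (r, g, b) tuples.
    Returns the displaced (fourth) image."""
    h, w = len(third), len(third[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r, g, _ = replacement_map[y][x]
            # >128 pushes right/down, <128 pushes left/up, 128 is neutral.
            dx = round((r - 128) * ratio)
            dy = round((g - 128) * ratio)
            sx = min(max(x + dx, 0), w - 1)   # clamp to image borders
            sy = min(max(y + dy, 0), h - 1)
            row.append(third[sy][sx])
        out.append(row)
    return out
```

Because the replacement map is the grayscale of the VI surface itself, brightness variations caused by folds translate directly into local warps of the pasted image.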
6. The method of claim 5, wherein performing the distortion replacement on the third image with the first image as the replacement map to obtain the fourth image further comprises:
if the replacement map has only one color channel, that channel controls the replacement in both the horizontal and the vertical direction; if the replacement map has multiple color channels, the red channel controls the replacement in the horizontal direction and the green channel controls the replacement in the vertical direction;
in the color channel of each pixel of the replacement map, if the gray value is greater than 128, the corresponding pixel in the third image is displaced horizontally to the right and vertically downward; if the gray value is less than 128, the corresponding pixel is displaced horizontally to the left and vertically upward; when the gray value is exactly 128, no pixel replacement occurs.
7. The method of claim 1, wherein mixing the fourth image with the second image to obtain the fifth image comprises:
inputting the fourth image and the second image into a fragment shader;
and multiply-blending the RGB values of the fourth image with the RGB values of the second image, and outputting the blended image as the fifth image.
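The multiply blend of claim 7 can be sketched per pixel as follows. This is illustrative: the patent's fragment shader works on 0-1 floats, while this sketch uses 0-255 integers, and the function name is hypothetical.

```python
# Illustrative sketch of a per-pixel multiply blend: each channel is the
# product of the two layers' channels, renormalized to 0-255. White (255)
# leaves the other layer unchanged and black (0) forces black, which is
# what lets the whitened second image re-introduce shadows and folds.

def multiply_blend(a, b):
    """a, b: (r, g, b) tuples with 0-255 integer channels."""
    return tuple(ca * cb // 255 for ca, cb in zip(a, b))
```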
8. The method of claim 1, wherein mixing the fifth image as the upper-layer image with the original image as the lower-layer image to output the final image comprises:
after the passed-in coordinates, angle and mask layer are determined, placing the fifth image directly on the upper layer of the image to be processed and outputting the final image.
9. The method of claim 6, wherein the distance of the pixel replacement is the product of the difference between the gray value and 128 and the replacement ratio.
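The claim 9 distance rule, combined with the sign convention of claim 6, reduces to a one-line formula. This is an illustrative sketch; the function name is hypothetical.

```python
# Illustrative sketch of claim 9: replacement distance is
# (gray_value - 128) * replacement_ratio, so 128 means no movement,
# values above 128 move right/down, and values below move left/up.

def replacement_distance(gray_value, ratio):
    return (gray_value - 128) * ratio
```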
10. An apparatus for image adaptation in a VI environment, comprising:
the acquisition module is used for acquiring an original image, determining an image to be processed according to the original image, and performing gray processing on the image to be processed to obtain a first image;
the adjusting module is used to adjust the contrast and the color curve of the first image so that the color of the first image approaches white, obtaining a second image;
the deformation module is used for acquiring a target image and carrying out perspective deformation processing on the target image according to the image to be processed to obtain a third image;
the replacement module is used to perform distortion replacement on the third image with the first image as a replacement map to obtain a fourth image;
the first mixing module is used for taking the fourth image as input and mixing the fourth image with the second image to obtain a fifth image;
and the second mixing module is used for mixing the fifth image as an upper-layer image and the original image as a lower-layer image, and outputting a final image.
CN202011596859.5A 2020-12-29 2020-12-29 Method and device for image adaptation VI environment Active CN112634165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011596859.5A CN112634165B (en) 2020-12-29 2020-12-29 Method and device for image adaptation VI environment


Publications (2)

Publication Number Publication Date
CN112634165A true CN112634165A (en) 2021-04-09
CN112634165B CN112634165B (en) 2024-03-26

Family

ID=75287508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011596859.5A Active CN112634165B (en) 2020-12-29 2020-12-29 Method and device for image adaptation VI environment

Country Status (1)

Country Link
CN (1) CN112634165B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020158877A1 (en) * 2000-11-22 2002-10-31 Guckenberger Ronald James Shadow buffer control module method and software construct for adjusting per pixel raster images attributes to screen space and projector features for digital wrap, intensity transforms, color matching, soft-edge blending and filtering for multiple projectors and laser projectors
CN106612397A (en) * 2016-11-25 2017-05-03 努比亚技术有限公司 Image processing method and terminal
CN110210400A (en) * 2019-06-03 2019-09-06 上海眼控科技股份有限公司 A kind of form document detection method and equipment
CN110458787A (en) * 2019-08-09 2019-11-15 武汉高德智感科技有限公司 A kind of image interfusion method, device and computer storage medium
CN110473159A (en) * 2019-08-20 2019-11-19 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110555796A (en) * 2019-07-24 2019-12-10 广州视源电子科技股份有限公司 image adjusting method, device, storage medium and equipment
CN111325700A (en) * 2020-02-26 2020-06-23 无锡久仁健康云科技有限公司 Multi-dimensional fusion algorithm and system based on color images
CN111489322A (en) * 2020-04-09 2020-08-04 广州光锥元信息科技有限公司 Method and device for adding sky filter to static picture
CN111510691A (en) * 2020-04-17 2020-08-07 Oppo广东移动通信有限公司 Color interpolation method and device, equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAOTING FAN et al.: "Stereoscopic Image Stitching via Disparity-Constrained Warping and Blending", IEEE Transactions on Multimedia, page 655
CHEN Chunxiang et al.: "Image fusion based on color space and wavelet transform", Journal of Guilin University of Technology, vol. 27, no. 3, pages 417-421
MA Mingxing: "Research and design of graphics rendering and visual communication in VR systems", China Masters' Theses Full-text Database (Information Science and Technology), pages 138-4278


Similar Documents

Publication Publication Date Title
US9508185B2 (en) Texturing in graphics hardware
EP1204073B1 (en) Image generation method and apparatus
KR101049928B1 (en) Method, terminal and computer-readable recording medium for generating panoramic images
Sen Silhouette maps for improved texture magnification
US7239314B2 (en) Method for 2-D animation
CN108805090B (en) Virtual makeup trial method based on planar grid model
TW200842758A (en) Efficient 2-D and 3-D graphics processing
Korkalo et al. Light-weight marker hiding for augmented reality
US20170200302A1 (en) Method and system for high-performance real-time adjustment of one or more elements in a playing video, interactive 360° content or image
CN110248242B (en) Image processing and live broadcasting method, device, equipment and storage medium
US11276150B2 (en) Environment map generation and hole filling
US20100066732A1 (en) Image View Synthesis Using a Three-Dimensional Reference Model
US7280117B2 (en) Graphical user interface for a keyer
JP3467725B2 (en) Image shadow removal method, image processing apparatus, and recording medium
US20200118253A1 (en) Environment map generation and hole filling
CN111489322A (en) Method and device for adding sky filter to static picture
CN108596992B (en) Rapid real-time lip gloss makeup method
WO2023169121A1 (en) Image processing method, game rendering method and apparatus, device, program product, and storage medium
CN112634165B (en) Method and device for image adaptation VI environment
US20100194772A1 (en) Image display using a computer system, including, but not limited to, display of a reference image for comparison with a current image in image editing
GB2369541A (en) Method and apparatus for generating visibility data
Borshukov New algorithms for modeling and rendering architecture from photographs
JP2973413B2 (en) Illuminance calculation method and display device for computer graphics
CN112150387B (en) Method and device for enhancing stereoscopic impression of five sense organs on human images in photo
Borg et al. Fast high definition video rendering on mobile devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant