CN113888394A - Image deformation method and device and electronic equipment - Google Patents

Image deformation method and device and electronic equipment

Info

Publication number
CN113888394A
Authority
CN
China
Prior art keywords
image
mapping
target
edge
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111112770.1A
Other languages
Chinese (zh)
Inventor
李为
刘奎龙
杨昌源
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202111112770.1A
Publication of CN113888394A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0276 Advertisement creation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/02 Affine transformations


Abstract

The embodiment of the application discloses an image deformation method, an image deformation device and electronic equipment, wherein the method comprises the following steps: determining an original image, and identifying image main content to be subjected to deformation processing from the original image; dividing the image main content into a plurality of initial polygonal image blocks, and performing boundary expansion processing on the initial polygonal image blocks to obtain a plurality of edge-extended polygonal image blocks; mapping the vertexes of the edge-extended polygonal image blocks into the deformed target image by using a first transformation model; and mapping pixel points in the edge-extended polygonal image blocks into the target image by using a second transformation model according to the position correspondence information of the vertexes before and after mapping. The embodiments of the application enable image deformation to be carried out more efficiently while avoiding the appearance of gaps in the new image.

Description

Image deformation method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image deformation method and apparatus, and an electronic device.
Background
In a commodity object information system, the characteristics of a commodity object can be conveyed to a user by expressing its dynamic attributes. For example, for a commodity object such as a shoe, characteristics such as the softness and flexibility of the sole can be expressed through images.
Such dynamic attribute information may be produced by shooting a video or the like, but this is costly for the merchant user. To save cost, an algorithm can instead be used to process an original static image and generate a motion picture, through which the dynamic attributes can then be expressed.
In the process of generating the motion picture, if characteristics such as the softness and flexibility of the sole are to be embodied, the original image needs to undergo deformation processing over multiple consecutive frames, and the generated multi-frame target images then form the motion picture. Some image deformation schemes exist in the prior art; for example, one scheme performs deformation processing on an image by using the moving least squares method. In this scheme, a plurality of anchor points are selected in the original image and assigned expected positions in each of the multiple frames of target images. Then, for each frame of target image, the position of each pixel of the original image in that target image is calculated with the moving least squares method, and each pixel value is projected into the target image at the calculated position.
This method can achieve smooth image deformation, but it has two disadvantages. On one hand, it must compute, pixel by pixel, the position of every pixel in the new image, so its efficiency is often low. On the other hand, after the deformation is completed, pixels in some regions of the new image may become denser while those in other regions become sparser. In sparse regions, gaps may appear between pixels: some areas of the new image show isolated pixel points with visible gaps between them, and the gap positions may be displayed as a background color such as black, which obviously degrades the display effect of the image.
Therefore, how to more efficiently implement image deformation and avoid the occurrence of gaps in new images becomes a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides an image deformation method, an image deformation device and electronic equipment, which can more efficiently realize image deformation and avoid the appearance of gaps in new images.
The application provides the following scheme:
an image warping method, comprising:
determining an original image, and identifying image main content to be subjected to deformation processing from the original image;
dividing the main content of the image into a plurality of initial polygonal image blocks, and performing boundary expansion processing on the initial polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
mapping the vertexes of the edge-extended polygonal image blocks to the deformed target image by using a first transformation model;
and mapping pixel points in the extended polygon image block into the target image by using a second transformation model according to the position corresponding relation information of the vertexes before and after mapping.
Wherein the segmenting the image subject content into a plurality of initial polygonal image blocks comprises:
and dividing the main content of the image into a plurality of initial triangular image blocks, and performing boundary expansion processing on the initial triangular image blocks to obtain a plurality of edge-expanded triangular image blocks.
Wherein, the dividing the image main content into a plurality of initial triangular image blocks comprises:
carrying out mesh division on the original image;
traversing all grids, reserving the grid vertexes containing the image main content and performing deduplication processing;
and calling a preset subdivision algorithm according to the reserved grid vertex to obtain the plurality of triangular image blocks.
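The three steps above can be sketched as follows for the vertex-retention part (a hypothetical helper, not part of the patent text; the binary-mask representation of the image main content and the `step` grid spacing are illustrative assumptions):

```python
def grid_vertices(mask, step):
    """Sample a regular grid over a binary subject mask and keep only
    the grid vertices that fall on the image main content (mask == 1).
    Duplicates are removed by collecting into a set."""
    kept = set()
    for y in range(0, len(mask), step):
        for x in range(0, len(mask[0]), step):
            if mask[y][x] == 1:
                kept.add((x, y))
    return sorted(kept)
```

The retained vertices would then be passed to the preset subdivision algorithm to obtain the triangular image blocks.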
Wherein, the performing the boundary expansion processing on the triangular image blocks to obtain a plurality of edge-expanded triangular image blocks includes:
the following processing is respectively carried out on each vertex of the triangular image block:
selecting a plurality of alternative points around the vertex according to the target distance to form an alternative point set;
selecting a first alternative point subset and a second alternative point subset from the alternative point set by respectively taking two edges where the vertexes are located as boundary lines, wherein when one edge is taken as the boundary line to select the alternative point subset, the vertex taking the edge as the opposite side is taken as a reference vertex, and the alternative points on the opposite side of the reference vertex are selected to form the alternative point subset;
determining a substitution point of the vertex according to the intersection of the first alternative point subset and the second alternative point subset;
and connecting the substitution points of the vertexes to obtain the edge-extended triangular image block.
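The candidate-point procedure above yields a substitution point for each vertex. A minimal sketch of the same idea, simplified by pushing each vertex outward from the triangle centroid by the target distance instead of intersecting the two candidate-point subsets (the helper name and this radial approximation are assumptions, not the patent's exact construction):

```python
import math

def expand_triangle(tri, d):
    """Approximate the boundary expansion: replace each vertex of `tri`
    with a substitution point pushed distance `d` away from the
    centroid, so neighbouring edge-extended triangles overlap."""
    cx = sum(p[0] for p in tri) / 3.0
    cy = sum(p[1] for p in tri) / 3.0
    expanded = []
    for x, y in tri:
        vx, vy = x - cx, y - cy
        norm = math.hypot(vx, vy) or 1.0
        expanded.append((x + d * vx / norm, y + d * vy / norm))
    return expanded
```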
Wherein the size of the target distance is related to the distance by which different image blocks may be stretched apart during the mapping process.
Wherein the second transformation model comprises an affine transformation model;
the mapping, by using the second transformation model, the pixel points in the extended polygon image block to the target image includes:
determining three groups of mapping points according to the position corresponding relation information of the vertexes of the edge-extended triangular image blocks before and after mapping;
and mapping the pixel points in the edge-extended triangular image block into the target image by utilizing an affine transformation model based on the three groups of mapping points.
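Given the three groups of mapping points, the affine transform that carries a triangle's interior pixels into the target image can be recovered by solving a 3x3 linear system. A self-contained sketch (helper names are illustrative; Cramer's rule is used so no external library is needed):

```python
def affine_from_triangles(src, dst):
    """Solve the affine map taking the three `src` vertices onto the
    three `dst` vertices: x' = a*x + b*y + c (and likewise for y')."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve(v0, v1, v2):
        # Cramer's rule on [[x0,y0,1],[x1,y1,1],[x2,y2,1]] [a,b,c]^T = v
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    ax = solve(*[p[0] for p in dst])  # coefficients for x'
    ay = solve(*[p[1] for p in dst])  # coefficients for y'
    return ax, ay
```

Every interior pixel of the edge-extended triangle can then be mapped with two multiply-adds per coordinate, which is where the efficiency gain over per-pixel moving least squares comes from.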
Wherein the first transformation model comprises a moving least squares transformation model;
the mapping the vertexes of the extended polygon image blocks to the deformed target image by using the first transformation model comprises:
establishing a mobile least square fitting function based on a plurality of anchor point positions set in the original image and the corresponding expected positions of the anchor points in the target image to be generated;
and mapping the vertex to the target image to be generated by utilizing the moving least square fitting function so as to determine a position mapping result of the vertex in the target image to be generated.
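For the moving least squares fitting function, the following is an affine-MLS sketch in the style of Schaefer et al. (an assumed formulation; the patent does not fix the exact MLS variant) that maps a single vertex `v` given anchor positions in the original image and their expected positions in the target image:

```python
def mls_affine(v, anchors, targets, alpha=1.0):
    """Affine moving-least-squares deformation of point `v`.
    Weights fall off with distance to each anchor; f(v) = (v - p*) M + q*,
    where M = A^{-1} B is fitted from the weighted, centred anchors."""
    ws = []
    for px, py in anchors:
        d2 = (px - v[0]) ** 2 + (py - v[1]) ** 2
        ws.append(1.0 / (d2 ** alpha if d2 else 1e-12))
    sw = sum(ws)
    # Weighted centroids p* (anchors) and q* (targets).
    psx = sum(w * p[0] for w, p in zip(ws, anchors)) / sw
    psy = sum(w * p[1] for w, p in zip(ws, anchors)) / sw
    qsx = sum(w * q[0] for w, q in zip(ws, targets)) / sw
    qsy = sum(w * q[1] for w, q in zip(ws, targets)) / sw
    # Accumulate A = sum w * p_hat^T p_hat and B = sum w * p_hat^T q_hat.
    a11 = a12 = a22 = b11 = b12 = b21 = b22 = 0.0
    for w, p, q in zip(ws, anchors, targets):
        phx, phy = p[0] - psx, p[1] - psy
        qhx, qhy = q[0] - qsx, q[1] - qsy
        a11 += w * phx * phx; a12 += w * phx * phy; a22 += w * phy * phy
        b11 += w * phx * qhx; b12 += w * phx * qhy
        b21 += w * phy * qhx; b22 += w * phy * qhy
    det = a11 * a22 - a12 * a12 or 1e-12
    # M = A^{-1} B, with A symmetric 2x2 (row-vector convention).
    m11 = (a22 * b11 - a12 * b21) / det
    m12 = (a22 * b12 - a12 * b22) / det
    m21 = (a11 * b21 - a12 * b11) / det
    m22 = (a11 * b22 - a12 * b12) / det
    vx, vy = v[0] - psx, v[1] - psy
    return (vx * m11 + vy * m21 + qsx, vx * m12 + vy * m22 + qsy)
```

In the scheme above, this function would be evaluated only at the vertices of the edge-extended polygons, not at every pixel.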
Wherein, the method further includes:
and if a plurality of pixel points are mapped to the same position in the target image, selecting one with lower transparency from the plurality of pixel points to map to the position.
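This collision rule can be sketched as follows (the tuple layout and the convention that a lower transparency value means more opaque are assumptions about the channel representation):

```python
def composite(mapped):
    """`mapped` is a list of (position, rgb, transparency) tuples after
    mapping; when several pixels land on the same target position, keep
    the one with lower transparency."""
    out = {}
    for pos, rgb, t in mapped:
        if pos not in out or t < out[pos][1]:
            out[pos] = (rgb, t)
    return out
```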
A method of generating a motion picture for a commodity object, comprising:
determining a target commodity object needing to generate a moving picture and a corresponding original image, and identifying image main body content to be subjected to deformation processing from the original image;
dividing the main content of the image into a plurality of original polygonal image blocks, and performing boundary expansion processing on the original polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
determining the positions of a plurality of anchor points from the image main content and the expected positions of the anchor points in the multi-frame target image respectively;
aiming at the multi-frame target image, respectively carrying out the following processing:
for a current target image, establishing a first transformation model according to the positions of a plurality of anchor points and the expected position, mapping the vertex of the extended polygon image block into the current target image by using the first transformation model, and mapping the pixel points in the extended polygon image block into the current target image by using a second transformation model according to the position corresponding relation information of the vertex before and after mapping so as to generate the current target image;
and generating a motion picture of the target commodity object according to the generated multi-frame target image.
Wherein, the determining the target commodity object needing to generate the motion picture and the corresponding original image comprises:
receiving page address information of a target page associated with the target commodity object;
and analyzing the image associated with the target page according to the page address information, and determining an image meeting target conditions as the original image.
Wherein, the method further includes:
providing an operation control for publishing the dynamic image;
and after receiving user operation through the operation control, publishing the generated dynamic image to the target page.
An image morphing apparatus comprising:
the image processing device comprises an original image determining unit, a processing unit and a processing unit, wherein the original image determining unit is used for determining an original image and identifying image main content to be subjected to deformation processing from the original image;
the edge expanding processing unit is used for dividing the main content of the image into a plurality of initial polygon image blocks and performing boundary expanding processing on the initial polygon image blocks to obtain a plurality of edge expanding polygon image blocks;
the first mapping unit is used for mapping the vertexes of the edge-extended polygonal image blocks into the deformed target image by utilizing a first transformation model;
and the second mapping unit is used for mapping the pixel points in the extended polygon image block into the target image by using a second transformation model according to the position corresponding relation information of the vertexes before and after mapping.
An apparatus for generating a motion picture for a commodity object, comprising:
the system comprises an original image determining unit, a dynamic image generating unit and a dynamic image generating unit, wherein the original image determining unit is used for determining a target commodity object needing to generate a dynamic image and a corresponding original image and identifying image main body content to be subjected to deformation processing from the original image;
the edge expanding processing unit is used for dividing the main content of the image into a plurality of original polygonal image blocks and performing boundary expanding processing on the original polygonal image blocks to obtain a plurality of edge expanding polygonal image blocks;
the anchor point position determining unit is used for determining the positions of a plurality of anchor points from the image main body content and the expected positions of the anchor points in the multi-frame target image respectively;
a target image generation unit, configured to perform the following processing for the multiple frames of target images respectively: for a current target image, establishing a first transformation model according to the positions of a plurality of anchor points and the expected position, mapping the vertex of the extended polygon image block into the current target image by using the first transformation model, and mapping the pixel points in the extended polygon image block into the current target image by using a second transformation model according to the position corresponding relation information of the vertex before and after mapping so as to generate the current target image;
and the moving picture generating unit is used for generating a moving picture of the target commodity object according to the generated multi-frame target image.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any of the preceding claims.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of the preceding claims.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
By the embodiments of the application, in the process of performing deformation processing on an image, the image main content in the original image can be divided into a plurality of initial polygonal image blocks, and each initial polygon can be subjected to boundary expansion processing to obtain an edge-extended polygonal image block. The vertices of such an edge-extended polygonal image block may then be mapped into the target image, in particular using the first transformation model. Combined with the second transformation model, the pixel points in the edge-extended polygonal image block can be mapped into the target image, thereby generating a specific target image. Because the polygonal image blocks undergo boundary expansion before their vertices are mapped, overlapping areas are created between different polygonal image blocks. Thus, even if the distance between different edge-extended polygonal image blocks increases during mapping, the gaps that the increased distance might otherwise create are compensated by the overlapping areas, improving the deformed image effect.
The vertices of the edge-extended polygons can be mapped with the moving least squares method, while the pixel points inside each edge-extended polygon only require a geometric transformation, such as an affine transformation, to complete their mapping into the target image. The number of moving least squares transformations is therefore greatly reduced, improving efficiency.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
fig. 3 to 9 are schematic diagrams of a plurality of states in an image processing process according to an embodiment of the present application;
FIG. 10 is a flow chart of a second method provided by embodiments of the present application;
FIG. 11 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
fig. 13 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
It should be noted that an image is composed of coordinate points with pixel values in a pixel coordinate system, and such coordinate points with pixel values are also referred to as pixel points. The image is subjected to distortion deformation processing, namely coordinate conversion is carried out on the pixel points, and specific pixel points are converted to other positions, so that a new image can be generated. In the new image, the effect after the warp deformation processing can be obtained.
The moving least squares method described in the background can achieve the above coordinate transformation. However, every pixel of the original image must be calculated separately; that is, the moving least squares calculation must be performed as many times as there are pixels in the original image. This process is time-consuming, so efficiency is low. In addition, it has the disadvantage that gaps may be generated between pixels.
In order to solve the above problem, one solution is to divide the original image into a plurality of image blocks, for example, to triangulate to obtain a plurality of triangular image blocks. In this way, the vertex of each triangle is respectively transformed by the moving least square method and is mapped into the target image, and the position mapping result of each vertex in the target image is obtained. And then, for the pixels inside the triangle, mapping to the target image can be completed in an affine transformation mode, and a position mapping result of each pixel inside the triangle in the target image is obtained. Thus, the two parts of position mapping results are combined to generate the final target image.
In this way, the moving least squares transformation only needs to be performed on the vertices of the triangles rather than on all the pixel points, so the number of moving least squares calculations is greatly reduced. For example, suppose a triangular image block contains 312 pixels. If the moving least squares transform were applied directly, this single image block alone would require 312 moving least squares transformations. With the combination of triangulation and affine transformation, the same block requires only 3 moving least squares transformations plus one affine transformation. Since one affine transformation takes far less time than 309 moving least squares transformations, overall efficiency improves.
In addition, in the process of carrying out image deformation processing by combining the moving least square method with triangulation and affine transformation, because the moving least square method transformation does not need to be carried out on each pixel point respectively, the problem of gaps among the pixel points can be solved. However, since the vertices of the specific image blocks are mapped by the moving least square transform, distances between partial image blocks may become closer to each other, and distances between partial image blocks may become farther from each other. Therefore, gaps may occur between different image blocks, which still affect the display effect of the generated target image.
Therefore, in the embodiments of the present application, further optimization is performed on top of the combination of the moving least squares method with triangulation and affine transformation, so as to solve the problem of gaps between different image blocks. Specifically, after dividing the original image into a plurality of triangular image blocks (quadrilaterals and other polygons are also possible), boundary expansion processing may first be performed on each triangle to obtain a plurality of edge-extended triangles. In the original image, different edge-extended triangles may then overlap one another. Next, the moving least squares transformation is applied to the vertices of the edge-extended triangles, and an affine transformation is applied to the pixel points inside each edge-extended triangle, completing the mapping of all pixel points and generating the target image. Because different edge-extended triangles overlap in the original image, even if the distances between some triangles grow after the transformation (the growth during transformation is usually smaller than the expansion applied during edge extension), gaps between image blocks can be avoided. Moreover, the workload of the edge-expansion processing is relatively small, so its impact on processing efficiency is almost negligible.
By the method provided by the embodiment of the application, the step of edge expanding processing is added, and the moving least square method conversion is carried out based on the edge expanding processing result, so that the image quality after conversion can be improved on the basis of ensuring the processing efficiency, and the occurrence of gaps between image blocks is avoided.
From a technical architecture perspective, the embodiments of the present application may take a variety of specific forms. For example, in one embodiment, an image deformation processing tool may be provided. The tool may be an offline tool in the form of a client program: a user installs it and inputs the image to be processed, and the tool performs the processing to generate a deformed target image, or generates multiple frames of target images to produce a motion picture. The user may then apply the processed target images or motion pictures to a variety of scenes, for example placing them into a page.
Alternatively, as shown in fig. 1, an image deformation processing system may be provided for merchant/seller users, and the system may also communicate with a specific commodity object information system. If a merchant/seller user needs to generate a motion picture for a published commodity object to express properties such as flexibility and softness, the detail page link of the commodity object can be input into the image deformation processing system. The system then selects from the detail page an image suitable as the original image and, after performing the deformation processing according to the method provided in the embodiments of the present application, generates the motion picture. Alternatively, the merchant/seller user may directly input a picture of the commodity object for the system to turn into a motion picture, and so on. After the motion picture is generated, a preview function can be provided so that the user can check the generated effect, and a download function can be provided so that the user can download and use it. Specific uses include making the motion picture the main picture of the commodity object, or publishing it to the detail page of the commodity object, for example in the main picture resource position of the detail page.
The following describes in detail specific implementations provided in embodiments of the present application.
Example one
First, the first embodiment provides an image deformation method, and referring to fig. 2, the method may include:
s201: determining an original image, and identifying image main body content to be subjected to deformation processing from the original image.
The original image may be input directly by a user, or the user may specify the address of a page, in which case the image processing tool or system automatically analyzes the pictures in the page and determines a suitable original image, and so on. During automatic analysis, the judgment can be made according to preset characteristics and other information; for example, a suitable original image may conform to the following: it takes a specific commodity object as the image main content; the shooting angle of the main content meets the requirements (for example, if the bendability of a shoe sole is to be expressed, a side view of the shoe can be selected as the original image); and it does not contain excessive redundant content (for example, images with models or with overly complex backgrounds can be filtered out). In a specific implementation, if the original image is input by a user, an example can be shown to help the user provide an original image that meets the requirements.
After the original image is determined, the image main body content to be subjected to deformation processing can be identified. For example, as shown in fig. 3, the original image is a photograph of a certain shoe, a part belonging to the shoe can be identified as the main content of the image, and other parts belonging to the background do not need to participate in the subsequent calculation process. The image main content identification algorithm can be various, including a specific matting algorithm for identifying a main edge, and then using the content within the edge as the image main content, and the like, so that the existing algorithm can be called to realize the identification.
It should be noted that, in the original image, a specific pixel point may generally have RGB three-channel property. However, after the image main content identification process is performed, the specifically identified pixel points belonging to the image main content may have attributes of four channels, and the transparency channel is added based on the original RGB. The transparency information may also be used in subsequent processing.
In addition, in specific implementation, in order to achieve better processing effect, some preprocessing may be performed after the subject content identification is completed. For example, the image subject content may be rotated to make its position in the original image more suitable for performing subsequent deformation processing, or to obtain a more easily-obtained deformation effect. For example, if the original image is a photograph of "shoes", the main image content of "shoes" in the original image may be rotated to make the toe of the shoes face downward, so that the subsequently generated motion picture may simulate the process of gradually bending the sole of the shoe in the process of stepping down from the toe of the shoe when the shoe is worn.
In addition, the width of the boundary of the original image can be adjusted, and specifically, this step can be performed if necessary. For example, if the area occupied by the image main content in the original image is large, there may not be much remaining space in the original image in the up-down or left-right direction. At this time, if the deformation processing is directly performed based on the original image, the deformed image main body content may exceed the boundary of the original image. Therefore, in an optional manner, if the area occupied by the image main content in the original image is relatively large, the width of the boundary of the original image may also be adjusted, for example, some background pixels may be supplemented in the original image, so that the boundary is widened, and so on.
S202: and dividing the main content of the image into a plurality of initial polygon image blocks, and performing boundary expansion processing on the initial polygon image blocks to obtain a plurality of edge-expanded polygon image blocks.
After the specific image subject content is identified from the original image, it may be divided into a plurality of initial polygonal image blocks. The specific polygons may include triangles, quadrangles, and the like. In a preferred mode, triangles are used. That is, the image subject content may be divided into a plurality of triangular image blocks, and the triangular image blocks are then subjected to boundary expansion processing to obtain a plurality of edge-expanded triangular image blocks.
Specifically, the triangulation may be performed in multiple ways; for example, in one manner, Delaunay triangulation may be used (of course, other triangulation algorithms may also be used). Delaunay triangulation can be defined as follows: let V be a finite point set in the two-dimensional real domain, let an edge e be a closed line segment whose endpoints are points of the set, and let E be a set of such edges. Then a triangulation T = (V, E) of the point set V is a planar graph G that satisfies the following conditions: a. no edge in the graph contains any point of the set other than its endpoints; b. no two edges intersect; c. all faces in the graph are triangular, and the union of all triangular faces is the convex hull of the scattered point set V.
That is, for a known set of points, multiple triangles may be formed using a delaunay triangulation algorithm. Therefore, before the image subject content is triangulated by using a specific delaunay triangulation algorithm, a point set can be selected in the image subject content first, and then the point set is input into the algorithm, so that the triangulation result can be output.
There are various ways to obtain the above point set. For example, in one approach, the original image may first be divided into a uniform grid, where the grid width can be adaptively configured according to the area of the image subject content: the larger the area occupied by the subject content in the original image, the larger the grid width may be; conversely, the smaller the area, the smaller the grid width, and so on. In the example shown in fig. 3 provided in the embodiment of the present application, the grid width may be set to 25 pixels, in which case the grid division result may be as shown in fig. 4.
After the grid division is completed, all grid cells can be traversed, and the vertices of the cells containing image subject content are retained and deduplicated. That is, after the original image is divided into a grid, some cells fall on the image subject content and some do not. Therefore, the cells that fall on the image subject content can be selected, and their vertices retained and deduplicated to obtain a point set, as shown in fig. 5, for example. With the retained grid vertices as the point set, the Delaunay triangulation algorithm can then be invoked to obtain a plurality of triangular image blocks, as shown in fig. 6, for example.
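A minimal sketch of this point-set construction and subdivision, assuming the identified subject content is available as a boolean mask and using `scipy.spatial.Delaunay` as the triangulation algorithm (one possible choice; the embodiment does not mandate a library):

```python
import numpy as np
from scipy.spatial import Delaunay

def subject_triangulation(mask, grid_w):
    """Collect the (deduplicated) vertices of grid cells that contain
    subject pixels, then Delaunay-triangulate them. `mask` is a boolean
    H x W array marking the identified image subject content."""
    h, w = mask.shape
    points = set()
    for y in range(0, h, grid_w):
        for x in range(0, w, grid_w):
            cell = mask[y:y + grid_w, x:x + grid_w]
            if cell.any():                          # cell falls on subject content
                y2, x2 = min(y + grid_w, h), min(x + grid_w, w)
                points.update({(x, y), (x2, y), (x, y2), (x2, y2)})
    pts = np.array(sorted(points), dtype=float)     # deduplicated vertex set
    return pts, Delaunay(pts).simplices             # triangles as vertex indices

mask = np.zeros((50, 50), dtype=bool)
mask[10:40, 10:40] = True                           # a square "subject"
pts, tris = subject_triangulation(mask, grid_w=25)
```

With a 50 x 50 mask and a 25-pixel grid, all four cells touch the subject, so the point set is the 3 x 3 lattice of cell corners.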
After the image subject content is divided into a plurality of image blocks, in the embodiment of the present application, the moving least squares transformation is not performed directly. Instead, the polygons are first subjected to boundary expansion processing, that is, the boundaries of the polygons are expanded outward, so that partial overlap may occur between different polygons.
There may be multiple ways to perform the boundary expansion processing on a specific polygon. Taking a triangle as an example, the following processing may be performed on each vertex of the triangular image block: first, a plurality of alternative points are selected around the vertex according to a target distance, forming an alternative point set; then, with each of the two edges at which the vertex is located serving in turn as a boundary line, a first alternative point subset and a second alternative point subset are selected from the alternative point set, where, when selecting with one edge as the boundary line, the vertex for which that edge is the opposite side serves as the reference vertex, and the alternative points on the opposite side of the reference vertex are selected to form the subset; next, a substitute point for the vertex is determined from the intersection of the first and second alternative point subsets; finally, the substitute points of all vertices are connected to obtain the edge-expanded triangular image block.
For example, as shown in fig. 7, assuming ABC is a triangle obtained from the subdivision, the boundary expansion processing can be performed as follows:
a) for the vertex A, alternative points are selected according to the input distance d; taking d = 1 pixel as an example, the selected alternative points may include points p1-p8. That is, the boundary expansion processing may be performed on the polygonal image block according to a preset target distance d. Regarding the value of d, it should be neither too large nor too small: if it is too large, the area of each triangular image block increases noticeably, which affects the smoothness of the bent region of the image after the final deformation processing; if it is too small, it may not be sufficient to compensate for the gaps formed when the image blocks are pulled apart. Therefore, in a specific implementation, d may be set slightly larger than the maximum distance by which the image blocks may be pulled apart by the deformation processing. For example, if the maximum pulled-apart distance is two pixels, d may be three pixels, and so on.
b) with the line l1 formed by the edge AB as the boundary (the edge AB is opposite the vertex C, so the vertex C serves as the reference vertex), the alternative points on the opposite side of the vertex C are retained, namely p1, p2, p3 and p8;
c) with the line l2 formed by the edge AC as the boundary, the alternative points on the opposite side of the vertex B are retained from the result of b), leaving only p8;
d) if only one alternative point remains (p8 in this example), this point may serve as the substitute point for the vertex A. Note that, after the value of d is set, the same value of d is used for the edge expansion processing of every triangle. It may therefore happen that, for a certain triangle, no alternative point survives the selection described above, indicating that d is relatively small for that triangle; since it is inadvisable to modify the value of d while the algorithm is running, the vertex A can then simply serve as its own substitute point. Conversely, if multiple alternative points remain for a triangle, the point farthest from the vertex A may be selected as the substitute point;
e) steps a) to d) are repeated for the vertices B and C, and the triangle formed by the three substitute points is the edge-expanded triangular image block, as shown by the dashed triangle in fig. 7.
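Steps a) to e) can be sketched for a single triangle as below. This is an illustrative reading of the procedure, not the embodiment's exact implementation: the 8 alternative points are taken at offsets of d along the axes and diagonals, and candidates lying exactly on a boundary line are kept (a pragmatic tie-breaking choice of ours):

```python
import numpy as np

def _side(p, a, b):
    """Sign of the cross product: which side of the line a->b point p lies on."""
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

def expand_triangle(tri, d):
    """Boundary-expand one triangle: each vertex is replaced by a nearby
    alternative point lying beyond both of its incident edges (steps a-d)."""
    tri = np.asarray(tri, dtype=float)
    offsets = np.array([(dx, dy) for dx in (-d, 0.0, d) for dy in (-d, 0.0, d)
                        if (dx, dy) != (0.0, 0.0)])          # the 8 p1-p8 points
    out = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        keep = [p for p in a + offsets
                if _side(p, a, b) != _side(c, a, b)          # beyond edge a-b
                and _side(p, a, c) != _side(b, a, c)]        # beyond edge a-c
        # no survivor: d is too small here, the vertex substitutes for itself;
        # several survivors: take the one farthest from the vertex
        out.append(max(keep, key=lambda p: np.linalg.norm(p - a)) if keep else a)
    return np.array(out)

tri = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
big = expand_triangle(tri, d=1.0)
```

For this right triangle, each vertex moves diagonally outward by one pixel, producing the enlarged dashed triangle.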
S203: and mapping the vertexes of the edge-extended polygonal image blocks to the deformed target image by using a first transformation model.
After the polygons have been expanded, a first transformation may be performed on the vertices of the expanded polygons to determine their positions in the target image, where the target image is the deformed image to be generated. In a specific implementation, the first transformation can be realized in various ways; for example, a moving least squares model can be used. To perform the moving least squares transformation, a plurality of anchor point positions in the original image and the corresponding expected positions of these anchor points in the target image to be generated may be specified (this process is independent of the aforementioned triangulation and may therefore be performed in parallel). The anchor point positions may be selected according to the shape of the image subject content, the characteristics of the required deformation, and the like; the expected position is the position at which an anchor point should appear in the target image after the desired transformation. In a specific implementation, the anchor point positions are key positions for conveying the deformation effect. For example, as shown in fig. 8, if the original image shows a shoe and the deformation processing should show the sole bending softly, the anchor points may be S1 at the toe, S2 at the front third of the sole where a sole usually bends, and S3 at the heel, and their expected positions in the target image may be t1, t2 and t3, respectively.
It should be noted that, in a specific implementation, the bending of the shoe sole may need to be expressed through a motion picture composed of multiple frames of target images. Multiple target image frames may therefore be generated from one original image: the anchor point positions in the original image stay the same, but the corresponding expected positions differ from frame to frame, so as to present a dynamic effect of gradual bending. Accordingly, the expected position of each anchor point can be specified separately for each target image frame. Both the selection of anchor point positions in the original image and the determination of their expected positions in each frame can be automated algorithmically, and details are not described here.
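One simple way to obtain per-frame expected positions, offered here as an assumption rather than the embodiment's prescribed algorithm, is to interpolate each anchor between its original position and its final expected position:

```python
import numpy as np

def anchor_tracks(anchors, targets, n_frames):
    """Linearly interpolate each anchor from its position in the original
    image to its final expected position, yielding one set of expected
    positions per target frame (frame 0 = original, last frame = full bend)."""
    anchors = np.asarray(anchors, dtype=float)
    targets = np.asarray(targets, dtype=float)
    ts = np.linspace(0.0, 1.0, n_frames)
    return np.array([(1 - t) * anchors + t * targets for t in ts])

# Three shoe anchors (toe, mid-sole, heel) moving over 5 frames; the
# coordinates are invented for illustration.
src = [(10, 90), (40, 95), (80, 95)]
dst = [(25, 60), (45, 90), (80, 95)]     # toe lifts, heel stays fixed
tracks = anchor_tracks(src, dst, n_frames=5)
```

A non-linear easing schedule could be substituted for `np.linspace` to make the bend accelerate or decelerate.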
After the anchor point positions in the original image and the expected positions in the target image are determined, multiple pairs of mapping points can be established from them, and a moving least squares fitting function is constructed based on these mapping point pairs. In the moving least squares method, a fitting curve is established near a group of nodes at different positions, and each node has a set of coefficients defining the shape of the fitting curve near that position; thus, when calculating the fitting curve near a certain node, only the coefficient values of that node need to be computed, and the determined mapping point pairs can be used to determine these coefficient values. The specific implementation is not the focus of the embodiments of the present application and is therefore not described in detail here.
After the moving least squares fitting function is constructed, the vertices of the edge-expanded polygons can be mapped into the target image to be generated using the fitting function, thereby determining the position mapping result of each vertex in the target image. That is, in the embodiment of the present application, the moving least squares transformation is not performed directly on the vertices of the subdivided polygons, but on the vertices of the polygons whose boundary expansion has been completed.
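The embodiment does not fix which moving least squares variant is used; as one concrete possibility, the affine MLS formulation (in the style of Schaefer et al.) maps a vertex v from weighted centroids of the anchor pairs:

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0, eps=1e-9):
    """Map point v with an affine moving-least-squares deformation defined
    by anchor positions p and their expected positions q (each n x 2).
    This is one possible MLS variant, assumed for illustration."""
    v = np.asarray(v, float)
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)  # per-anchor weights
    p_star = w @ p / w.sum()                                 # weighted centroids
    q_star = w @ q / w.sum()
    ph, qh = p - p_star, q - q_star
    M = np.linalg.solve((ph * w[:, None]).T @ ph,            # 2x2 normal system
                        (ph * w[:, None]).T @ qh)
    return (v - p_star) @ M + q_star

p = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
q = p + np.array([2.0, 3.0])          # all anchors translated by (2, 3)
moved = mls_affine(np.array([4.0, 4.0]), p, q)
```

When every anchor undergoes the same translation, the solved matrix M is the identity and any query point is translated by the same amount, which is a useful sanity check.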
S204: and mapping pixel points in the extended polygon image block into the target image by using a second transformation model according to the position corresponding relation information of the vertexes before and after mapping.
After the vertices of the edge-expanded polygons are mapped into the target image, their positions in the target image are known. The other pixel points inside each polygon can then be mapped to the target image according to the vertex mapping results, that is, the positions of those interior pixel points in the target image are determined.
Specifically, when mapping of other pixel points inside the polygon is performed, the mapping can be realized through various specific transformation models. For example, in one mode, the specific second transformation model may include a geometric transformation model, that is, a geometric transformation model may be used to map pixel points in the extended polygon image block into the target image. The specific geometric transformation model may include rigid transformation, affine transformation, perspective transformation, non-linear transformation, and the like. Specifically, the corresponding geometric transformation model may be determined according to the number of edges of the specifically divided polygon, and the like. For example, an affine transformation model may be used if a triangle subdivision is performed, a perspective transformation model may be used if a quadrilateral subdivision is performed, and so on.
Specifically, taking triangular subdivision as an example, three pairs of mapping points may be determined from the position correspondence of the vertices of an edge-expanded triangular image block before and after mapping; based on these three pairs, the pixel points inside the edge-expanded triangular image block are mapped into the target image using an affine transformation model. In geometry, an affine transformation is the process of applying a linear transformation to one vector space followed by a translation, transforming it into another vector space.
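Three vertex pairs determine the six coefficients of a 2D affine transform exactly, so the interior mapping can be sketched as follows (function names are illustrative):

```python
import numpy as np

def affine_from_triangle(src, dst):
    """Solve the 2x3 affine matrix mapping the three source vertices of an
    (edge-expanded) triangle onto their mapped positions in the target."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((3, 1))])      # rows [x, y, 1]
    return np.linalg.solve(A, dst).T           # M such that dst = M @ [x, y, 1]

def apply_affine(M, pts):
    """Map interior pixel coordinates through the affine transform."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

src = [(0, 0), (10, 0), (0, 10)]
dst = [(1, 2), (11, 2), (1, 12)]               # the same triangle shifted
M = affine_from_triangle(src, dst)
inside = apply_affine(M, [(2, 3), (5, 1)])
```

For a quadrilateral subdivision, the analogous step would instead solve an 8-parameter perspective transform from four vertex pairs.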
After the vertices and the interior pixel points of the edge-expanded polygonal image blocks have all been mapped into the target image, the correspondence between the positions of the subject-content pixel points in the original image and their positions in the target image is known, and the pixel values can be filled into the corresponding positions in the target image to generate it. For example, if the original image and the anchor point positions are as shown in fig. 8, the target image generated after the deformation processing may be as shown in fig. 9; it can be seen that the subject image has undergone bending deformation. The final result may be a dynamic effect image that shows the whole process of the bending deformation, of which fig. 9 is a snapshot at one moment.
It should be noted here that, because the polygonal image blocks are subjected to edge expansion processing, some pixels may overlap between the expanded polygons, that is, the same pixel may belong to several different edge-expanded polygons. After the transformation, partial overlap may likewise occur between different image blocks in the target image, especially at block edges; overlap in the target image mainly means that two different pixel points of the original image are mapped to the same position in the target image. In a specific implementation, this situation can also be handled: if two pixel points are mapped to the same position in the target image, the one with the lower transparency may be selected and displayed at that position, and so on. The transparency information of a pixel point may be obtained in the aforementioned image subject content identification process, or in other manners, which is not limited herein.
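This collision rule, keeping the lower-transparency (higher-alpha) pixel, might be realized as in the sketch below, where the target is filled from a list of already-mapped pixels (the data layout is an assumption):

```python
import numpy as np

def composite(target_shape, mapped):
    """Fill the target image from (x, y, rgba) mapped pixels, resolving
    collisions by keeping the pixel with the lower transparency, i.e. the
    higher alpha value."""
    h, w = target_shape
    out = np.zeros((h, w, 4), dtype=np.uint8)
    for x, y, rgba in mapped:
        if rgba[3] > out[y, x, 3]:       # more opaque pixel wins the slot
            out[y, x] = rgba
    return out

pixels = [
    (2, 2, (255, 0, 0, 200)),            # fairly opaque red
    (2, 2, (0, 255, 0, 120)),            # more transparent green, same slot
]
img = composite((4, 4), pixels)
```

The red pixel survives the collision regardless of the order in which the two mapped pixels arrive.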
In summary, according to the embodiment of the present application, during the deformation processing, the image subject content in the original image may be divided into a plurality of initial polygonal image blocks, and each initial polygon is subjected to boundary expansion processing to obtain edge-expanded polygonal image blocks. The vertices of the edge-expanded polygonal image blocks can then be mapped into the target image using the first transformation model, and, in combination with the second transformation model, the pixel points inside the edge-expanded polygonal image blocks can be mapped into the target image to generate it. Because the boundary expansion produces overlapping regions between different polygonal image blocks, even if the distance between edge-expanded blocks increases during the mapping, the overlapping regions compensate for the gaps that the increased distance would otherwise create, improving the quality of the deformed image.
Example two
The second embodiment applies the algorithm of the first embodiment to the scenario of generating dynamic effects for commodity objects, and provides a method for generating a motion picture for a commodity object. Referring to fig. 10, the method may include:
s1001: determining a target commodity object needing to generate a moving picture and a corresponding original image, and identifying image main body content to be subjected to deformation processing from the original image;
s1002: dividing the main content of the image into a plurality of original polygonal image blocks, and performing boundary expansion processing on the original polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
s1003: determining the positions of a plurality of anchor points from the image main content and the expected positions of the anchor points in the multi-frame target image respectively;
s1004: aiming at the multi-frame target image, respectively carrying out the following processing:
for a current target image, establishing a first transformation model according to the positions of a plurality of anchor points and the expected position, and mapping the vertex of the extended polygon image block into the current target image by using the first transformation model; mapping pixel points in the extended polygon image block to the current target image by using a second transformation model according to the position corresponding relation information of the vertexes before and after mapping so as to generate the current target image;
s1005: and generating a motion picture of the target commodity object according to the generated multi-frame target image.
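The final assembly of frames into a motion picture (S1005) could, for instance, use Pillow's animated-GIF export; this is a sketch of one option, not the embodiment's mandated output format, and the dummy frames merely stand in for the deformed target images:

```python
import numpy as np
from PIL import Image

def frames_to_gif(frames, path, ms_per_frame=80):
    """Assemble the generated target-image frames (H x W x 3 uint8 arrays)
    into a looping animated GIF, e.g. for a commodity detail page."""
    imgs = [Image.fromarray(f) for f in frames]
    imgs[0].save(path, save_all=True, append_images=imgs[1:],
                 duration=ms_per_frame, loop=0)

# Five dummy frames with a brightening gray level stand in for the
# successive bending states of the shoe image.
frames = [np.full((32, 32, 3), 40 * i, dtype=np.uint8) for i in range(5)]
frames_to_gif(frames, "bend.gif")
```

Other containers (APNG, WebP, short MP4) would serve equally well if the resource slot of the detail page supports them.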
Specifically, the target commodity object for which a motion picture needs to be generated and the corresponding original image can be determined in multiple ways. For example, the user may directly input the original image; alternatively, the user may input the page address information of a target page associated with the target commodity object, for example the link address of the commodity object's detail page. The images associated with the target page can then be analyzed according to the page address information, and an image satisfying a target condition is determined as the original image.
In addition, in a specific implementation, the system may also communicate with a commodity object information system and provide an operation control for publishing the motion picture, so that after a user operation is received through the control, the generated motion picture is published to the target page, for example to the commodity main-image resource slot in a detail page, and so on.
For the parts not described in detail in the second embodiment, reference may be made to the description in the first embodiment, and details are not repeated here.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the solution described herein only within the scope permitted by the applicable laws and regulations of the relevant country and subject to their requirements (for example, with the user's explicit consent, after informing the user, etc.).
Corresponding to the first embodiment, the embodiment of the present application further provides an image morphing apparatus, referring to fig. 11, the apparatus may include:
an original image determining unit 1101 configured to determine an original image, and identify image subject content to be subjected to deformation processing from the original image;
an edge expansion processing unit 1102, configured to divide the main content of the image into a plurality of initial polygon image blocks, and perform boundary expansion processing on the initial polygon image blocks to obtain a plurality of edge expansion polygon image blocks;
a first mapping unit 1103, configured to map vertices of the extended polygon image block into a deformed target image using a first transformation model;
a second mapping unit 1104, configured to map, according to the information of the position correspondence between the vertices before and after mapping, the pixel points in the extended polygon image block to the target image by using a second transformation model.
The edge expansion processing unit may be specifically configured to:
and dividing the main content of the image into a plurality of initial triangular image blocks, and performing boundary expansion processing on the initial triangular image blocks to obtain a plurality of edge-expanded triangular image blocks.
Specifically, when the image main content is divided into a plurality of initial triangle image blocks, the edge extension processing unit may be specifically configured to:
carrying out mesh division on the original image;
traversing all grids, reserving the grid vertexes containing the image main content and performing deduplication processing;
and calling a preset subdivision algorithm according to the reserved grid vertex to obtain the plurality of triangular image blocks.
Specifically, the edge expansion processing unit may be specifically configured to:
the following processing is respectively carried out on each vertex of the triangular image block:
selecting a plurality of alternative points around the vertex according to the target distance to form an alternative point set;
selecting a first alternative point subset and a second alternative point subset from the alternative point set by respectively taking two edges where the vertexes are located as boundary lines, wherein when one edge is taken as the boundary line to select the alternative point subset, the vertex taking the edge as the opposite side is taken as a reference vertex, and the alternative points on the opposite side of the reference vertex are selected to form the alternative point subset;
determining a substitution point of the vertex according to the intersection of the first alternative point subset and the second alternative point subset;
and connecting the substitution points of the vertexes to obtain the edge-extended triangular image block.
Specifically, the size of the target distance is related to the distance by which different image blocks may be pulled apart during the mapping process.
The second mapping unit may specifically be configured to:
determining three groups of mapping points according to the position corresponding relation information of the vertexes of the edge-extended triangular image blocks before and after mapping;
and mapping the pixel points in the edge-extended triangular image block into the target image by utilizing an affine transformation model based on the three groups of mapping points.
Specifically, the first transformation model comprises a moving least square transformation model;
the first mapping unit may specifically be configured to:
establishing a mobile least square fitting function based on a plurality of anchor point positions set in the original image and the corresponding expected positions of the anchor points in the target image to be generated;
and mapping the vertex to the target image to be generated by utilizing the moving least square fitting function so as to determine a position mapping result of the vertex in the target image to be generated.
In addition, the apparatus may further include:
and the pixel point selection unit is used for selecting one with lower transparency from the plurality of pixel points to map to the same position in the target image if the plurality of pixel points are mapped to the same position in the target image.
Corresponding to the second embodiment, an embodiment of the present application further provides an apparatus for generating a motion picture for a commodity object, and referring to fig. 12, the apparatus may include:
an original image determining unit 1201, configured to determine a target commodity object for which a motion picture needs to be generated and a corresponding original image, and identify an image main content to be subjected to a deformation process from the original image;
an edge expansion processing unit 1202, configured to divide the image main content into a plurality of original polygon image blocks, and perform boundary expansion processing on the original polygon image blocks to obtain a plurality of edge expansion polygon image blocks;
an anchor point position determining unit 1203, configured to determine positions of a plurality of anchor points from the image main content, and expected positions of the anchor points in the multi-frame target image respectively;
a target image generating unit 1204, configured to perform the following processing for the multiple frames of target images, respectively: for a current target image, establishing a first transformation model according to the positions of a plurality of anchor points and the expected position, mapping the vertex of the extended polygon image block into the current target image by using the first transformation model, and mapping the pixel points in the extended polygon image block into the current target image by using a second transformation model according to the position corresponding relation information of the vertex before and after mapping so as to generate the current target image;
a motion picture generating unit 1205 is configured to generate a motion picture of the target commodity object according to the generated multi-frame target image.
In addition, the present application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method described in any of the preceding method embodiments.
And an electronic device comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of the preceding method embodiments.
FIG. 13 illustrates an architecture of an electronic device that may include, in particular, a processor 1310, a video display adapter 1311, a disk drive 1312, an input/output interface 1313, a network interface 1314, and memory 1320. The processor 1310, video display adapter 1311, disk drive 1312, input/output interface 1313, network interface 1314, and memory 1320 may be communicatively coupled via a communication bus 1330.
The processor 1310 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solution provided by the present Application.
The memory 1320 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1320 may store an operating system 1321 for controlling the operation of the electronic device 1300 and a Basic Input Output System (BIOS) for controlling its low-level operations. In addition, a web browser 1323, a data storage management system 1324, an image deformation processing system 1325, and the like may also be stored, where the image deformation processing system 1325 may be an application program that implements the operations of the foregoing steps in this embodiment of the present application. In general, when the technical solution provided by the present application is implemented by software or firmware, the relevant program code is stored in the memory 1320 and called by the processor 1310 for execution.
The input/output interface 1313 is used to connect an input/output module to realize information input and output. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The network interface 1314 is used for connecting a communication module (not shown in the figure) to realize the communication interaction between the device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1330 includes a path to transfer information between various components of the device, such as processor 1310, video display adapter 1311, disk drive 1312, input/output interface 1313, network interface 1314, and memory 1320.
It should be noted that although the above device only shows the processor 1310, the video display adapter 1311, the disk drive 1312, the input/output interface 1313, the network interface 1314, the memory 1320, the bus 1330 and the like, in a specific implementation the device may also include other components necessary for normal operation. Furthermore, it will be understood by those skilled in the art that the apparatus described above may include only the components necessary to implement the solution of the present application, and not necessarily all of the components shown in the figures.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The image deformation method, the image deformation apparatus, and the electronic device provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the descriptions of the above embodiments are intended only to help readers understand the method and its core ideas. Meanwhile, those skilled in the art may, following the ideas of the present application, make changes to the specific embodiments and the scope of application. In view of the foregoing, the contents of this specification should not be construed as limiting the present application.

Claims (12)

1. An image deformation method, comprising:
determining an original image, and identifying image main content to be subjected to deformation processing from the original image;
dividing the main content of the image into a plurality of initial polygonal image blocks, and performing boundary expansion processing on the initial polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
mapping the vertexes of the edge-extended polygonal image blocks to the deformed target image by using a first transformation model;
and mapping pixel points in the extended polygon image block into the target image by using a second transformation model according to the position corresponding relation information of the vertexes before and after mapping.
2. The method according to claim 1, wherein
dividing the image main content into a plurality of initial polygonal image blocks comprises:
dividing the image main content into a plurality of initial triangular image blocks, and performing boundary expansion processing on the initial triangular image blocks to obtain a plurality of edge-expanded triangular image blocks.
3. The method according to claim 2, wherein
performing boundary expansion processing on the triangular image blocks to obtain the plurality of edge-expanded triangular image blocks comprises:
performing the following processing on each vertex of each triangular image block:
selecting a plurality of candidate points around the vertex according to a target distance to form a candidate point set;
selecting a first candidate point subset and a second candidate point subset from the candidate point set by respectively taking the two edges on which the vertex lies as boundary lines, wherein, when a candidate point subset is selected with one edge as the boundary line, the vertex whose opposite side is that edge is taken as a reference vertex, and the candidate points lying on the opposite side of the boundary line from the reference vertex are selected to form the candidate point subset;
determining a substitute point for the vertex according to the intersection of the first candidate point subset and the second candidate point subset;
and connecting the substitute points of the vertexes to obtain an edge-expanded triangular image block.
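A rough sketch of the vertex-substitution scheme of claim 3, under two stated assumptions that the claim leaves open: the candidate points are sampled on a circle of the target distance around each vertex, and the substitute point is chosen from the intersection of the two subsets as the candidate farthest from the triangle's centroid.

```python
import numpy as np

def expand_triangle(tri, d, n_candidates=64):
    """Expand a triangle outward by replacing each vertex with a substitute
    point chosen from candidates at distance d around it.
    tri: (3, 2) array of vertex coordinates; returns a (3, 2) array."""
    tri = np.asarray(tri, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_candidates, endpoint=False)
    circle = d * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    centroid = tri.mean(axis=0)

    def side(a, b, pts):
        # Signed side of pts relative to the directed line a -> b (2D cross product).
        return (b[0] - a[0]) * (pts[..., 1] - a[1]) - (b[1] - a[1]) * (pts[..., 0] - a[0])

    out = np.empty_like(tri)
    for i in range(3):
        v, u, w = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        # Candidate points on a circle of radius d around the vertex.
        cand = v + circle
        # First subset: edge (v, u) as boundary line, reference vertex w;
        # second subset: edge (v, w) as boundary line, reference vertex u.
        # Keep candidates on the opposite side of each line from its reference vertex.
        keep = (side(v, u, cand) * side(v, u, w) < 0) & \
               (side(v, w, cand) * side(v, w, u) < 0)
        cset = cand[keep]
        # Assumed selection rule: the surviving candidate farthest from the centroid.
        out[i] = cset[np.argmax(np.linalg.norm(cset - centroid, axis=1))]
    return out
```

The intersection of the two half-planes is the exterior wedge at the vertex, so each substitute point moves the vertex outward by exactly the target distance while the claim's subset construction is preserved.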
4. The method according to claim 3, wherein
the magnitude of the target distance is related to the distance by which different image blocks may be stretched during the mapping process.
5. The method according to claim 2, wherein
the second transformation model comprises an affine transformation model; and
mapping the pixel points in the edge-expanded polygonal image blocks into the target image by using the second transformation model comprises:
determining three groups of mapping points according to position correspondence information of the vertexes of an edge-expanded triangular image block before and after mapping;
and mapping the pixel points in the edge-expanded triangular image block into the target image based on the three groups of mapping points by using the affine transformation model.
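The pixel mapping in claim 5 rests on a standard fact: three non-collinear point correspondences (here, a triangle's vertices before and after mapping) determine a unique 2D affine transform, which then carries every interior pixel across. A minimal NumPy sketch, with function names that are illustrative rather than taken from the patent:

```python
import numpy as np

def affine_from_three_pairs(src, dst):
    """Solve the 2x3 affine matrix A such that dst_i = A @ [x_i, y_i, 1]
    for three non-collinear point correspondences (the three groups of
    mapping points of claim 5)."""
    src = np.asarray(src, dtype=float)            # shape (3, 2)
    dst = np.asarray(dst, dtype=float)            # shape (3, 2)
    P = np.hstack([src, np.ones((3, 1))])         # homogeneous rows [x, y, 1]
    # Solve P @ A.T = dst for the 3x2 matrix A.T, then transpose.
    return np.linalg.solve(P, dst).T              # shape (2, 3)

def apply_affine(A, pts):
    """Map an (n, 2) array of pixel coordinates through the affine model."""
    pts = np.asarray(pts, dtype=float)
    return pts @ A[:, :2].T + A[:, 2]
```

In practice a library routine such as OpenCV's `cv2.getAffineTransform` plus `cv2.warpAffine` would do the same job per triangle; the explicit linear solve above just makes the mathematics visible.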
6. The method according to any one of claims 1 to 5, wherein
the first transformation model comprises a moving least squares transformation model; and
mapping the vertexes of the edge-expanded polygonal image blocks into the deformed target image by using the first transformation model comprises:
establishing a moving least squares fitting function based on a plurality of anchor point positions set in the original image and the corresponding expected positions of the anchor points in the target image to be generated;
and mapping the vertexes into the target image to be generated by using the moving least squares fitting function to determine position mapping results of the vertexes in the target image to be generated.
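Claim 6 does not spell out the fitting function. The sketch below assumes the affine variant of moving least squares image deformation (after Schaefer et al.) with inverse-distance weights, which is the usual choice for anchor-driven warping; the parameter `alpha` and the degenerate-anchor handling are implementation assumptions:

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0, eps=1e-8):
    """Map a single point v with a moving-least-squares affine deformation.
    p: (n, 2) anchor positions in the original image;
    q: (n, 2) expected anchor positions in the target image."""
    v = np.asarray(v, float)
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    d2 = np.sum((p - v) ** 2, axis=1)
    if np.any(d2 < eps):                    # v coincides with an anchor
        return q[np.argmin(d2)].copy()
    w = 1.0 / d2 ** alpha                   # weights grow near anchors
    p_star = w @ p / w.sum()                # weighted centroids
    q_star = w @ q / w.sum()
    ph = p - p_star
    qh = q - q_star
    # Best affine matrix M minimizing sum_i w_i * |ph_i M - qh_i|^2.
    M = np.linalg.solve((ph * w[:, None]).T @ ph, (ph * w[:, None]).T @ qh)
    return (v - p_star) @ M + q_star
```

Because the fit is recomputed for every query point with point-dependent weights, anchors pull their neighborhoods along while distant regions deform smoothly, which is exactly the behavior wanted for mapping block vertexes.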
7. The method of any of claims 1 to 5, further comprising:
if a plurality of pixel points are mapped to the same position in the target image, selecting, from the plurality of pixel points, the pixel point with the lowest transparency to be mapped to the position.
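The collision rule of claim 7 (when several source pixels land on the same target position, keep the least transparent one) can be sketched as a dictionary keyed by target position. The RGBA convention below, with an alpha of 255 meaning fully opaque, is an assumption, not part of the claim:

```python
def resolve_collisions(mapped_pixels):
    """mapped_pixels: iterable of (target_xy, rgba) tuples, where rgba[3]
    is the alpha channel (255 = fully opaque, 0 = fully transparent).
    Returns a dict from target position to the retained pixel."""
    target = {}
    for pos, rgba in mapped_pixels:
        prev = target.get(pos)
        # Lower transparency means higher alpha, so the higher alpha wins.
        if prev is None or rgba[3] > prev[3]:
            target[pos] = rgba
    return target
```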
8. A method for generating a motion picture for a commodity object, comprising:
determining a target commodity object needing to generate a moving picture and a corresponding original image, and identifying image main body content to be subjected to deformation processing from the original image;
dividing the main content of the image into a plurality of original polygonal image blocks, and performing boundary expansion processing on the original polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
determining positions of a plurality of anchor points from the image main content, and the expected positions of the anchor points in each of a plurality of frames of target images;
performing the following processing for each of the frames of target images:
for a current target image: establishing a first transformation model according to the positions of the plurality of anchor points and their expected positions; mapping the vertexes of the edge-expanded polygonal image blocks into the current target image by using the first transformation model; and mapping the pixel points in the edge-expanded polygonal image blocks into the current target image by using a second transformation model according to position correspondence information of the vertexes before and after mapping, so as to generate the current target image;
and generating a motion picture of the target commodity object according to the generated multi-frame target image.
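Claim 8 leaves open how the per-frame expected anchor positions are obtained. One simple assumption is to interpolate each anchor linearly from its position in the original image to a final expected position, yielding one set of anchor targets per frame for the per-frame transformation models:

```python
import numpy as np

def interpolate_anchor_tracks(anchors, expected, n_frames):
    """Linearly interpolate each anchor from its original position to its
    expected final position.  Returns a list of n_frames (n, 2) arrays:
    frame 0 equals the original anchors, the last frame equals `expected`."""
    anchors = np.asarray(anchors, dtype=float)
    expected = np.asarray(expected, dtype=float)
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * anchors + t * expected for t in ts]
```

Any other easing curve (ease-in/out, a loop that returns to the start) would slot in the same way; the point is only that each frame gets its own anchor targets before the two mapping stages run.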
9. An image deformation apparatus, comprising:
an original image determining unit, configured to determine an original image and identify image main content to be subjected to deformation processing from the original image;
an edge expansion processing unit, configured to divide the image main content into a plurality of initial polygonal image blocks and perform boundary expansion processing on the initial polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
a first mapping unit, configured to map the vertexes of the edge-expanded polygonal image blocks into the deformed target image by using a first transformation model;
and a second mapping unit, configured to map the pixel points in the edge-expanded polygonal image blocks into the target image by using a second transformation model according to position correspondence information of the vertexes before and after mapping.
10. An apparatus for generating a motion picture for a commodity object, comprising:
an original image determining unit, configured to determine a target commodity object for which a moving picture is to be generated and a corresponding original image, and to identify image main content to be subjected to deformation processing from the original image;
an edge expansion processing unit, configured to divide the image main content into a plurality of original polygonal image blocks and perform boundary expansion processing on the original polygonal image blocks to obtain a plurality of edge-expanded polygonal image blocks;
an anchor point position determining unit, configured to determine positions of a plurality of anchor points from the image main content and the expected positions of the anchor points in each of a plurality of frames of target images;
a target image generating unit, configured to perform the following processing for each of the frames of target images: for a current target image, establishing a first transformation model according to the positions of the plurality of anchor points and their expected positions, mapping the vertexes of the edge-expanded polygonal image blocks into the current target image by using the first transformation model, and mapping the pixel points in the edge-expanded polygonal image blocks into the current target image by using a second transformation model according to position correspondence information of the vertexes before and after mapping, so as to generate the current target image;
and a moving picture generating unit, configured to generate a moving picture of the target commodity object according to the generated frames of target images.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
12. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of claims 1 to 8.
CN202111112770.1A 2021-09-18 2021-09-18 Image deformation method and device and electronic equipment Pending CN113888394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111112770.1A CN113888394A (en) 2021-09-18 2021-09-18 Image deformation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111112770.1A CN113888394A (en) 2021-09-18 2021-09-18 Image deformation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113888394A true CN113888394A (en) 2022-01-04

Family

ID=79009932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111112770.1A Pending CN113888394A (en) 2021-09-18 2021-09-18 Image deformation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113888394A (en)

Similar Documents

Publication Publication Date Title
JP6400720B2 (en) View-independent color equalization 3D scene texture processing
US10424112B2 (en) Mesh boundary smoothing
WO2020108610A1 (en) Image processing method, apparatus, computer readable medium and electronic device
US9836879B2 (en) Mesh skinning technique
JP2001052194A (en) Reconfiguration for curved surface
US11263356B2 (en) Scalable and precise fitting of NURBS surfaces to large-size mesh representations
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN112767551B (en) Three-dimensional model construction method and device, electronic equipment and storage medium
CN109697748B (en) Model compression processing method, model mapping processing method, model compression processing device, and storage medium
CN109685095B (en) Classifying 2D images according to 3D arrangement type
AU2017272304A1 (en) Auto vr: an assistant system for virtual reality painting
US9019268B1 (en) Modification of a three-dimensional (3D) object data model based on a comparison of images and statistical information
CA2357962C (en) System and method for the coordinated simplification of surface and wire-frame descriptions of a geometric model
US20180211434A1 (en) Stereo rendering
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
US11010939B2 (en) Rendering of cubic Bezier curves in a graphics processing unit (GPU)
JP2018136793A (en) Image processing device, image processing method and program
CN115690359B (en) Point cloud processing method and device, electronic equipment and storage medium
CN115965735B (en) Texture map generation method and device
CN113888394A (en) Image deformation method and device and electronic equipment
CN106846498B (en) Laser point cloud rendering method and device
JPH0636013A (en) Method and device for generating topographic data
US9734579B1 (en) Three-dimensional models visual differential
US10636210B2 (en) Dynamic contour volume deformation
JP2655056B2 (en) Texture data generator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination