CN111383310B - Picture splitting method and device

Picture splitting method and device

Info

Publication number
CN111383310B
CN111383310B (application CN201811645350.8A)
Authority
CN
China
Prior art keywords
target
vertex
picture
information
coordinate
Prior art date
Legal status
Active
Application number
CN201811645350.8A
Other languages
Chinese (zh)
Other versions
CN111383310A (en)
Inventor
钟盛照
蔡述雄
吴珍妮
张昆
何奕佑
刘小龙
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201811645350.8A
Publication of CN111383310A
Application granted
Publication of CN111383310B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/18: Image warping, e.g. rearranging pixels individually
    • G06T 3/60: Rotation of whole images or parts thereof
    • G06T 3/602: Rotation of whole images or parts thereof by block rotation, e.g. by recursive reversal or rotation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a picture splitting method and device. The method comprises the following steps: acquiring target picture data and extracting a target area from the target picture data; traversing the target picture data with traversal windows, acquiring the vertex information of all the traversal windows covering the target picture data, and generating a plurality of target shape areas in the target area according to the vertex information; generating, for each target shape area, a target shape picture according to the pixel points covered by that area; and setting offset attribute information for each target shape picture and performing a displacement deflection with a split animation effect on each target shape picture according to the offset attribute information. By adopting the embodiment of the invention, the amount of computation in the picture splitting process can be reduced, so that the picture splitting processing efficiency is improved.

Description

Picture splitting method and device
Technical Field
The invention relates to the technical field of computers, in particular to a picture splitting method and device.
Background
With the continuous development of image technology and the emergence of various image applications, users increasingly use picture animations in pursuit of personalization.
In the prior art, the animation effect of a picture is mainly achieved by splitting the picture into rectangles. The user can freely select any region in the picture; all pixel points in that region are traversed and each pixel point is filled into a corresponding rectangle, so that every rectangle and the pixel point information corresponding to it can be computed. The rectangle splitting of the picture is then realized by changing the offset distance and rotation direction of each rectangle together with all the pixel point information corresponding to it. As a result, the position information of every pixel point in the selected area has to be changed in the splitting animation, so the amount of computation is too high and the picture splitting processing efficiency is low.
Disclosure of Invention
The embodiment of the invention provides a picture splitting method and device, which can reduce the operation amount in the picture splitting process so as to improve the picture splitting processing efficiency.
One aspect of the present invention provides a method for splitting a picture, including:
acquiring target picture data and extracting a target area in the target picture data;
traversing the target picture data by adopting traversal windows, acquiring vertex information covering all the traversal windows in the target picture data, and generating a plurality of target shape areas in the target area according to the vertex information;
generating a target shape picture corresponding to each target shape area according to the pixel points covered by each target shape area;
and setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information.
Wherein, the setting of the offset attribute information corresponding to each target shape picture includes:
determining first vertex information of each row in the target vertex matrix as first vertex coordinate information, and determining residual vertex information in the target vertex matrix as second vertex coordinate information, wherein the residual vertex information comprises vertex information except the first vertex information of each row in the target vertex matrix;
setting offset attribute information for the target shape picture in each line based on the first vertex coordinate information and the second vertex coordinate information of each line, respectively.
Wherein the performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information includes:
acquiring key point coordinate information of the target shape picture, wherein the key point coordinate information is determined by coordinate information corresponding to a target pixel point in the target shape picture;
determining a first direction target coordinate of the target shape picture according to a first direction coordinate in the key point coordinate information and the first direction offset;
determining a second direction target coordinate of the target shape picture according to a second direction coordinate in the key point coordinate information and the second direction offset;
and determining the offset target coordinate information of the target shape picture based on the first direction target coordinate and the second direction target coordinate, and performing displacement deflection with a split animation effect on the target shape picture according to the offset target coordinate information.
Another aspect of the present invention provides a picture splitting apparatus, including:
the data acquisition module is used for acquiring target picture data and extracting a target area in the target picture data;
the region generation module is used for traversing the target picture data by adopting traversal windows, acquiring vertex information covering all the traversal windows in the target picture data, and generating a plurality of target shape regions in the target region according to the vertex information;
the picture generation module is used for generating a target shape picture corresponding to each target shape area according to the pixel points covered by that target shape area;
and the splitting animation module is used for setting the offset attribute information corresponding to each target shape picture respectively and carrying out displacement deflection with splitting animation effect on each target shape picture according to the offset attribute information.
Another aspect of the present invention provides a picture splitting apparatus, including: a processor and a memory;
the processor is connected to a memory, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method in one aspect of the embodiment of the invention.
Another aspect of the embodiments of the present invention provides a computer storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, perform a method as in one aspect of the embodiments of the present invention.
In the whole implementation process of the picture splitting animation, the selected area in the picture can be split in the form of target shapes, and the pixel points covered by each target shape are converted into the form of a picture. The picture splitting animation effect can then be realized simply by changing the position information of each target shape picture, which avoids having to offset every pixel point in the target shape areas separately, reduces the amount of computation, and improves the picture splitting processing efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a scene schematic diagram of a picture splitting method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a picture splitting method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another picture splitting method according to an embodiment of the present invention;
figs. 4a-4c are schematic diagrams illustrating a method for obtaining a target shape area according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of setting offset attribute information according to an embodiment of the present invention;
fig. 6a and fig. 6b are schematic diagrams illustrating a principle of setting offset attribute information according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of another picture splitting method according to an embodiment of the present invention;
figs. 8a-8d are schematic interface diagrams of a method for splitting a picture according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a picture splitting apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an offset attribute setting unit according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another picture splitting apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a scene schematic diagram of a picture splitting method according to an embodiment of the present invention. As shown in fig. 1, the user may open a client 200a (e.g., a QQ client, a WeChat client, etc.) on the terminal device 100a and select a picture from the client 200a as the target picture data 300a for picture splitting. Of course, the user may also select a picture from an album on the terminal device 100a as the target picture data 300a, or open a camera application, take a picture or a video in real time, and use the picture or a video frame of the video as the target picture data 300a. The user may then select any region of the target picture data as the target area 400a to be split. The target picture data may be traversed using a rectangular window and the vertex information of the rectangular window during the traversal recorded; the single (odd-numbered) or double (even-numbered) rows of vertices in the recorded vertex information are then shifted in position, so that a displacement difference arises between the odd and even rows of vertex information. The shifted vertex information is connected, and a plurality of target shape regions are generated in the target area 400a, for example a plurality of triangle patterns, where each triangle pattern is formed by connecting the three vertices closest to one another. The pixel points covered by each target shape area are converted into picture form to generate a target shape picture. Then, offset attribute information is set for each target shape picture, and the deflection target coordinate information of each target shape picture can be determined from its original coordinate information and the set offset attribute information. According to the original coordinate information and the deflection target coordinate information of each target shape picture, a picture splitting animation effect of the target area 400a can be realized and displayed on the interface of the client 200a. Here, the offset attribute information may refer to the deflection coordinate information and the deflection angle information of each target shape picture on the horizontal axis and the vertical axis. If the user is not satisfied with the displayed picture splitting animation effect, the user can update the offset attribute information of each target shape picture manually until the desired picture splitting animation effect is achieved. The terminal device 100a may include a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Mobile Internet Device (MID), a wearable device (e.g., a smart watch, a smart bracelet, etc.), and other smart devices, and each terminal device may be installed with a client 200a capable of displaying the picture splitting animation effect.
The specific flow of the splitting process performed on the picture may refer to the following embodiments corresponding to fig. 2 to fig. 5.
Further, referring to fig. 2, fig. 2 is a schematic flow chart of a picture splitting method according to an embodiment of the present invention. As shown in fig. 2, the method may include:
step S101, acquiring target picture data and extracting a target area in the target picture data;
specifically, the terminal device may determine target picture data by selecting a picture or a video frame of a video from a social platform client or an album application or a video application by a user, display the target picture data on the client, and then determine a target area according to an area selected by the user in the target picture data, where the target area may be in any shape and any size (not exceeding the target picture data). For terminal equipment with a touch function, such as a mobile phone, a tablet personal computer, a palm computer and the like, a user can select any area in the target picture data by touching the terminal equipment interface with a finger or a touch pen; for terminal equipment without a touch function, such as a notebook computer, a desktop computer and the like, a user can select any area in the target picture data by using a mouse; the target picture data may be a picture or any video frame in a video.
Step S102, traversing the target picture data by adopting traversal windows, acquiring vertex information covering all the traversal windows in the target picture data, and generating a plurality of target shape areas in the target area according to the vertex information;
specifically, the target picture data may be traversed by using a traversal window (e.g., a rectangle), in the traversal process, the traversal window is used as a unit to move, and vertex information of the traversal window in the moving process is stored, of course, the same vertex information does not need to be stored repeatedly in the traversal process, that is, after the traversal of the entire target picture data by using the traversal window is completed, a series of non-repeated vertex information may be obtained in the target picture data region, and a part of the vertex information in the vertex information is moved by the same distance, so that a displacement difference is realized between two adjacent rows of vertex information in the vertex information, and then, the vertex information after the right shift is connected, so that a plurality of target shape regions are obtained in the target region. Since only the target area in the target image data is targeted, only the vertex information in the target area range needs to be connected to generate a plurality of target shape areas.
The target shape areas may be triangles, diamonds, and the like, formed by connecting the vertex information. If the target shape region is a triangle, the vertex information may be connected by triangulation to obtain a plurality of triangles; if the target shape region is a diamond, a plurality of diamonds may be obtained by connecting the vertex information.
Step S103, generating target shape pictures respectively corresponding to each target shape area according to the pixel points covered by each target shape area;
specifically, the pixel points respectively covered by the plurality of target shape areas are stored in an array form, and the array corresponding to each target shape area is converted into a picture form to generate a target shape picture. Specifically, the method may be represented as traversing pixel points within the range of the target region, recording information (e.g., coordinate information) of each pixel point, adding each pixel point into the corresponding target shape region, storing the information of the pixel point added in each target shape region with an array, and then converting the pixel information corresponding to each target shape region into a picture format through a canvas.
Optionally, some of the pixel points at the edge of the target region may not belong to any target shape region, and this small number of pixel points may simply be ignored when generating the target shape pictures. Alternatively, when the target shape regions are generated, a plurality of target shape regions covering the whole target region may be generated; in that case, when the target shape pictures are generated, the edge pixel points of the target region are added to the corresponding target shape regions, and an incomplete target shape picture is generated based on the edge pixel information in such a target shape region.
And step S104, setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information.
Specifically, after the target shape pictures are generated, one target shape picture in each row is selected as the base picture of that row (for example, the first target shape picture in each row), and the distance between each remaining target shape picture in the row and the base picture of the row is determined. Offset attribute information on the horizontal axis and the vertical axis is then set for each target shape picture according to this distance, and from the offset attribute information of each target shape picture, the exact position of the picture after the displacement deflection can be determined, realizing the displacement deflection of each target shape picture with the split animation effect. The offset attribute information set on the horizontal axis may be proportional to the distance; for the offset attribute information on the vertical axis, a ratio parameter may be set and multiplied by the offset attribute information set on the horizontal axis. For example, if the original position of a target shape picture is (2, 0), its distance from the base picture of its row is 3, the offset attribute information set on the horizontal axis is 2 × 3 = 6, and the offset attribute information set on the vertical axis is 0.2 × 2 × 3 = 1.2, then the position of the target shape picture after deflection is (8, 1.2).
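As a quick check of the worked example above, the following short Python sketch (illustrative values only) reproduces the deflected position (8, 1.2):

```python
# Worked example from the paragraph above (illustrative values only).
x, y = 2.0, 0.0            # original position of the target shape picture
distance = 3.0             # distance to the base picture of its row
dx = 2 * distance          # offset attribute on the horizontal axis: 6.0
dy = 0.2 * dx              # offset attribute on the vertical axis: 1.2
deflected = (x + dx, y + dy)
print(deflected)           # approximately (8.0, 1.2), matching the example above
```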
In the whole implementation process of the picture splitting animation, the selected area in the picture can be split in the form of target shapes, and the pixel points covered by each target shape are converted into the form of a picture. The picture splitting animation effect can then be realized simply by changing the position information of each target shape picture, which avoids having to offset every pixel point in the target shape areas separately, reduces the amount of computation, and improves the picture splitting processing efficiency.
Further, referring to fig. 3, fig. 3 is a schematic flow chart of another picture splitting method according to an embodiment of the present invention. As shown in fig. 3, the method may include the steps of:
step S201, acquiring target picture data and extracting a target area in the target picture data;
for a specific implementation process of the step S201, reference may be made to the description of the step S101 in the embodiment corresponding to fig. 2, which is not described herein again.
Step S202, traversing the target picture data through the rectangular window, generating a grid with the rectangular window as a minimum unit in the target picture data area, acquiring vertex information in the grid, and generating a vertex matrix;
specifically, the target picture data may be traversed by using a rectangular window, that is, the rectangular window is slid on the target picture data, and a region where the rectangular window is slid each time is not overlapped with a region where the rectangular window is slid each time, so that after the rectangular window traverses the entire target picture data, a mesh with the rectangular window as a minimum unit is generated in the region where the target picture data is located, all vertex information in the mesh is recorded, and all vertex information is stored in one vertex matrix. The vertex information may refer to coordinate information of vertices left by traversing through a rectangular window, and the vertex information in the vertex matrix is stored according to a vertex arrangement order in the mesh, that is, the number of rows and columns of vertices in the mesh is the number of rows and columns of the vertex matrix.
Step S203, carrying out position offset on the target row vertex information in the vertex matrix to generate a target vertex matrix, wherein the target row vertex information comprises single-row vertex information or double-row vertex information (i.e., the vertex information of the odd-numbered rows or of the even-numbered rows);
specifically, after the vertex matrix is obtained, the single-row and double-row displacement difference of the vertices in the vertex matrix may be obtained by performing position shift (for example, moving the distance of half of the width of the rectangular window to the right or left) on the vertex information of the single row or the double row in the vertex matrix, and the new vertex matrix may be determined as the target vertex matrix. It can be understood that, in the vertex matrix, the ordinate information of the vertices in the same row is the same, and every two adjacent vertices differ by the distance of the rectangular window width, the abscissa information of the vertices in the same column is the same, and every two adjacent vertices differ by the distance of the rectangular window height, and after the single row vertex information or the double row vertex information in the vertex matrix is subjected to position offset, the displacement difference of the single-row and double-row vertices in the horizontal axis direction can be realized. For example, if the vertex information of a single row in the vertex matrix is all shifted rightward by a distance of half the width of a rectangular window, assuming that the width of the rectangular window is 2 and the height is 1, the vertex information of a first row in the vertex matrix is (2, 0), the vertex information of a second row in the same column as the vertex information is (2, 1), and the vertex information of the first row is updated to (3, 0) after the vertex information of the single row is shifted rightward by half the distance of the rectangular window, so that a displacement difference in the horizontal axis direction occurs with the vertex information of the same column in the second row.
Step S204, connecting the vertex information in the target vertex matrix, and generating a shape area set in the target area;
specifically, after the target vertex matrix is determined, all vertex information belonging to the target region in the target vertex matrix is connected, and the vertex information is connected to form regions, thereby generating a form region set (for example, a triangle set, a diamond set, or the like). If the shape area is a triangle, connecting all vertex information belonging to the target area in the target vertex matrix into a triangle pattern; and if the shape area is a diamond shape, connecting all vertex information belonging to the target area in the target vertex matrix into a diamond pattern.
Step S205, acquiring the region perimeter and the shape attribute corresponding to each candidate shape region in the shape region set, and selecting, as the target shape regions, the candidate shape regions whose region perimeter is the target perimeter and whose shape attribute is the target shape attribute;
specifically, in the generated shape region set, some shape regions are unsatisfactory, and it is necessary to remove the unsatisfactory shape regions to obtain satisfactory shape regions, and determine the satisfactory shape regions as target shape regions. The specific mode of rejecting the shape area which does not meet the requirement is as follows: acquiring an area perimeter and a shape attribute corresponding to each shape area to be selected in the shape area set, if the shape area is a triangle, determining the shape area to be selected as a target shape area when the area perimeter of the shape area to be selected is a target value (the target value is related to the width and height of the rectangular window and the distance of the position offset, for example, the width and height of the rectangular window are 2, the height of the rectangular window is 1, and the distance of the position offset is 1, the target value is 4.828), and the shape attribute of the shape area to be selected is an isosceles triangle; and if the shape area is a diamond shape, determining the shape area to be selected as a target shape area when the area of the shape area to be selected is the area of the rectangular window.
Further, please refer to figs. 4a-4c, which are schematic diagrams illustrating the principle of obtaining a target shape area according to an embodiment of the present invention. As shown in fig. 4a, the acquired target picture data 300b may be traversed with a rectangular window 500a; after the whole target picture data 300b has been traversed, a mesh 600a with the rectangular window 500a as its minimum unit is generated in the target picture data area. The vertex information in the mesh 600a (for example, the coordinate information of vertex 700a, etc.) is then obtained, and all the obtained vertex information is stored in a matrix, which is determined as the vertex matrix; the vertex information in each row of the vertex matrix has the same vertical coordinate, and the vertex information in each column has the same horizontal coordinate. Taking fig. 4b as an example, the single-row vertex information in the vertex matrix may be moved to the right by half the width of the rectangular window 500a (for example, vertex 700a is moved to the right and updated to vertex 700b), so that a lateral displacement difference arises between the single and double rows of the vertex matrix, and the shifted matrix is determined as the target vertex matrix. The vertex information belonging to the target area 400b in the target vertex matrix is determined according to the target area 400b selected by the user, and this vertex information is connected to obtain the shape region set of fig. 4c (taking triangles as an example). In this shape region set, some of the triangles do not meet the requirements and need to be deleted. Specifically, the perimeter and the shape attribute of each triangle in the shape region set can be determined: if a triangle is not an isosceles triangle, it is deleted, and if the perimeter of a triangle is greater than the target value (the target value is related to the size of the rectangular window 500a; e.g., when the width of the rectangular window 500a is 2s and the height is s, the target value is 2s + 2√2s ≈ 4.83s), it is also deleted. Eventually, the plurality of triangles (i.e., the target shape areas) in fig. 4c is obtained.
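A hedged sketch of this rejection step, assuming triangular candidate regions and the window and offset sizes described above (width 2s, height s, shift s):

```python
import math

def side_lengths(tri):
    (a, b, c) = tri
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return d(a, b), d(b, c), d(c, a)

def filter_target_shapes(tris, s, eps=1e-6):
    # Keep only candidates that are isosceles and whose perimeter equals the
    # target value 2s + 2*sqrt(2)*s (about 4.83 * s); eps absorbs float error.
    target_perimeter = 2 * s + 2 * math.sqrt(2) * s
    kept = []
    for tri in tris:
        l1, l2, l3 = side_lengths(tri)
        isosceles = (abs(l1 - l2) < eps or abs(l2 - l3) < eps or abs(l1 - l3) < eps)
        if isosceles and abs((l1 + l2 + l3) - target_perimeter) < eps:
            kept.append(tri)
    return kept
```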
Step S206, generating target shape pictures respectively corresponding to each target shape area according to the pixel points covered by each target shape area;
and step S207, setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information.
For a specific implementation process of step S206 to step S207, reference may be made to the description of step S103 to step S104 in the embodiment corresponding to fig. 2, which is not described herein again.
Further, please refer to fig. 5, where fig. 5 is a schematic flowchart illustrating a process of setting offset attribute information according to an embodiment of the present invention. As shown in fig. 5, steps S301 to S311 are specific descriptions of step S207 in the embodiment corresponding to fig. 3, that is, steps S301 to S311 are a specific flow for setting offset attribute information and performing displacement deflection according to the embodiment of the present invention, and specifically include the following steps:
step S301, determining first vertex information of each row in the target vertex matrix as first vertex coordinate information, and determining residual vertex information in the target vertex matrix as second vertex coordinate information, wherein the residual vertex information comprises vertex information except the first vertex information of each row in the target vertex matrix;
specifically, after the target shape picture is generated, since the position information of the target shape picture can be represented by any vertex information in the target shape picture as a whole, the position information of the target shape picture can be represented by the vertex information in the target vertex matrix. The first vertex information of each row in the target vertex matrix may be determined as first vertex coordinate information, that is, the first vertex information of each row in the target vertex matrix represents position information of a first target shape picture of each row, and the remaining vertex information in the target vertex matrix may be determined as second vertex coordinate information representing position information of remaining target shape pictures. The remaining vertex information may include all vertex information except for the first vertex information of each row in the target vertex matrix, and the remaining target shape picture may include a target shape picture except for the first target shape picture of each row in the target shape picture.
Step S302, selecting a target shape picture as a processing picture from all target shape pictures;
specifically, after the first vertex coordinate information and the second vertex coordinate information are determined, offset attribute information may be set for each target shape picture according to the first vertex coordinate information and the second vertex coordinate information. One target shape picture is randomly selected among all the target shape pictures as a processing picture, and the following step of setting offset attribute information for the processing picture is performed.
Step S303, acquiring second vertex coordinate information corresponding to the processed picture as a second processing coordinate, and acquiring first vertex coordinate information belonging to the same row as the second processing coordinate as a first processing coordinate;
specifically, after the processed picture is determined, second vertex coordinate information corresponding to the processed picture is acquired, and first vertex coordinate information belonging to the same row as the second vertex coordinate information is acquired. For example, if the second vertex coordinate information corresponding to the processed picture is (2, 3) and (2, 3) is taken as the second processing coordinate, the first vertex coordinate information belonging to the same row as the second vertex coordinate information may be (0, 3) and (0, 3) may be taken as the first processing coordinate.
Step S304, determining a target distance between the first processing coordinate and the second processing coordinate, and determining a first direction offset corresponding to the processed picture according to the target distance;
specifically, a target distance between the first processing coordinate and the second processing coordinate, that is, a distance (for example, a distance in a horizontal axis direction) between a first direction coordinate (for example, an abscissa) in the first processing coordinate and a first direction coordinate in the second processing coordinate may be calculated from the first processing coordinate and the second processing coordinate, and a first direction offset amount may be set for the processed picture according to the first direction distance. The first direction offset may be obtained by multiplying an offset parameter by the first direction distance, for example, if the first direction distance of a processed picture is calculated to be 2, the first direction offset set for the processed picture may be 2 × a, where a is an offset parameter and may be any real number.
Step S305, determining a second direction offset corresponding to the processed picture according to an offset coefficient, a random offset direction and the target distance;
specifically, a second direction offset amount of the processed picture may be set according to the calculated first direction distance, and a determined offset coefficient and a random offset direction, where a magnitude of the second direction offset amount may be obtained by multiplying the offset coefficient by the first direction distance, the offset coefficient may be any real number, the magnitude of the offset coefficient may determine a magnitude of the second direction offset amount, the random offset direction may determine a direction of the second direction offset amount, the random offset direction may include an upward direction or a downward direction, the second direction offset amount may be represented as a positive number if the random offset direction is the upward direction, and the second direction offset amount may be represented as a negative number if the random offset direction is the downward direction.
Step S306, determining the first direction offset and the second direction offset as offset attribute information corresponding to the processed picture;
specifically, the offset attribute information of the processed picture can be determined according to the determined first direction offset and the second direction offset. For example, if the first direction offset amount of a processed picture is 6 and the second direction offset amount is-2, the offset attribute information of the processed picture is (6, -2), which indicates that the processed picture can be shifted to the right by a distance of 6 units and to the down by a distance of 2 units.
Step S307, when all the target shape pictures are taken as the processing pictures, obtaining offset attribute information corresponding to each target shape picture;
specifically, after the offset attribute information of the processed picture is determined, one target shape picture is newly selected from the remaining target shape pictures as the processed picture, and the steps S302 to S306 are performed until all the target shape pictures are used as the processed pictures and the offset attribute information is determined. Thereby, the offset attribute information corresponding to each target shape picture can be obtained.
Step S308, obtaining the coordinate information of key points of each target shape picture, wherein the coordinate information of the key points is determined by the coordinate information corresponding to the target pixel points in the target shape pictures;
specifically, the key point coordinate information corresponding to each target shape picture is obtained, where the key point coordinate information may represent coordinate information of a target shape picture where the key point is located, and may be determined by coordinate information of a target pixel point in the target shape picture, and the coordinate information of the target pixel point may be obtained from an array corresponding to the target shape picture, for example, the key point coordinate corresponding to each target shape picture is determined by coordinate information of a first pixel point in the array corresponding to each target shape picture. It can be understood that, as a whole, the coordinate point information of any pixel point in the target shape picture can represent the coordinate information of the target shape picture, and the pixel point information included in each target shape picture is different, so that the coordinate information of each target shape picture can be well distinguished.
Step S309, determining a first direction target coordinate of the target shape picture according to a first direction coordinate in the key point coordinate information and the first direction offset;
specifically, the first direction target coordinate of the target shape picture may be obtained by adding the first direction offset to the first direction coordinate in the key point coordinate information. The first direction may be understood as a horizontal axis direction, the first direction coordinates of the key point coordinate information may be understood as an original horizontal coordinate of the target shape picture, and the first direction target coordinates may be understood as a target horizontal coordinate of the target shape picture after deflection.
Step S310, determining a second direction target coordinate of the target shape picture according to a second direction coordinate in the key point coordinate information and the second direction offset;
specifically, the second direction target coordinate of the target shape picture may be obtained by adding the second direction offset to the second direction coordinate in the key point coordinate information. The second direction may be a longitudinal axis direction, the second direction coordinate of the key point coordinate information may be an original longitudinal coordinate of the target shape picture, and the second direction destination coordinate may be a destination longitudinal coordinate of the target shape picture after deflection.
Step S311, determining the offset destination coordinate information of the target shape picture based on the first direction destination coordinate and the second direction destination coordinate, and performing displacement deflection with a split animation effect on the target shape picture according to the offset destination coordinate information.
Specifically, the offset destination coordinate information of the target shape picture can be determined from the calculated first direction destination coordinate and second direction destination coordinate. During the split animation, the target shape picture is shifted from its key point coordinate information to the offset destination coordinate information, that is, the target shape picture is split away from its original position to the offset position. For example, if the key point coordinate information of a target shape picture is (2, 1) and its offset attribute information is (6, -2), the offset destination coordinate information of the picture is (8, -1), which means that the final position of the picture in the split animation is offset from its original position by 6 units in the positive direction of the horizontal axis and 2 units in the negative direction of the vertical axis. The offset destination coordinate information of each target shape picture is determined from its corresponding first direction and second direction destination coordinates. During the splitting animation, all target shape pictures are displaced and deflected simultaneously, and after each target shape picture has been offset to its corresponding offset destination coordinates, the pictures can also return from those coordinates to their original positions, restoring the original form of the target picture data. Of course, the displacement deflection of each target shape picture during the splitting animation may also include a deflection angle.
Optionally, the adjustment parameter may be acquired, the offset attribute information of the target shape picture may be updated, and then the offset destination coordinate information of the target shape picture may be determined again according to the updated offset attribute information, so as to achieve different split animation effects. That is, the final deflection position of the target shape picture in the splitting animation process can be adjusted through the acquired adjustment parameters.
Further, please refer to figs. 6a and 6b, which are schematic diagrams illustrating the principle of setting offset attribute information according to an embodiment of the present invention. As shown in fig. 6a, offset attribute information may be set for the target shape pictures in the target area: vertex 700c is determined as the first vertex coordinate information of its row, vertex 700e is determined as the first vertex coordinate information of its row, and the remaining vertex information is determined as the second vertex coordinate information, which includes, for example, vertex 700d and vertex 700f. When the target shape picture 800b is selected as the processed picture, it can be determined that the second vertex coordinate information corresponding to the target shape picture 800b is vertex 700d (the second processing coordinate) and the first vertex coordinate information is vertex 700c (the first processing coordinate), so the target distance between vertex 700d and vertex 700c can be calculated to be the width s of the rectangular window. As shown in fig. 6b, taking the target shape picture 800b as an example, with an offset parameter of 3 and the target distance s, the first direction offset of the target shape picture 800b is determined to be 3s, i.e., the offset in the horizontal axis direction is 3s; then, according to the offset coefficient, the random offset direction and the target distance, the second direction offset of the target shape picture 800b can be determined to be 0.8s, i.e., the offset in the vertical axis direction is 0.8s, where the offset coefficient is 0.6 and the random offset direction is upward. According to the determined first direction offset, the second direction offset, and the key point coordinate information corresponding to the target shape picture 800b (the coordinate information of the first pixel point in the pixel point array corresponding to the target shape picture 800b), the offset destination coordinate information of the target shape picture 800b, such as the position of the target shape picture 800e, can be determined. Similarly, offset attribute information can be set for the target shape pictures 800a, 800c, and 800d in the same way to obtain their offset destination coordinate information.
In this embodiment, target picture data is acquired and a target area is extracted from it; the target picture data is traversed with a rectangular window, the vertex information of all traversal windows covering the target picture data is acquired, the single-row or double-row vertex information is shifted in position, and a plurality of target shape areas are generated in the target area according to the shifted vertex information; a target shape picture is generated for each target shape area according to the pixel points covered by that area; offset attribute information is set for each target shape picture, and a displacement deflection with a split animation effect is performed on each target shape picture according to the offset attribute information. Therefore, in the whole implementation process of the picture splitting animation, the target area selected by the user can be extracted and gridded with the rectangular window to obtain vertex information; the target shape areas can be generated in the target area through the single-row and double-row displacement difference of the vertex information, and all pixel points covered by each target shape area are converted into a target shape picture. The selected target area can then be displaced and deflected in the form of target shape pictures, and since the pixel points covered by each target shape are converted into picture form, the picture splitting animation effect can be realized simply by changing the position information of each target shape picture. This avoids having to offset every pixel point in the target shape areas separately, reduces the amount of computation, and improves the picture splitting processing efficiency.
Further, please refer to fig. 7, fig. 7 is a schematic flowchart of another picture splitting method according to an embodiment of the present invention. As shown in fig. 7, the method may include the steps of:
step S401, acquiring target picture data and extracting a target area in the target picture data;
for a specific implementation process of step S401, reference may be made to the description of step S101 in the embodiment corresponding to fig. 2, which is not described herein again.
Step S402, obtaining an animation splitting angle, and rotating the target picture data in a coordinate axis based on the animation splitting angle;
specifically, in order to ensure the identity of the algorithm, the target picture data subjected to animation splitting at each angle is rotated, please refer to fig. 8a, which is an interface schematic diagram of a picture splitting method according to an embodiment of the present invention, and taking fig. 8a as an example, on the terminal device 100a, the user selects the target area 300c from the target picture data 300c as the area for the split animation, two sliding bars are displayed on the interface of the terminal 100a, the angle sliding bar can adjust the splitting angle of the picture animation, the value range of the sliding bar is 0-360 degrees, the downward sliding button on the sliding bar indicates that the splitting angle of the picture animation is larger (the default is that the splitting direction corresponding to 0 degree is horizontal right, the splitting direction corresponding to 90 degrees is vertical upward, the splitting direction corresponding to 180 degrees is horizontal left, and so on), and a user can determine the splitting angle of the picture by adjusting the sliding button; and the other distance sliding bar can adjust the deflection distance of the target shape picture in the picture animation splitting, the deflection distance is represented on the deflection attribute information set by the target shape picture, the value set on the distance sliding bar is multiplied by the deflection attribute information which is the deflection attribute information of the coordinate information of the target shape picture for finally calculating the deflection, and the downwards sliding button indicates that the distance of the target shape picture from the original position is larger. The animation splitting angle may be obtained according to the position of the slide button on the angle slide bar, as shown in fig. 8a, the animation splitting angle set by the user may be obtained as 90 degrees, please refer to fig. 8b, the target picture data 300c may be rotated clockwise by 90 degrees in the coordinate axis, and then the target picture data 300c may be subsequently processed. It is understood that the processes shown in fig. 8b are all processed on the client background, and are not displayed on the interface of the terminal device 100 a.
Step S403, traversing the target picture data by adopting traversal windows, acquiring vertex information covering all the traversal windows in the target picture data, and generating a plurality of target shape areas in the target area according to the vertex information;
step S404, generating target shape pictures respectively corresponding to each target shape area according to the pixel points covered by each target shape area;
for specific implementation processes of step S403 to step S404, reference may be made to the description of step S102 to step S103 in the embodiment corresponding to fig. 2, and details are not repeated here.
Step S405, setting offset attribute information corresponding to each target shape picture;
for a specific implementation process of step S405, reference may be made to the description of step S301 to step S307 in the embodiment corresponding to fig. 5, which is not described herein again.
Step S406, based on the animation splitting angle, rotating the view container including the coordinate axis and the target picture data, and performing displacement deflection with a splitting animation effect on each target shape picture in the rotated view container according to the offset attribute information.
Specifically, after the corresponding offset attribute information has been set for each target shape picture in the coordinate axes, the view container containing the coordinate axes and the target picture data is rotated by the animation splitting angle in the direction opposite to the rotation of step S402. The offset destination coordinate information of each target shape picture can then be determined in the view container based on the offset attribute information corresponding to that picture, so as to achieve the splitting animation effect of each target shape picture. The view container may contain different components and determines how these components are displayed. For the specific implementation of setting the offset attribute information, reference may be made to the embodiment corresponding to fig. 5, which is not repeated here.
Further, please refer to figs. 8c-8d, which are schematic interface diagrams of another picture splitting method according to an embodiment of the present invention. As shown in fig. 8c, in the view container, a rectangular window is used to traverse the target picture data 300c of fig. 8b after it has been rotated clockwise by 90 degrees, and a mesh with the rectangular window as its minimum unit is generated in the region of the target picture data 300c. The vertex information of the mesh is obtained and stored as a vertex matrix, and the single-row vertex information in the vertex matrix is then moved to the right by half the width of the rectangular window to obtain the target vertex matrix. All vertex information in the target area 400c in the target vertex matrix is connected into triangle patterns (target shape regions), the pixel information covered by each triangle is stored in its own pixel array, each pixel array is converted into a triangular picture (target shape picture), and offset attribute information can then be set for each triangular picture according to its coordinate information in the coordinate axes. When the offset attribute information has been set for all the triangular pictures, the coordinate axes and the target picture data 300c are rotated counterclockwise by 90 degrees. It can be seen that, in the view container, after the counterclockwise rotation of the target picture data 300c, the offset attribute information corresponding to each triangular picture in the target area 400c is unchanged in the coordinate axes. Since the animation splitting angle is 90 degrees, as shown in fig. 8d, the splitting animation effect of the triangular pictures in the target area 400c can be displayed on the interface of the terminal device 100a, with the overall direction of the animation splitting being upward. The user can then change the angle and distance of the splitting animation of the triangular pictures by adjusting the buttons on the angle slider and the distance slider on the interface until the desired picture splitting animation effect is achieved. The processing shown in fig. 8c is carried out entirely in the background and is not displayed on the interface of the terminal device 100a; when the picture splitting animation of the target area 400c is displayed, only the splitting animation process and the two sliders (the angle slider and the distance slider) appear on the interface. In this way, by rotating the target picture data by the angle corresponding to the splitting angle, the background processing can always handle the target picture data with a horizontal, rightward splitting direction, which avoids using different processing algorithms for different splitting directions and reduces development cost. The default splitting direction for 0 degrees is not limited to horizontally rightward and may be any direction, which is not described again here.
The method comprises the steps of: obtaining target picture data and extracting a target area in the target picture data; acquiring an animation splitting angle and rotating the target picture data in a coordinate axis based on the animation splitting angle; traversing the target picture data with a traversal window, acquiring vertex information covering all the traversal windows in the target picture data, and generating a plurality of target shape areas in the target area according to the vertex information; generating a target shape picture corresponding to each target shape area according to the pixel points covered by that area; and setting offset attribute information corresponding to each target shape picture, rotating a view container comprising the coordinate axis and the target picture data based on the animation splitting angle, and performing displacement deflection with a splitting animation effect on each target shape picture in the rotated view container according to the offset attribute information. Therefore, in the whole implementation of the picture splitting animation, the target picture data can first be rotated by a certain angle, and a plurality of target shape pictures are then generated in the target area, so that the target area can be split in the form of target shape pictures. Because the pixel points covered by each target shape area are converted into picture form, the picture splitting animation effect can be realized merely by changing the position information of each target shape picture, which avoids offsetting every pixel point in each target shape area separately, reduces the amount of computation, and improves the efficiency of the picture splitting processing.
Further, please refer to fig. 9, which is a schematic structural diagram of a picture splitting apparatus according to an embodiment of the present invention. As shown in fig. 9, the picture splitting apparatus 1 may be applied to the terminal device in the embodiment corresponding to fig. 2, and the picture splitting apparatus 1 may include: a data acquisition module 10, a region generation module 20, a picture generation module 30, and a splitting animation module 40;
the data acquisition module 10 is configured to acquire target picture data and extract a target area in the target picture data;
a region generating module 20, configured to traverse the target picture data using a traversal window, obtain vertex information covering all traversal windows in the target picture data, and generate a plurality of target shape regions in the target region according to the vertex information;
the picture generation module 30 is configured to generate a target shape picture corresponding to each target shape region according to the pixel points covered by that target shape region;
and the splitting animation module 40 is configured to set offset attribute information corresponding to each target shape picture, and perform displacement deflection with a splitting animation effect on each target shape picture according to the offset attribute information.
The specific functional implementation manners of the data acquisition module 10, the region generation module 20, the picture generation module 30, and the splitting animation module 40 may refer to steps S101 to S104 in the embodiment corresponding to fig. 2, which are not described herein again.
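For orientation, the sketch below shows one possible way these four modules could be grouped in code; the class, method, and field names are illustrative assumptions rather than the apparatus of fig. 9, and the method bodies are left as stubs, since the concrete behavior is described in the embodiments corresponding to figs. 2 to 8.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ShapePiece:
        pixels: list                          # pixel array covered by one target shape region
        key_point: Tuple[float, float]        # reference coordinate of the piece
        offset: Tuple[float, float] = (0.0, 0.0)

    class PictureSplitter:
        """Groups the data acquisition (10), region generation (20),
        picture generation (30) and splitting animation (40) roles."""

        def acquire(self, picture, selection):
            # data acquisition: keep the picture and the selected target area
            self.picture, self.target_region = picture, selection

        def generate_regions(self, window_w: int, window_h: int):
            # region generation: traverse with a rectangular window and build
            # the (offset) vertex matrix; see the mesh sketch further below
            ...

        def generate_pieces(self) -> List[ShapePiece]:
            # picture generation: copy the pixels covered by each target shape region
            ...

        def animate(self, pieces: List[ShapePiece]):
            # splitting animation: set offset attributes and displace each piece
            ...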
Referring to fig. 9, the picture splitting apparatus 1 may further include a splitting angle obtaining module 50;
a splitting angle obtaining module 50, configured to obtain an animation splitting angle, rotate the target picture data in a coordinate axis based on the animation splitting angle, and perform the step of traversing the target picture data by using a traversal window;
the splitting animation module 40 is specifically configured to rotate the view container including the coordinate axis and the target picture data based on the animation splitting angle, and perform displacement deflection with a splitting animation effect on each target shape picture in the rotated view container according to the offset attribute information.
The specific functional implementation manner of the splitting angle obtaining module 50 may refer to step S402 in the embodiment corresponding to fig. 7, which is not described herein again.
Further, as shown in fig. 9, the region generation module 20 includes: a mesh generating unit 201, a position offset unit 202, and a vertex connecting unit 203;
a mesh generating unit 201, configured to traverse the target picture data through the rectangular window, generate a mesh with the rectangular window as a minimum unit in the target picture data region, acquire vertex information in the mesh, and generate a vertex matrix;
a position offset unit 202, configured to perform position offset on vertex information of a target row in the vertex matrix to generate a target vertex matrix, where the vertex information of the target row includes vertex information of a single row or vertex information of a double row;
a vertex connecting unit 203, configured to connect vertex information in the target vertex matrix, and generate a plurality of target shape areas having target shape attributes in the target area.
The specific functional implementation manners of the mesh generation unit 201, the position offset unit 202, and the vertex connection unit 203 may refer to steps S202 to S205 in the embodiment corresponding to fig. 3, which is not described herein again.
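Under simplifying assumptions, the work of these three units can be sketched as follows: the grid step equals the window size, every other row of vertices is shifted right by half the window width (as in the fig. 8c example), and each grid cell is simply split into two triangles instead of being filtered by perimeter and shape attribute. All function names are illustrative, not the claimed implementation.

    import numpy as np

    def build_vertex_matrix(img_w, img_h, win_w, win_h):
        xs = np.arange(0, img_w + 1, win_w, dtype=float)
        ys = np.arange(0, img_h + 1, win_h, dtype=float)
        # vertex_matrix[row, col] holds the (x, y) coordinate of one grid vertex
        return np.stack(np.meshgrid(xs, ys), axis=-1)

    def offset_single_rows(vertex_matrix, win_w):
        shifted = vertex_matrix.copy()
        shifted[1::2, :, 0] += win_w / 2.0    # shift every other row by half a window
        return shifted

    def cell_triangles(vertex_matrix):
        rows, cols, _ = vertex_matrix.shape
        tris = []
        for r in range(rows - 1):
            for c in range(cols - 1):
                a = tuple(vertex_matrix[r, c])
                b = tuple(vertex_matrix[r, c + 1])
                d = tuple(vertex_matrix[r + 1, c])
                e = tuple(vertex_matrix[r + 1, c + 1])
                tris += [(a, b, d), (b, e, d)]    # two triangles per grid cell
        return tris

    grid = offset_single_rows(build_vertex_matrix(120, 80, 20, 20), 20)
    print(len(cell_triangles(grid)))              # number of candidate target shape regions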
Further, as shown in fig. 9, the splitting animation module 40 may include: a vertex coordinate determination unit 401, an offset attribute setting unit 402, a key point coordinate acquisition unit 403, a first destination coordinate determination unit 404, a second destination coordinate determination unit 405, and a displacement deflection unit 406;
a vertex coordinate determining unit 401, configured to determine first vertex information of each row in the target vertex matrix as first vertex coordinate information, and determine remaining vertex information in the target vertex matrix as second vertex coordinate information, where the remaining vertex information includes vertex information in the target vertex matrix except the first vertex information of each row;
an offset attribute setting unit 402 configured to set offset attribute information for the target shape picture in each line, respectively, based on the first vertex coordinate information and the second vertex coordinate information of each line;
a key point coordinate obtaining unit 403, configured to obtain key point coordinate information of each target shape picture, where the key point coordinate information is determined by coordinate information corresponding to a target pixel point in the target shape picture;
a first destination coordinate determining unit 404, configured to determine a first direction destination coordinate of each target shape picture according to a first direction coordinate in the key point coordinate information and the first direction offset;
a second destination coordinate determining unit 405, configured to determine a second direction destination coordinate of each target shape picture according to a second direction coordinate in the key point coordinate information and the second direction offset;
and a displacement deflection unit 406, configured to determine, based on the first-direction destination coordinate and the second-direction destination coordinate, offset destination coordinate information of the target shape picture, and perform displacement deflection with a split animation effect on the target shape picture according to the offset destination coordinate information.
For specific functional implementation manners of the vertex coordinate determining unit 401, the offset attribute setting unit 402, the key point coordinate obtaining unit 403, the first destination coordinate determining unit 404, the second destination coordinate determining unit 405, and the displacement deflecting unit 406, reference may be made to steps S301 to S311 in the embodiment corresponding to fig. 5, which is not described herein again.
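As a hedged sketch of how the destination-coordinate units might combine their inputs, the example below assumes the destination coordinate is simply the key-point coordinate plus the corresponding directional offset; the embodiment names the inputs but does not fix an exact formula here, and the class name is an assumption.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class PieceOffset:
        key_point: Tuple[float, float]   # key point coordinate information of the piece
        dx: float                        # first-direction offset
        dy: float                        # second-direction offset

        def destination(self) -> Tuple[float, float]:
            # first-direction and second-direction destination coordinates
            return self.key_point[0] + self.dx, self.key_point[1] + self.dy

    print(PieceOffset(key_point=(40.0, 25.0), dx=12.0, dy=-3.0).destination())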
Further, as shown in fig. 9, the vertex connecting unit 203 may include: a region set generating subunit 2031 and a region selecting subunit 2032;
a region set generating subunit 2031, configured to connect vertex information in the target vertex matrix, and generate a shape region set in the target region;
the region selecting subunit 2032 is configured to acquire a region perimeter and a shape attribute corresponding to each shape region to be selected in the shape region set, and select, as the target shape region, a shape region to be selected in the shape region set, where the region perimeter is the target perimeter and the shape attribute is the target shape attribute.
The specific functional implementation manners of the region set generating subunit 2031 and the region selecting subunit 2032 may refer to step S204 to step S205 in the embodiment corresponding to fig. 3, which is not described herein again.
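The selection performed by the region selecting subunit 2032 can be sketched as a plain filter over the candidate set; the perimeter tolerance and the string shape labels below are illustrative assumptions.

    import math

    def perimeter(vertices):
        n = len(vertices)
        return sum(math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n))

    def select_target_regions(candidates, target_perimeter, target_shape, tol=1e-6):
        # keep regions whose perimeter equals the target perimeter and whose
        # shape attribute matches the target shape attribute
        return [(verts, shape) for verts, shape in candidates
                if shape == target_shape and abs(perimeter(verts) - target_perimeter) < tol]

    candidates = [([(0, 0), (20, 0), (10, 20)], "triangle"),
                  ([(0, 0), (20, 0), (20, 20), (0, 20)], "quadrilateral")]
    target = perimeter(candidates[0][0])
    print(select_target_regions(candidates, target, "triangle"))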
Further, referring to fig. 10, fig. 10 is a schematic structural diagram of an offset attribute setting unit according to an embodiment of the present invention. As shown in fig. 10, the offset attribute setting unit 402 may include: a selection subunit 4021, a first acquisition subunit 4022, a first determination subunit 4023, a second determination subunit 4024, a third determination subunit 4025, and a second acquisition subunit 4026;
a selecting subunit 4021, configured to select a target shape picture as a processed picture from all target shape pictures;
a first obtaining subunit 4022, configured to obtain second vertex coordinate information corresponding to the processed picture as a second processing coordinate, and obtain first vertex coordinate information belonging to the same row as the second processing coordinate as a first processing coordinate;
a first determining subunit 4023, configured to determine a target distance between the first processing coordinate and the second processing coordinate, and determine a first direction offset corresponding to the processed picture according to the target distance;
a second determining subunit 4024, configured to determine, according to a shift coefficient, a random shift direction, and the target distance, a second direction shift amount corresponding to the processed picture;
a third determining subunit 4025, configured to determine the first direction offset and the second direction offset as offset attribute information corresponding to the processed picture;
a second obtaining subunit 4026, configured to obtain the offset attribute information corresponding to each target shape picture when all the target shape pictures have been taken as the processed picture.
For specific functional implementation manners of the selecting subunit 4021, the first obtaining subunit 4022, the first determining subunit 4023, the second determining subunit 4024, the third determining subunit 4025, and the second obtaining subunit 4026, reference may be made to step S302 to step S307 in the embodiment corresponding to fig. 5, which is not described herein again.
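A hedged sketch of one way the two offsets could be derived from the quantities named above (the target distance, an offset coefficient, and a random offset direction) is given below; the linear formulas are assumptions, since the embodiment only specifies which inputs are used.

    import math
    import random

    def offset_attributes(first_vertex, piece_vertex, coefficient=0.2):
        # target distance between the row's first vertex coordinate (first
        # processing coordinate) and the piece's vertex (second processing coordinate)
        target_distance = math.dist(first_vertex, piece_vertex)
        first_direction_offset = target_distance
        second_direction_offset = coefficient * random.choice((-1.0, 1.0)) * target_distance
        return first_direction_offset, second_direction_offset

    print(offset_attributes((0.0, 40.0), (60.0, 40.0)))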
The method comprises the steps of: obtaining target picture data and extracting a target area in the target picture data; traversing the target picture data with a rectangular window, acquiring vertex information covering all traversal windows in the target picture data, performing position offset on the single-row or double-row vertex information in the vertex information, and generating a plurality of target shape areas in the target area according to the position-offset vertex information; generating a target shape picture corresponding to each target shape area according to the pixel points covered by that area; and setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a splitting animation effect on each target shape picture according to the offset attribute information. Therefore, in the whole implementation of the picture splitting animation, a target area selected by the user can be extracted, vertex information is obtained by laying grid points over the target area with a rectangular window, and target shape areas are generated in the target area by offsetting the single-row or double-row vertex information. All pixel points covered by each target shape area are converted into a target shape picture, so the selected target area can be displaced and deflected in the form of target shape pictures. Because the pixel points covered by each target shape area are converted into picture form, the picture splitting animation effect can be achieved merely by changing the position information of each target shape picture, which avoids offsetting every pixel point in each target shape area separately, reduces the amount of computation, and improves the efficiency of the picture splitting processing.
Referring to fig. 11, fig. 11 is a schematic structural diagram of another picture splitting apparatus according to an embodiment of the present invention. As shown in fig. 11, the picture splitting apparatus 1000 may be applied to the terminal device in the embodiment corresponding to fig. 2, and the picture splitting apparatus 1000 may include: a processor 1001, a network interface 1004, and a memory 1005; the picture splitting apparatus 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 11, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the picture splitting apparatus 1000 shown in fig. 11, the network interface 1004 can provide a network communication function; the user interface 1003 is mainly used to provide an input interface for the user; and the processor 1001 may be configured to invoke the device control application stored in the memory 1005 to implement:
acquiring target picture data and extracting a target area in the target picture data;
traversing the target picture data by adopting traversal windows, acquiring vertex information covering all the traversal windows in the target picture data, and generating a plurality of target shape areas in the target area according to the vertex information;
generating a target shape picture corresponding to each target shape area according to the pixel points covered by each target shape area;
and setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information.
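As an illustration of the third step listed above, turning the pixel points covered by one target shape area into a stand-alone picture can be done by masking and cropping; the Pillow-based sketch below and all of its names are assumptions for illustration, not the claimed implementation.

    from PIL import Image, ImageDraw

    def extract_piece(image: Image.Image, triangle):
        # build a mask for the triangular target shape area
        mask = Image.new("L", image.size, 0)
        ImageDraw.Draw(mask).polygon(triangle, fill=255)
        # copy only the covered pixels onto a transparent canvas
        piece = Image.new("RGBA", image.size, (0, 0, 0, 0))
        piece.paste(image, (0, 0), mask)
        # crop to the triangle's bounding box to obtain the target shape picture
        xs = [p[0] for p in triangle]
        ys = [p[1] for p in triangle]
        return piece.crop((min(xs), min(ys), max(xs), max(ys)))

    # Example: one triangular piece cut from a hypothetical source image.
    src = Image.new("RGBA", (120, 80), (200, 60, 60, 255))
    print(extract_piece(src, [(0, 0), (20, 0), (10, 20)]).size)   # -> (20, 20)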
In an embodiment, the traversal window is a rectangular window, and when the processor 1001 executes the traversal of the target picture data by using the traversal window, acquires vertex information covering all traversal windows in the target picture data, and generates a plurality of target shape regions in the target region according to the vertex information, the following steps are specifically executed:
traversing the target picture data through the rectangular window, generating a grid with the rectangular window as a minimum unit in the target picture data area, acquiring vertex information in the grid, and generating a vertex matrix;
performing position offset on the vertex information of the target row in the vertex matrix to generate a target vertex matrix, wherein the vertex information of the target row comprises single-row vertex information or double-row vertex information;
and connecting the vertex information in the target vertex matrix to generate a plurality of target shape areas with target shape attributes in the target area.
In one embodiment, when the processor 1001 performs the connecting of the vertex information in the target vertex matrix to generate a plurality of target shape areas having target shape attributes in the target area, the following steps are specifically executed:
connecting vertex information in the target vertex matrix to generate a shape region set in the target region;
and acquiring the area perimeter and the shape attribute corresponding to each shape area to be selected in the shape area set, and selecting, as the target shape area, the shape area to be selected in the shape area set whose area perimeter is the target perimeter and whose shape attribute is the target shape attribute.
In an embodiment, when the processor 1001 performs the setting of the offset attribute information corresponding to each target shape picture, the following steps are specifically performed:
determining first vertex information of each row in the target vertex matrix as first vertex coordinate information, and determining residual vertex information in the target vertex matrix as second vertex coordinate information, wherein the residual vertex information comprises vertex information except the first vertex information of each row in the target vertex matrix;
setting offset attribute information for the target shape picture in each line based on the first vertex coordinate information and the second vertex coordinate information of each line, respectively.
In an embodiment, when executing the setting of offset attribute information for the target shape picture in each line based on the first vertex coordinate information and the second vertex coordinate information of each line, the processor 1001 specifically executes the following steps:
selecting a target shape picture as a processing picture from all the target shape pictures;
acquiring second vertex coordinate information corresponding to the processed picture as a second processing coordinate, and acquiring first vertex coordinate information belonging to the same row as the second processing coordinate as a first processing coordinate;
determining a target distance between the first processing coordinate and the second processing coordinate, and determining a first direction offset corresponding to the processed picture according to the target distance;
determining a second direction offset corresponding to the processed picture according to the offset coefficient, the random offset direction and the target distance;
determining the first direction offset and the second direction offset as offset attribute information corresponding to the processed picture;
and when all the target shape pictures are taken as the processing pictures, obtaining the offset attribute information corresponding to each target shape picture.
In an embodiment, when executing the above-mentioned displacement deflection with a split animation effect on each target shape picture according to the offset attribute information, the processor 1001 specifically executes the following steps:
acquiring key point coordinate information of each target shape picture, wherein the key point coordinate information is determined by coordinate information corresponding to a target pixel point in the target shape picture;
determining a first direction target coordinate of the target shape picture according to a first direction coordinate in the key point coordinate information and the first direction offset;
determining a second direction target coordinate of the target shape picture according to a second direction coordinate in the key point coordinate information and the second direction offset;
and determining the offset target coordinate information of the target shape picture based on the first direction target coordinate and the second direction target coordinate, and performing displacement deflection with a split animation effect on the target shape picture according to the offset target coordinate information.
In an embodiment, before performing traversal of the target picture data with the traversal window, the processor 1001 may further perform the following steps:
acquiring an animation splitting angle, rotating the target picture data in a coordinate axis based on the animation splitting angle, and executing the step of traversing the target picture data by adopting a traversal window;
then, the performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information includes:
and rotating the view container comprising the coordinate axis and the target picture data based on the animation splitting angle, and performing displacement deflection with a splitting animation effect on each target shape picture in the rotated view container according to the offset attribute information.
The method comprises the steps of: obtaining target picture data and extracting a target area in the target picture data; traversing the target picture data with a rectangular window, acquiring vertex information covering all traversal windows in the target picture data, performing position offset on the single-row or double-row vertex information in the vertex information, and generating a plurality of target shape areas in the target area according to the position-offset vertex information; generating a target shape picture corresponding to each target shape area according to the pixel points covered by that area; and setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a splitting animation effect on each target shape picture according to the offset attribute information. Therefore, in the whole implementation of the picture splitting animation, a target area selected by the user can be extracted, vertex information is obtained by laying grid points over the target area with a rectangular window, and target shape areas are generated in the target area by offsetting the single-row or double-row vertex information. All pixel points covered by each target shape area are converted into a target shape picture, so the selected target area can be displaced and deflected in the form of target shape pictures. Because the pixel points covered by each target shape area are converted into picture form, the picture splitting animation effect can be achieved merely by changing the position information of each target shape picture, which avoids offsetting every pixel point in each target shape area separately, reduces the amount of computation, and improves the efficiency of the picture splitting processing.
It should be understood that the picture splitting apparatus 1000 described in this embodiment of the present invention can perform the picture splitting method described in any one of the embodiments corresponding to fig. 2 to fig. 8, and can also implement the picture splitting apparatus 1 described in the embodiment corresponding to fig. 9 or fig. 10, which is not repeated here. In addition, the beneficial effects of using the same method are not described again.
Further, it should be noted that an embodiment of the present invention also provides a computer storage medium, where the computer storage medium stores the aforementioned computer program executed by the picture splitting apparatus 1, and the computer program includes program instructions. When the processor executes the program instructions, the picture splitting method described in any one of the embodiments corresponding to fig. 2 to fig. 8 can be performed, which is therefore not repeated here. In addition, the beneficial effects of using the same method are not described again. For technical details not disclosed in the computer storage medium embodiments of the present invention, refer to the description of the method embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is merely a preferred embodiment of the present invention and is of course not intended to limit the scope of the claims of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope of the present invention.

Claims (10)

1. A picture splitting method, comprising:
acquiring target picture data and extracting a target area in the target picture data;
traversing the target picture data through a rectangular window, generating a grid with the rectangular window as a minimum unit in the target picture data area, acquiring vertex information in the grid, and generating a vertex matrix;
performing position offset on the vertex information of the target row in the vertex matrix to generate a target vertex matrix, wherein the vertex information of the target row comprises single-row vertex information or double-row vertex information;
connecting vertex information in the target vertex matrix, and generating a plurality of target shape areas with target shape attributes in the target area;
generating a target shape picture corresponding to each target shape area according to the pixel points covered by each target shape area;
and setting offset attribute information corresponding to each target shape picture, and performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information.
2. The method of claim 1, wherein said connecting vertex information in the target vertex matrix to generate a plurality of target shape regions having target shape attributes within the target region comprises:
connecting vertex information in the target vertex matrix to generate a shape region set in the target region;
and acquiring the area perimeter and the shape attribute corresponding to each shape area to be selected in the shape area set, and selecting, as the target shape area, the shape area to be selected in the shape area set whose area perimeter is the target perimeter and whose shape attribute is the target shape attribute.
3. The method according to claim 1, wherein the setting offset attribute information corresponding to each target shape picture comprises:
determining first vertex information of each row in the target vertex matrix as first vertex coordinate information, and determining residual vertex information in the target vertex matrix as second vertex coordinate information, wherein the residual vertex information comprises vertex information except the first vertex information of each row in the target vertex matrix;
setting offset attribute information for the target shape picture in each line based on the first vertex coordinate information and the second vertex coordinate information of each line, respectively.
4. The method according to claim 3, wherein the setting offset attribute information for the target shape picture in each row based on the first vertex coordinate information and the second vertex coordinate information of each row respectively comprises:
selecting a target shape picture as a processing picture from all the target shape pictures;
acquiring second vertex coordinate information corresponding to the processed picture as a second processing coordinate, and acquiring first vertex coordinate information belonging to the same row as the second processing coordinate as a first processing coordinate;
determining a target distance between the first processing coordinate and the second processing coordinate, and determining a first direction offset corresponding to the processed picture according to the target distance;
determining a second direction offset corresponding to the processed picture according to the offset coefficient, the random offset direction and the target distance;
determining the first direction offset and the second direction offset as offset attribute information corresponding to the processed picture;
and when all the target shape pictures are taken as the processing pictures, obtaining the offset attribute information corresponding to each target shape picture.
5. The method according to claim 4, wherein the performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information comprises:
acquiring key point coordinate information of the target shape picture, wherein the key point coordinate information is determined by coordinate information corresponding to a target pixel point in the target shape picture;
determining a first direction target coordinate of the target shape picture according to a first direction coordinate in the key point coordinate information and the first direction offset;
determining a second direction target coordinate of the target shape picture according to a second direction coordinate in the key point coordinate information and the second direction offset;
and determining the offset target coordinate information of the target shape picture based on the first direction target coordinate and the second direction target coordinate, and performing displacement deflection with a split animation effect on the target shape picture according to the offset target coordinate information.
6. The method of claim 1, further comprising:
acquiring an animation splitting angle, rotating the target picture data in a coordinate axis based on the animation splitting angle, and executing the step of traversing the target picture data through a rectangular window;
then, the performing displacement deflection with a split animation effect on each target shape picture according to the offset attribute information includes:
and rotating the view container comprising the coordinate axis and the target picture data based on the animation splitting angle, and performing displacement deflection with a splitting animation effect on each target shape picture in the rotated view container according to the offset attribute information.
7. A picture splitting apparatus, comprising:
the data acquisition module is used for acquiring target picture data and extracting a target area in the target picture data;
the region generation module is used for traversing the target picture data through a rectangular window, generating a grid with the rectangular window as a minimum unit in the target picture data region, acquiring vertex information in the grid and generating a vertex matrix;
the region generating module is further configured to perform position offset on the vertex information of the target row in the vertex matrix to generate a target vertex matrix, where the vertex information of the target row includes single-row vertex information or double-row vertex information;
the region generation module is further configured to connect vertex information in the target vertex matrix and generate a plurality of target shape regions with target shape attributes in the target region;
the image generation module is used for generating target shape images corresponding to each target shape area according to the pixel points covered by each target shape area;
and the splitting animation module is used for setting the offset attribute information corresponding to each target shape picture respectively and carrying out displacement deflection with splitting animation effect on each target shape picture according to the offset attribute information.
8. The apparatus of claim 7, wherein the split animation module comprises:
a vertex coordinate determining unit, configured to determine first vertex information of each row in the target vertex matrix as first vertex coordinate information, and determine remaining vertex information in the target vertex matrix as second vertex coordinate information, where the remaining vertex information includes vertex information in the target vertex matrix except the first vertex information of each row;
an offset attribute setting unit configured to set offset attribute information for the target shape picture in each line, respectively, based on the first vertex coordinate information and the second vertex coordinate information of each line.
9. A picture splitting apparatus, comprising: a processor and a memory;
the processor is coupled to a memory, wherein the memory is configured to store program code and the processor is configured to invoke the program code to perform the method of any of claims 1-6.
10. A computer-readable storage medium, in which a computer program is stored which is adapted to be loaded and executed by a processor to cause a computer device having said processor to carry out the method of any one of claims 1 to 6.
CN201811645350.8A 2018-12-29 2018-12-29 Picture splitting method and device Active CN111383310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811645350.8A CN111383310B (en) 2018-12-29 2018-12-29 Picture splitting method and device

Publications (2)

Publication Number Publication Date
CN111383310A CN111383310A (en) 2020-07-07
CN111383310B (en) 2022-02-11

Family

ID=71214925

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014063371A (en) * 2012-09-21 2014-04-10 Casio Comput Co Ltd Moving picture reproducing apparatus, moving picture reproducing method, and program
CN103473799A (en) * 2013-09-02 2013-12-25 腾讯科技(深圳)有限公司 Picture dynamic processing method, device and terminal equipment
CN104123742A (en) * 2014-07-21 2014-10-29 徐才 Method and player for translating static cartoon picture into two dimensional animation
CN104978124A (en) * 2015-06-30 2015-10-14 广东欧珀移动通信有限公司 Picture display method for terminal and terminal
CN107945253A (en) * 2017-11-21 2018-04-20 腾讯数码(天津)有限公司 A kind of animation effect implementation method, device and storage device
CN108364337A (en) * 2018-01-31 2018-08-03 北京车和家信息技术有限公司 The animated show method and device of image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant