CN109146770A - Deformation image generation method and apparatus, electronic device, and computer-readable storage medium
- Publication number: CN109146770A
- Application number: CN201810838408.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
Abstract
The present disclosure discloses a deformation image generation method and apparatus, an electronic device, and a computer-readable storage medium. The deformation image generation method includes: setting the type of deformation in response to a received type setting command; setting the action range of the deformation in response to a received range setting command; and processing an image acquired by an image sensor according to the setting results of the setting commands to generate a deformation image. By adopting this technical solution, the embodiments of the present disclosure solve the technical problem that, in the prior art, images can only be deformed with preset effects, and improve the flexibility of generating deformation images.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for generating a deformation image, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals have found an ever-wider range of applications: they can be used, for example, to listen to music, play games, chat online, and take pictures. As for their photographing capability, the cameras of intelligent terminals now exceed ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when photographing with an intelligent terminal, a user can not only obtain traditional photographing effects with the camera software built into the terminal at the factory, but can also download application programs (APPs) from the network to obtain additional functions, such as dark-light detection, beauty camera, and super-pixel effects.
However, current face image deformation functions only offer a set of preset deformation effects; a user can only select a deformation effect directly and cannot flexibly edit it.
Disclosure of Invention
In view of this, the present disclosure provides a method for generating a deformation image, which is used to define and edit a deformation effect of an image and generate the deformation image.
In a first aspect, an embodiment of the present disclosure provides a method for generating a deformed image, including: setting the type of the deformation in response to the received type setting command; setting an action range of the deformation in response to the received range setting command; and processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image.
Optionally, the setting of the type of the deformation includes: setting a type parameter of the deformation and a degree parameter of the deformation.
Optionally, the setting of the action range of the deformation includes: setting a shape of the action range, wherein the shape is described by a plurality of parameters; the plurality of parameters includes at least a shape type parameter, a center point position parameter, and a length parameter.
Optionally, processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image includes: calling a deformation algorithm corresponding to the set deformation, and performing deformation processing on the pixel points of the image within the action range to obtain the deformation image.
Optionally, after setting the action range of the deformation, the method further includes: displaying a standard image on the display device, and displaying the action range on the standard image.
Optionally, after the action range is displayed on the standard image, the method further includes: processing the standard image according to the setting result of the setting command to generate a deformation image of the standard image.
Optionally, processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image includes: acquiring feature points of the standard image, and fixing the position of the action range in the standard image through the feature points; identifying a first image corresponding to the standard image from the images acquired through the image sensor; mapping the fixed position in the standard image into the first image; and performing deformation processing on the first image to generate the deformation image.
Optionally, the action range includes a center point position parameter representing the position of the action range; the center point position P of the action range is described by 3 feature points A, B, C and 2 linear interpolation coefficients λ1 and λ2, specifically: P is located in the triangle formed by A, B and C, and D is the intersection of the extension of line segment AP with line segment BC, where λ1 = BD/BC and λ2 = AP/AD, and BD, BC, AP and AD each denote the length of the corresponding line segment.
Optionally, the action range includes a length parameter representing an axial length of the action range; the length parameter R of the action range is described by 2 feature points E, F and a length coefficient S, specifically: R = EF × S, where EF is the distance between points E and F, and the length coefficient S is calculated from EF and R as S = R / EF.
Optionally, the action range includes an angle parameter representing a rotation angle of the action range; the angle parameter of the action range is described by 2 feature points G, H, specifically: the vector GH from point G to point H is used as the reference direction for the orientation of the action range.
Optionally, before generating the deformation image, the method further includes: setting, in response to a received deformation amplitude setting command, the deformation amplitudes in the X-axis direction and the Y-axis direction of the deformation action range.
In a second aspect, an embodiment of the present disclosure provides a deformed image generating apparatus, including: the deformation type setting module is used for responding to the received type setting command and setting the type of deformation; the range setting module is used for responding to the received range setting command and setting the action range of the deformation; and the deformation execution module is used for processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image.
Optionally, the deformation type setting module is configured to set a type parameter of the deformation and a degree parameter of the deformation.
Optionally, the range setting module is configured to set a shape of the action range, wherein the shape is described by a plurality of parameters; the plurality of parameters includes at least a shape type parameter, a center point position parameter, and a length parameter.
Optionally, the deformation execution module is configured to call a deformation algorithm corresponding to the set deformation and to perform deformation processing on the pixel points of the image within the action range to obtain the deformation image.
Optionally, the deformation image generating apparatus further includes: a display module configured to display the action range on a standard image.
Optionally, the deformation execution module is configured to acquire feature points of the standard image and fix the position of the action range in the standard image through the feature points; identify a first image corresponding to the standard image from the images acquired through the image sensor; map the fixed position in the standard image into the first image; and perform deformation processing on the first image to generate the deformation image.
Optionally, the center point position P of the action range is described by 3 feature points A, B, C and 2 linear interpolation coefficients λ1 and λ2, specifically: P is located in the triangle formed by A, B and C, and D is the intersection of the extension of line segment AP with line segment BC, where λ1 = BD/BC and λ2 = AP/AD, and BD, BC, AP and AD each denote the length of the corresponding line segment.
Optionally, the length parameter R of the action range is described by 2 feature points E, F and a length coefficient S, specifically: R = EF × S, where EF is the distance between points E and F, and the length coefficient S is calculated from EF and R as S = R / EF.
Optionally, the action range further includes an angle parameter representing a rotation angle of the action range; the angle parameter of the action range is described by 2 feature points G, H, specifically: the vector GH from point G to point H is used as the reference direction for the orientation of the action range.
Optionally, the deformation image generating apparatus further includes: an amplitude setting module configured to set, in response to the received deformation amplitude setting command, the deformation amplitudes in the X-axis direction and the Y-axis direction of the deformation action range.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the deformed image generating methods of the first aspect.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute any of the deformation image generation methods in the first aspect.
The embodiment of the disclosure provides a deformation image generation method and device, electronic equipment and a computer-readable storage medium. The deformation image generation method comprises the following steps: setting the type of the deformation in response to the received type setting command; setting an action range of the deformation in response to the received range setting command; and processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image. By adopting the technical scheme, the technical problem that the image can only be deformed by adopting the preset effect in the prior art is solved, and the flexibility of generating the deformed image is improved.
The foregoing is a summary of the present disclosure, provided to promote a clear understanding of its technical means; the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present disclosure, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a first embodiment of a deformation image generation method provided in an embodiment of the present disclosure.
Fig. 2a is a flowchart of a second embodiment of a deformation image generation method provided in the embodiment of the present disclosure.
Fig. 2b is a schematic diagram of a standard image in the second embodiment of the deformation image generation method provided in an embodiment of the present disclosure.
Fig. 2c is a schematic diagram of a deformed image of the standard image in the second embodiment of the deformation image generation method provided in an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of the position of the center point of an action range in the deformation image generation method provided in an embodiment of the present disclosure.
Fig. 4 is a flowchart of a third embodiment of the deformation image generation method provided in an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a first embodiment of a deformed image generating apparatus provided in an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of a second embodiment of a deformed image generating apparatus according to an embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of a third embodiment of a deformed image generating apparatus according to an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of a computer-readable storage medium provided according to an embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of a deformed image generating terminal according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of a first embodiment of a method for generating a deformed image according to the present disclosure, where the method for generating a deformed image according to this embodiment may be executed by a deformed image generating apparatus, where the deformed image generating apparatus may be implemented as software, or implemented as a combination of software and hardware, and the deformed image generating apparatus may be integrated in a certain device in an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the method comprises the steps of:
s101, responding to the received type setting command, and setting the type of the deformation;
at the terminal equipment, a user can set the deformation through a human-computer interface, and at the moment, the terminal equipment receives a type setting command input by the user and sets the type of the deformation; the deformation type setting command may include a deformation type parameter, where the deformation type parameter specifies a type of deformation, and the deformation type may include zoom-in, zoom-out, translation, rotation, drag, and the like. The deformation type setting command may also include two parameters, namely a deformation type parameter and a deformation degree parameter, where the deformation type parameter is the same as the deformation type parameter, the deformation degree parameter specifies the degree of deformation, and the degree of deformation may be, for example, a magnification factor, a translation distance, a rotation angle, a dragging distance, and the like. When the deformation type is translation, the deformation degree parameter comprises the position of the target point and the amplitude of translation from the center point to the target point, wherein the amplitude can be a negative number and represents translation in the opposite direction; the deformation degree parameter can further comprise a translation attenuation coefficient, and the larger the translation attenuation coefficient is, the smaller the attenuation of the translation amplitude in the direction far away from the central point is. The deformation type also includes a special deformation type: flexible enlargement/reduction, and can freely adjust the image deformation degree of the image position with unnecessary distance from the deformation area to the central point.
S102, responding to the received range setting command, and setting the action range of deformation;
at the terminal equipment, a user can set the deformation action range through a human-computer interface, and at the moment, the terminal equipment receives a range setting command input by the user and sets the deformation action range; generally, the scope of action includes the shape of the scope of action, such as a circle, ellipse, rectangle, etc., as well as the size and location of the scope of action, such as the radius of the circle and the location of the center of the circle; the length, the rotation angle and the central position of the ellipse of the major and minor axes of the ellipse; the side length and the rotation angle of the rectangle and the position of the center of the rectangle; the setting command comprises a plurality of action range parameters to describe the shape, the plurality of action range parameters comprise a shape type parameter, a position coordinate parameter of a shape central point and a length parameter of the shape in the axial direction of the screen, and for some shapes such as an ellipse and a rectangle, the shape parameters also comprise a rotation angle parameter which defines the rotation angle of the shape relative to the axial direction of the screen; the length parameter of the shape in the screen axial direction comprises the length of the action range in the screen X-axis direction and/or the length of the action range in the screen Y-axis direction.
In one embodiment, the parameters may further include max and min, where max determines an outer boundary and min determines an inner boundary; the inner and outer boundaries form an annular region within the action range, and the image is deformed only within this annular region. Here max is greater than min, with 0 ≤ max ≤ 1 and 0 ≤ min ≤ 1; generally, max and min express, as proportions of the length parameter of the action range, the sizes of the outer and inner boundaries respectively. For example, max = 1 means the outer boundary of the annular region coincides with the boundary of the action range. It is understood that the ring may be circular, elliptical, rectangular, and so on.
In one embodiment, the parameters further include signX and signY, as follows: when signX = -1, only the half of the action range on the negative X half-axis is deformed; when signX = 1, only the half on the positive X half-axis is deformed; when signX = 0, no restriction is applied in the X-axis direction. Likewise, when signY = -1, only the half of the action range on the negative Y half-axis is deformed; when signY = 1, only the half on the positive Y half-axis is deformed; when signY = 0, no restriction is applied in the Y-axis direction. For example, when signX = 1 and signY = 1, only the image within the first-quadrant portion of the action range is deformed, and so on, which is not repeated here.
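A minimal sketch of how these range parameters could gate the deformation, assuming an elliptical action range normalized so that 1.0 is its boundary; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def range_mask(h, w, cx, cy, rx, ry, max_c=1.0, min_c=0.0, sign_x=0, sign_y=0):
    """Boolean mask of pixels inside an elliptical action range, restricted to
    the [min_c, max_c] annulus and to the selected half-axes."""
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized elliptical distance from the center (1.0 = range boundary).
    d = np.sqrt(((xs - cx) / rx) ** 2 + ((ys - cy) / ry) ** 2)
    mask = (d >= min_c) & (d <= max_c)        # annular region between min and max
    if sign_x:                                # keep only one X half-axis
        mask &= (sign_x * (xs - cx)) >= 0
    if sign_y:                                # keep only one Y half-axis
        mask &= (sign_y * (ys - cy)) >= 0
    return mask
```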
S103, processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image.
Different deformation types call different deformation processing methods. After the type and action range of the deformation have been determined, the corresponding deformation processing method is called to deform the image within the action range, where the image is a real-time image acquired by the image sensor and may be a picture or a video; a deformation image is finally generated. In a specific implementation, a different deformation algorithm is preset for each deformation type; the parameters carried in the commands received in steps S101 and S102 configure the algorithm, which deforms the pixel points of the image within the action range to obtain the final deformation image.
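The dispatch described above could look roughly as follows, assuming per-type warp functions that return source coordinates for each output pixel (an inverse-mapping formulation); this is a sketch under those assumptions, not the patent's implementation:

```python
import numpy as np
import cv2  # OpenCV, assumed available for remapping

def apply_deformation(image, cmd, mask, warp_fns):
    """Warp the pixels selected by `mask` with the algorithm registered for
    cmd.deform_type; pixels outside the mask are left untouched."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # The per-type function returns source coordinates for every output pixel.
    src_x, src_y = warp_fns[cmd.deform_type](xs, ys, cmd)
    map_x = np.where(mask, src_x, xs).astype(np.float32)
    map_y = np.where(mask, src_y, ys).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Inverse mapping is used here because cv2.remap samples the source image at the coordinates supplied for every output pixel.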
The core idea of this embodiment is that the deformation is controlled through two settings: the deformation type and the deformation action range. With this deformation image generation method, the deformation result can be controlled more flexibly.
Fig. 2a is a flowchart of a second embodiment of a method for generating a deformed image according to an embodiment of the present disclosure, and as shown in fig. 2a, the method may include the following steps:
s201, responding to the received type setting command, and setting the type of the deformation;
s202, responding to the received range setting command, and setting the action range of deformation;
s203, displaying a standard image on a display device, and displaying the action range on the standard image;
and S204, processing the image acquired by the image sensor according to the setting result of the setting command to generate a deformation image.
As shown in fig. 2b and 2c, in this embodiment, to make it easier for the user to observe and adjust the action range of the deformation, step S203 is added after step S202: a standard image is displayed on the display device, and the action range is displayed on the standard image. In one embodiment, the standard image is a face image and the action range is an elliptical region; the user may set the major axis, minor axis, and rotation angle of the ellipse. The elliptical region may be displayed at a default position on the face image, and the user can then change the position of the action range by dragging the center point of the ellipse, for example dragging the ellipse from the nose region to the eye region, after which the corresponding deformation processing is applied to the eye image, as in the action range and deformation effect shown in fig. 2c.
Displaying the action range on the standard image makes it convenient for the user to preview the expected effect of the deformation and to adjust its parameters accordingly. In one embodiment, the relative position of the action range is determined by feature points, such as the human face feature points shown in fig. 2b. Feature points of an image are points with distinctive characteristics that effectively reflect the essential features of the image and identify its target objects: if the target object is a human face, the key points of the face are acquired; if the target image is a house, the key points of the house are acquired.

Taking a human face as an example, the facial contour mainly consists of 5 parts (eyebrows, eyes, nose, mouth, and cheeks), sometimes also including the pupils and nostrils. A complete description of the facial contour generally requires about 60 key points. If only the basic structure is to be described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils, nostrils, or finer details of the facial features need to be described, the number can be increased. Extracting face key points on the image means finding, for each facial contour key point, its corresponding position coordinates in the face image, i.e. key point localization. This process is based on the features corresponding to the key points: once image features that clearly identify the key points have been obtained, the image is searched and compared against these features to accurately localize the key point positions on the image.

Since feature points occupy only a very small area in an image (usually only a few to a few dozen pixels), the region occupied by their corresponding features is likewise limited and local. Two types of feature extraction methods are currently used: (1) extracting one-dimensional range image features perpendicular to the contour; (2) extracting two-dimensional range image features from the square neighborhood of the feature point. There are many ways to implement these two methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods. The number, accuracy, and speed of the key points differ between these implementations, making them suitable for different application scenarios.
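For illustration, facial key points of the kind described above can be obtained with an off-the-shelf landmark detector. The sketch below uses dlib's 68-point shape predictor as one possible implementation; the model file path is an assumption, and this is not the detection method claimed by the patent:

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point model, distributed separately by dlib (path is an assumption).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_feature_points(gray_image):
    """Return a list of (x, y) key points for the first detected face."""
    faces = detector(gray_image)
    if not faces:
        return []
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```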
Because the deformation applied to the standard image must be mapped onto the deformation of the image acquired by the image sensor, the deformation modes can be divided, according to the mapping used, into fixed deformation and tracking deformation.

In one embodiment, fixed deformation is used. It is relatively simple: only the absolute position of the whole deformation range within the image sensor needs to be set. One implementation maps the pixel points of the display device one-to-one onto those of the image acquisition window of the image sensor, determines the position of the deformation range on the display device, and then applies the corresponding deformation to the corresponding position of the image acquired through the acquisition window. This mode of deformation processing has the advantage of being simple and easy to operate.

In another embodiment, when generating the deformed image, the feature points of the standard image from step S203 are first acquired, and the position of the action range within the standard image is determined through the feature points; a first image corresponding to the standard image is identified from the images acquired by the image sensor; the determined position in the standard image is mapped into the first image; and the first image is deformed to generate the deformed image. In this mode the relative position of the deformation range within the target image is determined, and no matter how the target image moves or changes, the deformation range always stays at that relative position, achieving tracking deformation. In a typical application, the standard image is a standard face image serving as a reference, triangulated with 106 feature points. When the action range is set, its relative position within the face image is determined from the deformation range and the relative positions of the feature points; the same triangulation is applied to the face image acquired by the camera, so that when the face in the camera moves or rotates, the deformation stays fixed at the same relative position on the face, achieving the tracking effect.
For the tracking deformation mode in the above embodiment, the standard image and the acquired image use the same set of feature points with the same numbering: for example, feature point No. 6 in the standard image and feature point No. 6 on the image acquired by the image sensor occupy the same position on the standard image and on the acquired image, respectively. Since the center point of the action range is always located inside some triangle, its position can be determined using 3 feature points. As shown in fig. 3, A, B and C are feature points in the image, P is the position of the center point of the action range, and D is the intersection of the extension of line segment AP with line segment BC. Let the coordinates of point A be (Xa, Ya), of point B (Xb, Yb), of point C (Xc, Yc), of point D (Xd, Yd), and of point P (Xp, Yp). Then 2 linear interpolation coefficients can be obtained: λ1 = BD / BC and λ2 = AP / AD, where BD, BC, AP and AD each denote the length of the corresponding line segment.

In the standard image, the coordinates of A, B, C and P are all known values, so λ1 and λ2 can be calculated from BD, BC, AP and AD. When the image acquired by the image sensor is to be deformed, the action range of the deformation must first be determined. The feature points A1, B1 and C1, numbered identically to A, B and C, can be used; their coordinates in the image sensor coordinate system are known values, so from the coordinates of A1, B1 and C1 and the linear interpolation coefficients λ1 and λ2, the coordinates of the point P1 corresponding to P can be calculated in the image sensor coordinate system. It should be noted that if A, B and C are numbered identically, then A, B and C coincide and P is located at point A, so λ1 and λ2 need not be calculated; when B and C are numbered identically, B, C and D coincide, the triangle degenerates into a line segment, and the position of P is determined by λ2 alone, without calculating λ1; when A and B, or A and C, coincide, the order of A, B and C can be swapped, reducing to the B-C case. Recording the relative position between the deformation center point and three feature points with linear interpolation coefficients, and using them to compute the position of P mapped into the image sensor coordinate system, is only an example; in practice, any method of recording the position of the deformation center point by means of feature points can be applied in the present disclosure. When the deformation type is translation or drag, this scheme can also describe the positions of the translation or drag target points: the position of the target point is described in the standard image by the same method, and when mapped into the image acquired by the image sensor it determines the target point position in the actual image. When the deformation type is translation, two translation modes can also be preset: for example, when the number of point A is less than 0, the target point is located at the farthest end of the action range in the positive X-axis direction; when the number of point A is greater than or equal to 0 and the number of point B is less than 0, the target point is located at the farthest end of the action range in the positive Y-axis direction.
In addition to the position of the center point, the length R of the action range must be recorded; it can be described by two feature points E, F and a length coefficient S. Specifically, R is a known value in the standard image and EF is the distance between feature points E and F, from which the length coefficient can be calculated as S = R / EF. The feature points E1 and F1 corresponding to E and F have known coordinates in the image sensor coordinate system, so the length R1 of the action range in the sensor coordinate system can be calculated as R1 = E1F1 × S. The action ranges of some shapes have several lengths, for example an ellipse has a major and a minor axis and a rectangle has a length and a width; in that case a separate length coefficient S can be calculated for each length to record its relative value.
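The length coefficient works the same way; a sketch with illustrative names:

```python
import numpy as np

def length_coeff(E, F, R):
    """Length coefficient S = R / EF recorded on the standard image."""
    return R / np.linalg.norm(np.asarray(F, dtype=float) - np.asarray(E, dtype=float))

def map_length(E1, F1, S):
    """Length of the action range in the sensor image: R1 = E1F1 * S."""
    return np.linalg.norm(np.asarray(F1, dtype=float) - np.asarray(E1, dtype=float)) * S
```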
The user can set whether the action range rotates along with the rotation of the target image. If it does not rotate, the x-radius direction of the action range simply remains parallel to the width (horizontal) direction of the screen, and the y-radius direction correspondingly remains parallel to the height (vertical) direction of the screen. When the action range must rotate with the target image, its angle has to be determined; this angle indicates how much the action range must rotate to remain stationary relative to the target image. The angle can be determined from the direction of the vector GH formed by two feature points G and H: for example, the angle between the X axis of the action range and GH is kept at a fixed value, so that when the target image acquired by the image sensor rotates, the action range rotates correspondingly to maintain the fixed angle with GH.
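A sketch of the angle tracking using atan2 on the G-to-H vector (names illustrative):

```python
import math

def range_rotation(G, H, G1, H1, base_angle):
    """Rotate the action range so that its angle to the G->H vector stays fixed:
    new_angle = base_angle + (direction of G1->H1) - (direction of G->H)."""
    ref = math.atan2(H[1] - G[1], H[0] - G[0])      # on the standard image
    cur = math.atan2(H1[1] - G1[1], H1[0] - G1[0])  # on the sensor image
    return base_angle + (cur - ref)
```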
In each of the cases above, the parameters of the action range are recorded as relative values with respect to the feature points, from which the actual parameters of the action range in the image sensor coordinate system are calculated.
Fig. 4 is a flowchart of a third embodiment of a method for generating a deformation image according to an embodiment of the present disclosure, in which a deformation amplitude setting step is further added on the basis of the first embodiment. As shown in fig. 4, the following steps may be included:
s401, responding to the received type setting command, and setting the type of the deformation;
s402, responding to the received range setting command, and setting the action range of deformation;
s403, in response to receiving the deformation amplitude setting command, setting deformation amplitudes in the X-axis direction and the Y-axis direction of the deformation action range;
and S404, processing the image collected by the image sensor according to the setting result of the setting command to generate a deformation image.
This embodiment also includes a deformation amplitude setting command, which sets the deformation amplitude of the action range in the X-axis direction and in the Y-axis direction. These parameters can be set for deformation types such as magnification, reduction, and rotation. Taking magnification as an example, the user can set the image to be magnified 3 times in the X-axis direction and 2 times in the Y-axis direction, with the magnification in the region between the X axis and the Y axis blended by a gradient. The deformation amplitude can be negative, in which case it represents the inverse deformation of the given type; for magnification, for example, a negative amplitude is equivalent to a reduction. In this way, more diverse deformation processing effects can be obtained.
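One way to realize the angular gradient between the X-axis and Y-axis amplitudes is a cosine-squared blend; this particular blend is an assumption for illustration, not a formula given by the patent:

```python
import numpy as np

def scale_factors(xs, ys, cx, cy, amp_x, amp_y):
    """Per-pixel amplitude blended between the X-axis value and the Y-axis
    value according to the angle around the center (cx, cy)."""
    theta = np.arctan2(ys - cy, xs - cx)
    w = np.cos(theta) ** 2                 # 1 on the X axis, 0 on the Y axis
    return amp_x * w + amp_y * (1.0 - w)   # negative amplitudes invert the deformation
```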
Fig. 5 is a schematic structural diagram of a first embodiment of a deformed image generating apparatus provided in an embodiment of the present disclosure, and as shown in fig. 5, the apparatus includes: a deformation type setting module 51, a range setting module 52, and a deformation execution module 53.
A type-of-deformation setting module 51 for setting a type of deformation in response to the received type setting command;
a range setting module 52, configured to set an action range of the deformation in response to the received range setting command;
and a deformation execution module 53, configured to process the image acquired by the image sensor according to the setting result of the setting command, and generate a deformation image.
The apparatus shown in fig. 5 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Fig. 6 is a schematic structural diagram of a second embodiment of a deformed image generating apparatus provided in an embodiment of the present disclosure. As shown in fig. 6, on the basis of the embodiment shown in fig. 5, the apparatus further includes: a display module 61.
A type-of-deformation setting module 51 for setting a type of deformation in response to the received type setting command;
a range setting module 52, configured to set an action range of the deformation in response to the received range setting command;
the display module 61 is used for displaying a standard image on a display device and displaying the action range on the standard image;
and a deformation execution module 53, configured to process the image acquired by the image sensor according to the setting result of the setting command, and generate a deformation image.
The apparatus shown in fig. 6 can perform the method of the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
Fig. 7 is a schematic structural diagram of a third embodiment of a deformed image generating apparatus according to an embodiment of the present disclosure. As shown in fig. 7, on the basis of the embodiment shown in fig. 5, the apparatus further includes: an amplitude setting module 71.
A type-of-deformation setting module 51 for setting a type of deformation in response to the received type setting command;
a range setting module 52, configured to set an action range of the deformation in response to the received range setting command;
the amplitude setting module 71 is configured to set the deformation amplitudes in the X-axis direction and the Y-axis direction of the deformation action range;
and a deformation execution module 53, configured to process the image acquired by the image sensor according to the setting result of the setting command, and generate a deformation image.
The apparatus shown in fig. 7 can perform the method of the embodiment shown in fig. 4, and for the parts of this embodiment that are not described in detail, reference may be made to the related description of the embodiment shown in fig. 4. For the implementation process and technical effect of this technical solution, refer to the description of the embodiment shown in fig. 4, which is not repeated here.
Fig. 8 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in fig. 8, an electronic device 80 according to an embodiment of the present disclosure includes a memory 81 and a processor 82.
The memory 81 is used to store non-transitory computer readable instructions. In particular, memory 81 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 82 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 80 to perform desired functions. In one embodiment of the present disclosure, the processor 82 is configured to execute the computer readable instructions stored in the memory 81, so that the electronic device 80 performs all or part of the aforementioned steps of the deformation image generation method according to the embodiments of the present disclosure.
Those skilled in the art should understand that, to solve the technical problem of obtaining a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also fall within the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 9 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 9, a computer-readable storage medium 90 according to an embodiment of the disclosure has non-transitory computer-readable instructions 91 stored thereon. When executed by a processor, the non-transitory computer readable instructions 91 perform all or part of the steps of the deformed image generation method of the embodiments of the present disclosure as described above.
The computer-readable storage medium 90 includes, but is not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable disks), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROMs (e.g., ROM cartridges).
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 10 is a diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in fig. 10, the morphing image generation terminal 100 includes the above-described morphing image generation apparatus embodiment.
The terminal device may be implemented in various forms, and the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation apparatus, a vehicle-mounted terminal device, a vehicle-mounted display terminal, a vehicle-mounted electronic rear view mirror, and the like, and fixed terminal devices such as a digital TV, a desktop computer, and the like.
The terminal may also include other components as equivalent alternative embodiments. As shown in fig. 10, the morphing image generating terminal 100 may include a power supply unit 101, a wireless communication unit 102, an a/V (audio/video) input unit 103, a user input unit 104, a sensing unit 105, an interface unit 106, a controller 107, an output unit 108, and a storage unit 109, and the like. Fig. 10 illustrates a terminal having various components, but it is to be understood that not all illustrated components are required to be implemented, and that more or fewer components can alternatively be implemented.
The wireless communication unit 102 allows, among other things, radio communication between the terminal 100 and a wireless communication system or network. The a/V input unit 103 is used to receive audio or video signals. The user input unit 104 may generate key input data to control various operations of the terminal device according to a command input by a user. The sensing unit 105 detects a current state of the terminal 100, a position of the terminal 100, presence or absence of a touch input of the user to the terminal 100, an orientation of the terminal 100, acceleration or deceleration movement and direction of the terminal 100, and the like, and generates a command or signal for controlling an operation of the terminal 100. The interface unit 106 serves as an interface through which at least one external device is connected to the terminal 100. The output unit 108 is configured to provide output signals in a visual, audio, and/or tactile manner. The storage unit 109 may store software programs or the like for processing and control operations performed by the controller 107, or may temporarily store data that has been output or is to be output. The storage unit 109 may include at least one type of storage medium. Also, the terminal 100 may cooperate with a network storage device that performs a storage function of the storage unit 109 through a network connection. The controller 107 generally controls the overall operation of the terminal device. In addition, the controller 107 may include a multimedia module for reproducing or playing back multimedia data. The controller 107 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image. The power supply unit 101 receives external power or internal power and supplies appropriate power required to operate the respective elements and components under the control of the controller 107.
Various embodiments of the morphed image generation methods presented in this disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For a hardware implementation, various embodiments of the deformation image generation method proposed by the present disclosure may be implemented by using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein, and in some cases, various embodiments of the deformation image generation method proposed by the present disclosure may be implemented in the controller 107. For software implementation, various embodiments of the morphing image generation method presented in the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory unit 109 and executed by the controller 107.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", and "having" are open-ended words meaning "including, but not limited to", and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as, but not limited to".
Also, as used herein, "or" used in a list of items beginning with "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (14)
1. A deformed image generation method, comprising:
setting a type of deformation in response to a received type setting command;
setting an action range of the deformation in response to a received range setting command; and
processing an image acquired by an image sensor according to the setting results of the setting commands to generate a deformed image.
2. The deformed image generation method according to claim 1, wherein setting the type of deformation comprises:
setting a type parameter of the deformation and a degree parameter of the deformation.
3. The deformed image generation method according to claim 1, wherein setting the action range of the deformation comprises:
setting a shape of the action range, wherein the shape is described by a plurality of parameters;
the plurality of parameters comprising at least a shape type parameter, a center point position parameter, and a length parameter.
4. The deformed image generation method according to claim 1, wherein processing the image acquired by the image sensor according to the setting results of the setting commands to generate a deformed image comprises:
invoking a deformation algorithm corresponding to the set deformation, and performing deformation processing on the pixel points of the image that lie within the action range to obtain the deformed image.
5. The deformed image generation method according to claim 1, further comprising, after setting the action range of the deformation:
displaying a standard image on a display device, and displaying the action range on the standard image.
6. The deformed image generation method according to claim 5, further comprising, after displaying the action range on the standard image:
processing the standard image according to the setting results of the setting commands to generate a deformed image of the standard image.
7. The deformed image generation method according to claim 6, wherein processing the image acquired by the image sensor according to the setting results of the setting commands to generate a deformed image comprises:
acquiring feature points of the standard image, and fixing the position of the action range in the standard image by means of the feature points;
identifying, from the images acquired by the image sensor, a first image corresponding to the standard image;
mapping the fixed position in the standard image into the first image; and
performing deformation processing on the first image to generate the deformed image.
8. The deformed image generation method according to claim 6, wherein:
the action range includes a center point position parameter indicating the position of the action range;
the center point position P of the action range is described by three feature points A, B, C and two linear interpolation coefficients λ1 and λ2, specifically:
P is located inside the triangle formed by A, B, and C, and D is the intersection of the extension of line segment AP with line segment BC, where λ1 = BD/BC and λ2 = AP/AD, and where BD, BC, AP, and AD each denote the length of the corresponding line segment.
9. The deformed image generation method according to claim 6, wherein:
the action range includes a length parameter indicating an axial length of the action range;
the length parameter R of the action range is described by two feature points E, F and a length coefficient S, specifically:
R = EF × S,
where EF is the distance between points E and F, and the length coefficient S is determined from EF and R.
10. The deformed image generation method according to claim 6, wherein:
the action range includes an angle parameter representing a rotation angle of the action range;
the angle parameter of the action range is described by two feature points G, H, specifically:
the vector GH, pointing from G to H, is used as the reference direction for the orientation of the action range.
11. The deformed image generation method according to claim 1, further comprising, before generating the deformed image:
setting deformation amplitudes in the X-axis and Y-axis directions of the deformation action range in response to a received deformation amplitude setting command.
12. A deformed image generation apparatus, comprising:
a deformation type setting module configured to set a type of deformation in response to a received type setting command;
a range setting module configured to set an action range of the deformation in response to a received range setting command; and
a deformation execution module configured to process an image acquired by an image sensor according to the setting results of the setting commands to generate a deformed image.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the deformed image generation method of any one of claims 1-10.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the deformed image generation method of any one of claims 1-10.
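For illustration only, and not part of the claimed subject matter: the following is a minimal Python sketch of the action-range geometry in claims 8-10. The function names and the Point type are hypothetical, and λ1 and λ2 are read as the ratios BD/BC and AP/AD, per the reconstruction in claim 8 above.

```python
import math

Point = tuple[float, float]  # hypothetical 2D feature-point type

def center_point(a: Point, b: Point, c: Point,
                 lam1: float, lam2: float) -> Point:
    """Claim 8: reconstruct P from feature points A, B, C and
    coefficients lam1, lam2. D lies on segment BC with BD/BC = lam1;
    P lies on segment AD with AP/AD = lam2, inside triangle ABC."""
    d = (b[0] + lam1 * (c[0] - b[0]), b[1] + lam1 * (c[1] - b[1]))
    return (a[0] + lam2 * (d[0] - a[0]), a[1] + lam2 * (d[1] - a[1]))

def length_param(e: Point, f: Point, s: float) -> float:
    """Claim 9: R = EF * S, where EF is the distance between E and F."""
    return math.hypot(f[0] - e[0], f[1] - e[1]) * s

def reference_angle(g: Point, h: Point) -> float:
    """Claim 10: angle of vector GH, the reference direction of the
    action range, returned in degrees (unit is an assumption)."""
    return math.degrees(math.atan2(h[1] - g[1], h[0] - g[0]))

# Example: with A=(0,0), B=(1,0), C=(0,1) and lam1 = lam2 = 0.5,
# D = (0.5, 0.5) and P = (0.25, 0.25), inside the triangle as required.
if __name__ == "__main__":
    print(center_point((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), 0.5, 0.5))
    print(length_param((0.0, 0.0), (3.0, 4.0), 0.8))  # 5.0 * 0.8 = 4.0
    print(reference_angle((0.0, 0.0), (1.0, 1.0)))    # 45.0 degrees
```

Describing the action range through feature-point ratios rather than absolute pixel coordinates is what lets the fixed position of claim 7 be mapped from the standard image into each captured frame: when the feature points move, the center, length, and direction of the range move with them.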
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810838408.4A CN109146770A (en) | 2018-07-27 | 2018-07-27 | A kind of strain image generation method, device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109146770A (en) | 2019-01-04 |
Family
ID=64798165
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810838408.4A (pending; published as CN109146770A) | A kind of strain image generation method, device, electronic equipment and computer readable storage medium | 2018-07-27 | 2018-07-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109146770A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112767235A (en) * | 2019-11-06 | 2021-05-07 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer-readable storage medium and computer equipment |
CN111079588B (en) * | 2019-12-03 | 2021-09-10 | 北京字节跳动网络技术有限公司 | Image processing method, device and storage medium |
CN117726499A (en) * | 2023-05-29 | 2024-03-19 | 荣耀终端有限公司 | Image deformation processing method, electronic device, and computer-readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102096900A (en) * | 2007-08-30 | 2011-06-15 | 精工爱普生株式会社 | Image processing device, image processing method, and image processing program |
CN103824253A (en) * | 2014-02-19 | 2014-05-28 | 中山大学 | Figure five sense organ deformation method based on image local precise deformation |
CN107102803A (en) * | 2017-04-27 | 2017-08-29 | 努比亚技术有限公司 | A kind of image display method, equipment and computer-readable recording medium |
CN108280883A (en) * | 2018-02-07 | 2018-07-13 | 北京市商汤科技开发有限公司 | It deforms the generation of special efficacy program file packet and deforms special efficacy generation method and device |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
TWI785258B (en) | Method, device, and storage medium for processing human face image | |
CN108986016B (en) | Image beautifying method and device and electronic equipment | |
CN109063560B (en) | Image processing method, image processing device, computer-readable storage medium and terminal | |
WO2020019663A1 (en) | Face-based special effect generation method and apparatus, and electronic device | |
CN109003224B (en) | Face-based deformation image generation method and device | |
WO2020029554A1 (en) | Augmented reality multi-plane model animation interaction method and device, apparatus, and storage medium | |
CN108921856B (en) | Image cropping method and device, electronic equipment and computer readable storage medium | |
WO2019242271A1 (en) | Image warping method and apparatus, and electronic device | |
CN110072046B (en) | Image synthesis method and device | |
CN109064387A (en) | Image special effect generation method, device and electronic equipment | |
JP7383714B2 (en) | Image processing method and device for animal faces | |
CN104583902A (en) | Improved identification of a gesture | |
CN108921798B (en) | Image processing method and device and electronic equipment | |
CN111833461B (en) | Method and device for realizing special effect of image, electronic equipment and storage medium | |
CN109146770A (en) | A kind of strain image generation method, device, electronic equipment and computer readable storage medium | |
CN108961314B (en) | Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium | |
WO2022121577A1 (en) | Image processing method and apparatus | |
CN111199169A (en) | Image processing method and device | |
WO2020155984A1 (en) | Facial expression image processing method and apparatus, and electronic device | |
CN108898551B (en) | Image merging method and device | |
CN110069126A (en) | The control method and device of virtual objects | |
CN113610864B (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN108989681A (en) | Panorama image generation method and device | |
WO2020155981A1 (en) | Emoticon effect generating method and device and electronic device | |
CN110070478B (en) | Deformation image generation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Country or region after: China; Address after: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing; Applicant after: Tiktok Technology Co.,Ltd.; Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing; Applicant before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd.; Country or region before: China |
Country or region after: China Address after: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing Applicant after: Tiktok Technology Co.,Ltd. Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing Applicant before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd. Country or region before: China |