Image switching display method and device, electronic equipment and storage medium (CN109947338B)

Info

Publication number: CN109947338B
Application number: CN201910224190.8A
Authority: CN (China)
Other versions: CN109947338A (Chinese)
Inventors: 钱梦仁, 沈珂轶, 徐冬成
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Legal status: Active (granted)
Classification: Processing Or Creating Images

Abstract

The invention discloses an image switching display method and device, an electronic device, and a storage medium, belonging to the technical field of image processing. The method comprises the following steps: acquiring a first image and a second image as a switching target; acquiring a first object position in the first image and a second object position in the second image, wherein the first object position represents the display position of an object in the first image; generating at least one target image according to the first object position and the second object position, wherein the at least one target image shows the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image; and displaying the at least one target image during the switch from the first image to the second image. This visually creates the effect of one object moving toward, and gradually morphing into, the other object, improving the transition effect during the image switching display.

Description

Image switching display method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image switching display method and apparatus, an electronic device, and a storage medium.
Background
A video usually shows a plurality of scenes, and a video transition refers to switching from one scene to another during video playback. A video file includes a plurality of images, each scene corresponds to one or more images, and a video transition switches the display among the images corresponding to different scenes. Therefore, in the art, the images are commonly processed to achieve a better visual effect when switching between them.
In the related art, the image switching display process may include: the terminal obtains a switching mode selected by the user, such as a fade-in/fade-out mode, and applies transparency processing to both the first image corresponding to the current scene and the second image corresponding to the next scene. For example, the transparency of both images is adjusted to 50%, and during the switch the two 50%-transparent images are displayed superimposed, visually achieving a transition in which the current scene gradually disappears and the next scene gradually appears.
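For illustration, the following is a minimal sketch of such a fade transition, assuming both frames are same-sized 8-bit RGB images held as NumPy arrays (the library choice is an assumption; the related art does not prescribe one):

```python
import numpy as np

def crossfade(first: np.ndarray, second: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Superimpose two frames; alpha = 0.5 corresponds to the 50% transparency above."""
    blended = first.astype(np.float32) * (1.0 - alpha) + second.astype(np.float32) * alpha
    return blended.astype(np.uint8)
```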
The above process realizes the transition effect by superimposing the two transparency-processed images. However, the different scenes of the two images are displayed on top of each other during the superposition, for example a human face and a house appear simultaneously at the same position, so the visual effect is poor and the transition effect of the image switching display is poor.
Disclosure of Invention
The embodiment of the invention provides an image switching display method and device, electronic equipment and a storage medium, which can solve the problem of poor transition effect in the image switching display process. The technical scheme is as follows:
in one aspect, an image switching display method is provided, and the method includes:
acquiring a first image and a second image as a switching target;
acquiring a first object position in the first image and a second object position in the second image, wherein the first object position is used for representing the display position of an object in the first image, and the second object position is used for representing the display position of the object in the second image;
generating at least one target image according to the first object position and the second object position, wherein the at least one target image is used for showing the effect that the object in the first image moves from the first object position to the second object position and changes gradually into the object in the second image;
displaying the at least one target image during a switch from the first image to the second image.
In another aspect, there is provided an image switching display device, the device including:
an acquisition module configured to acquire a first image and a second image as a switching target;
the acquiring module is further configured to acquire a first object position in the first image and a second object position in the second image, where the first object position is used to represent a display position of an object in the first image, and the second object position is used to represent a display position of an object in the second image;
a generating module, configured to generate at least one target image according to the first object position and the second object position, where the at least one target image is used to show an effect that an object in the first image moves from the first object position to the second object position and changes gradually to an object in the second image;
a display module for displaying the at least one target image during a process of switching from the first image to the second image.
In another aspect, an electronic device is provided and includes one or more processors and one or more memories, where at least one instruction is stored in the one or more memories and loaded and executed by the one or more processors to implement the operations performed by the image switching display method as described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the image switching display method as described above.
The technical scheme provided by the embodiment of the invention has at least the following beneficial effects:
By generating at least one target image according to the first object position and the second object position, and by displaying the at least one target image during the switch from the first image to the second image, the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image is shown. This visually forms the effect of one object moving toward, and gradually morphing into, another object, improving the transition effect during the image switching display.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment of an image switching display method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an image switching display method according to an embodiment of the present invention;
Fig. 3 is a key point diagram of a head region according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a second weight increasing process according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of deformation units included in a head region according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of lines for generating a target image according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an actual display interface for generating a target image according to an embodiment of the present invention;
Fig. 8 is a schematic diagram illustrating the principle of determining a target position based on key points according to an embodiment of the present invention;
Fig. 9 is a diagram illustrating first deformation units and second deformation units included in a first image according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of a target head region according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of an actual display interface for generating a target image according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a process for generating a target image according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of actual display interfaces for generating a target image in each of two ways according to an embodiment of the present invention;
Fig. 14 is a block diagram of an image switching display apparatus according to an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
Fig. 16 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment of an image switching display method according to an embodiment of the present invention, and referring to fig. 1, the implementation environment includes: a terminal 101. The terminal 101 has a target application installed thereon, and the terminal 101 can process an image based on the target application.
The terminal 101 may acquire a plurality of images and switch the display among them, wherein a first image of the plurality includes a first object region and a second image includes a second object region. The terminal 101 may also display a target image during the switch from the first image to the second image. The terminal 101 may generate the target image based on a first object position of the first object region of the first image and a second object position of the second object region of the second image, where the target image shows the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image. Thus, by displaying the target image while switching between the first image and the second image, the terminal 101 achieves a transition in which one object visually moves toward another object and gradually morphs into it, improving the visual effect of the image switching display. The first image and the second image may be images from a GIF (Graphics Interchange Format) animated picture file or a video file.
The first object region and the second object region each include a head region and a body region, where the body region refers to the body parts of the subject other than the head. The body may include a plurality of body parts of the subject, such as the neck, arms, torso, legs, feet, and the like.
In one possible implementation environment, the implementation environment includes: a server 102, the server 102 being a background server of the target application, the server 102 being capable of processing the image. That is, the above-described process of switching and displaying images, which is executed by the terminal 101, may be executed by the server 102.
In another possible implementation environment, the implementation environment may include: a terminal 101 and a server 102. The terminal 101 and the server 102 establish a communication connection, and the terminal 101 can perform data interaction with the server 102 based on the target application. The terminal 101 may transmit the first image and the second image to the server 102, the server 102 generates a target image based on the first image and the second image, and transmits the target image to the terminal 101, and the terminal 101 displays the target image in a process of switching from the first image to the second image. Of course, the server 102 may display the target image when the first image and the second image are switched.
Therefore, the above-mentioned image switching display process may be implemented by the terminal 101, the server 102, or both the terminal 101 and the server 102, which is not specifically limited in this embodiment of the present invention.
It should be noted that the target application may be a stand-alone application, or a plug-in installed in a stand-alone application, etc. The terminal 101 may be any device on which the target application is installed, such as a mobile phone terminal, a PAD (Portable Android Device) terminal, or a computer terminal. The server 102 may be a server cluster or a single device. The embodiment of the present invention is not particularly limited in this regard.
Fig. 2 is a flowchart of an image switching display method according to an embodiment of the present invention. The method may be executed by a terminal or a server, or realized by the terminal and the server interacting; the embodiment of the invention is explained by taking the terminal as an example. Referring to fig. 2, the method comprises the following steps:
201. The terminal acquires a first image and a second image as a switching target.
In the embodiment of the invention, the second image is an image to be switched and displayed when the first image is played at the terminal. The terminal may acquire a first image and a second image input by a user.
The terminal can start a target application and realize the switching display of images based on the target application. In one possible implementation scenario, the terminal may produce a video file or a GIF animated picture file for the user, and performs the image switching display process of the embodiment of the present invention during file production. This step may then be: when a file generation instruction is received, the terminal acquires a plurality of images of the user, and acquires a first image and a second image from the plurality of images. The file generation instruction may be triggered by the user on an application interface of the target application. For example, when the terminal detects that a file generation button in the application interface is triggered, the terminal receives a file generation instruction.
Wherein the first image and the second image may show different scenes. The terminal can also identify scenes of the multiple images according to the display sequence of the multiple images, and identify a first image and a second image which are adjacent in display sequence and different in scenes. The first and second images may be two-dimensional images or three-dimensional images, etc. The embodiment of the present invention is not particularly limited to this.
In another possible implementation scenario, the terminal may further obtain an existing file, and obtain the first image and the second image from the file, where the process may be: when the terminal receives the file, the terminal identifies scenes of the multiple images in the file according to the display sequence of the multiple images in the file, and identifies a first image and a second image which are adjacent in display sequence and different in scenes.
202. The terminal performs image recognition on the first image and the second image, and performs step 203 when a first object region in the first image and a second object region in the second image are recognized.
The terminal may call a target algorithm to perform image recognition on the first image and the second image respectively, determining whether the first image includes the first object region and whether the second image includes the second object region; the subsequent step 203 is executed only when both the first object region of the first image and the second object region of the second image are detected. The first object region and the second object region include a head region and a body region. When the first object area of the first image or the second object area of the second image is not detected (that is, the first image includes the first object area but the second image does not, the second image includes the second object area but the first image does not, or neither image includes an object area), the terminal does not execute the subsequent step 203, and the process ends.
When the first object area of the first image or the second object area of the second image is not detected, the terminal may also repeat the detection on the image in which the object area was not detected. When the first object area of the first image and the second object area of the second image are both detected, step 203 is performed; when the number of repetitions reaches a preset number and the first object area of the first image or the second object area of the second image is still not detected, the process ends.
The target algorithm may be set based on needs, and is not specifically limited in the embodiment of the present invention, for example, the target algorithm may be a Face detection library provided by a terminal system, or the target algorithm may also be a Face alignment detection algorithm, a DSFD (Dual Shot Face Detector) algorithm, or the like.
203. The terminal acquires a first object position in the first image and a second object position in the second image.
In this step, the first object position indicates the display position of the object in the first image, and the second object position indicates the display position of the object in the second image. The object may include a head and a body. In one possible embodiment, the terminal may use the position of the head, to which the body is connected, to locate the display positions of the head region and the body region; the first object position may then include the position of the head region in the first object region, and correspondingly the second object position may include the position of the head region in the second object region. In another possible embodiment, the first object position may include both the position of the head region and the position of the body region in the first object region. Accordingly, this step includes the following two implementations.
In the first mode, the first object position comprises the position of the head in the first object area; the terminal identifies the head areas of the first image and the second image, and acquires the first head position in the first image and the second head position in the second image.
In embodiments of the present invention, the position of the head region may include the positions of the five sense organs. The terminal respectively acquires a first five-sense-organ position of the first object area and a second five-sense-organ position of the second object area. In one possible implementation, the terminal may identify the head regions of the first object region and the second object region respectively through a target detection algorithm, and obtain the first five-sense-organ position of the first head region and the second five-sense-organ position of the second head region. The first or second five-sense-organ position may include the position of an eye, nose, eyebrow, mouth, head contour, or ear. In another possible embodiment, the terminal may instead obtain a first position of a target facial feature in the first head region and a second position of the target facial feature in the second head region. The target facial feature may include one or more of the eyes, nose, eyebrows, mouth, head contour, or ears.
In a possible implementation manner, the first object position and the second object position may be represented by the positions of pixel points, where the pixel points may be the five-sense-organ key points of the head region. The five-sense-organ key points indicate the positions of the five sense organs in the head region, and may also indicate display features such as the shapes and sizes of the five sense organs. The terminal may extract first key points of the first head region of the first image and second key points of the second head region of the second image, take the positions of the first key points as the first head position, and take the positions of the second key points as the second head position. Each of the five sense organs can correspond to a plurality of key points, and the positions of these key points can also indicate the features of the corresponding organ; such features include, but are not limited to, shape, size, and position, for example, the size of the eyes or the curved shape and length of the eyebrows.
In one possible implementation, the terminal can acquire the positions by calling a target detection algorithm, such as a human face detection algorithm. In another possible implementation, the terminal can encapsulate the execution logic of the invoked detection algorithm in a target interface; each time this step is executed, the terminal inputs the first image and the second image into the target interface, executes the encapsulated target detection algorithm, and outputs the first object position of the first object area and the second object position of the second object area.
As shown in fig. 3, the terminal identifies the head region and obtains a plurality of key points for each five-sense-organ region; for example, the eyebrow region in the head region may correspond to eight key points. The positions of the eight key points represent the display position of the eyebrow region and features such as its shape and size.
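As a hedged illustration of this key point extraction, the sketch below uses the dlib library as a stand-in for the unspecified target detection algorithm; note that dlib's common predictor returns 68 landmarks rather than the 87-point layout of fig. 3, and the model file path is an assumption:

```python
import dlib

detector = dlib.get_frontal_face_detector()
# The predictor model file is an assumption; any compatible landmark model works.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_keypoints(image):
    """Return (x, y) five-sense-organ key points for the first detected head region."""
    faces = detector(image)
    if not faces:
        return []  # no object region detected; the process above would end here
    shape = predictor(image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```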
Further, the terminal may obtain the pixel values of the first key points and the pixel values of the second key points, respectively.
In the second mode, the first object position comprises the positions of the head and the body in the first object area; the terminal identifies the head areas and body areas of the first image and the second image, and obtains the first head position and first body position in the first image, and the second head position and second body position in the second image.
In an embodiment of the invention, the terminal may use the positions of the head and the body to represent the display position of the first object region in the first image, and the body region may include a plurality of body parts shown in the image, such as the arms, neck, trunk, legs, and feet. The terminal can identify the five sense organs and the body through a target detection algorithm. In one possible embodiment, the terminal may obtain a first body position of a target body part in the first body region and a first position of a target facial feature in the first head region, as well as a second body position of the target body part in the second body region and a second position of the target facial feature in the second head region. The target body part may be one or more of the body parts shown in the image. The process of acquiring the first and second five-sense-organ positions is the same as in the first mode and is not repeated here.
In a possible implementation manner, the terminal may represent the position of the first object region by the positions of pixel points. The body includes a plurality of bones connected by bone joint points to form the skeleton of the body, and the terminal may represent the positions of the body parts in the body region by these bone joint points. This step may then be: the terminal identifies the head regions and body regions of the first image and the second image, and extracts the first five-sense-organ key points in the first head region and the first bone joint points in the first body region of the first image, as well as the second five-sense-organ key points in the second head region and the second bone joint points in the second body region of the second image; the terminal takes the positions of the first key points and first bone joint points as the first object position, and the positions of the second key points and second bone joint points as the second object position. A bone joint point is an end point of a bone or a connection point between two adjacent bones. The bones are those forming the body of a human or an animal; for example, the skeleton is composed of the neck bones, trunk bones, limb bones and the like, where the limb bones can comprise the arm bones and leg bones, and the trunk bones the abdomen bones and the like.
It should be noted that the key points of the five sense organs in the above process refer to points located in the region of the five sense organs on the face of the subject, and are used to describe the features of the five sense organs on the face, such as the positions, sizes, shapes, etc. of the five sense organs, such as the eyes, eyebrows, mouth, etc. Skeletal joint points refer to points located in a body region of a subject that are used to characterize various body parts in the body region. For example, the position, size, shape, etc. of body parts such as arms and legs.
204. The terminal determines at least one first target position based on the first object position and the second object position.
Wherein the at least one first target position indicates the positions in the at least one target image for displaying the target head region and the target body region. In this step, the terminal may take at least one position located between the first object position and the second object position as the at least one first target position. In a possible implementation manner, the number of first target positions may be a target number, each first target position corresponds to one target image, and the target number of target images is inserted between the first image and the second image for the switching display. Step 204 may then include: the terminal determines the target number of first target positions according to the target number, the first object position, and the second object position, each target image corresponding to one first target position. The earlier a target image is in the display order, the closer its first target position is to the first object position; the later a target image is in the display order, the closer its first target position is to the second object position.
In a possible implementation, the terminal may further determine the target number of first target positions based on a first weight of the first image and a second weight of the second image, and the process may include: the terminal obtains the target number of first weights and the target number of second weights, and determines the target number of first target positions between the first object position and the second object position according to the target number, the first object position, the second object position, and the first and second weights, each target image corresponding to one first weight and one second weight. The first weight is the weight of the first image, and the second weight is the weight of the second image; as the display order of the target images advances, the first weight of each successive target image gradually decreases and the second weight gradually increases.
In a possible embodiment, the target number of first target positions can realize a uniform transition from the first object region to the second object region. In that case, as the display order of the target images advances, the first weight corresponding to each target image decreases at a target speed and the second weight increases at the target speed. For example, the terminal may determine a plurality of evenly spaced positions between the first object position and the second object position as the plurality of first target positions according to the number of target images. These positions divide the distance from the first object position to the second object position into a plurality of equal segments, each segment bounded by two adjacent positions, and indicate a uniform transition from the first object region to the second object region.
The process of the terminal acquiring the first weights and the second weights may be: the terminal controls the first weight corresponding to each target image to decrease uniformly and the second weight to increase uniformly according to the target number and the following Formula I:

Formula I: P(i) = 1 - i/(N+1); Q(i) = i/(N+1)

where P(i) denotes the first weight, Q(i) denotes the second weight, N denotes the target number of target images between the first image and the second image, 0 < i ≤ N, i denotes the display order of the target image among the at least one target image, and i and N are positive integers.

When the first weight decreases at a constant target speed and the second weight increases at a constant target speed, as shown in Formula I above, the target speed may be -1/(N+1); that is, the slope of P(i) = 1 - i/(N+1) with respect to i is the target speed.
In another possible embodiment, the target number of first target positions may realize a non-uniform transition from the first object area to the second object area; for example, the first weight may decrease quickly at first and the second weight increase quickly at first. As shown in fig. 4, the two graphs in fig. 4 both show the increasing process of the second weight. The abscissa in the left and right graphs represents time stamps indicating the display order of the target images; for example, the time stamps of 3 target images may be 0.25, 0.50, and 0.75, with seconds as the time unit. The ordinate represents the magnitude of the second weight, which gradually increases as the time stamp of the target image increases, that is, as the display order advances. The left graph shows the second weight increasing at a constant speed, with the first weight correspondingly decreasing at a constant speed; the right graph shows the second weight increasing at a decreasing speed, with the first weight correspondingly decreasing at a decreasing speed.
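The two weight schedules can be sketched as follows; Formula I gives the uniform schedule, while the eased variant is only an example of a non-uniform curve (the text does not fix a particular one):

```python
def uniform_weights(i: int, n: int) -> tuple[float, float]:
    """Formula I: (P(i), Q(i)) for the i-th of n target images, 0 < i <= n."""
    q = i / (n + 1)
    return 1.0 - q, q

def eased_weights(i: int, n: int) -> tuple[float, float]:
    """Example non-uniform schedule: the second weight rises quickly, then slows
    (matching the right-hand graph of fig. 4); square-root easing is an assumption."""
    q = (i / (n + 1)) ** 0.5
    return 1.0 - q, q
```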
It should be noted that the first target position includes a target head position and a target body position; the target head position indicates the display position of the target head region in the target image, and the target body position indicates the display position of the target body region in the target image. The first object position may include the position of the head region only, or the positions of both the head region and the body region. Accordingly, the terminal may determine the first target position based on the position of the head region alone, or based on the positions of the head region and the body region respectively, and this step can be implemented in the following two ways.
In a first mode, the terminal determines a target head position in at least one first target position according to the first head position and the second head position.
The target head position refers to the position in the target image for displaying the target head region. In the embodiment of the present invention, the display position of the target body region in the target image changes along with the display position of the target head region. The first head position includes the positions of a plurality of first key points and the second head position includes the positions of a plurality of second key points; the first and second key points may be five-sense-organ key points of the head region. In one possible embodiment, each first key point and each second key point corresponds to a key point label; for example, 87 first key points are labeled 1 to 87, and the 87 second key points are likewise labeled 1 to 87. The terminal may make position determinations based on the key point labels: the terminal determines the positions of the key points with the same key point label in the first object region and the second object region; then, for each target image, the terminal determines the first target point position of that key point label in the target image according to the positions of the key points with the same label, the display order of the target image among the at least one target image, the first weight of the first image, and the second weight of the second image. The key points may include five-sense-organ key points only, or five-sense-organ key points and bone joint points.
In the first mode, the position of the first target point is used to indicate the display positions of the first key point with the same key point label in the first image and the second key point with the same key point label in the second image in the target image. The first weight is used to represent the weight of the first image relative to the target image. The second weight is used to represent the weight of the second image relative to the target image.
In one possible implementation, the terminal determines the position of the key point labeled k in the target image according to the positions of the key points with the same label, the display order of the target image among the at least one target image, the first weight, and the second weight, by the following Formula II:

Formula II: M[i][k] = S[k]×P(i) + E[k]×Q(i)

where M[i][k] denotes the position of the key point labeled k in the i-th target image, i denotes the display order of the current target image among the at least one target image, k denotes the key point label, S[k] denotes the position of the first key point labeled k in the first image, E[k] denotes the position of the second key point labeled k in the second image, P(i) is the first weight of the first image, and Q(i) is the second weight of the second image, i.e., 1.0 - P(i).
It should be noted that, as the display order advances through the at least one target image, the first weight of the first image gradually decreases and the second weight of the second image gradually increases. That is, the earlier the display order of a target image, the larger the first weight, the smaller the second weight, and, for key points sharing a label, the closer the key point position in the target image is to the first key point position in the first image. The later the display order, the closer the key point position in the target image is to the second key point position in the second image. Thus, as the display order advances, the position of each labeled key point gradually shifts from its first key point position to its second key point position, visually forming a feature that gradually moves from the first object area to the second object area while gradually changing from the five-sense-organ features of the first object area to those of the second object area.
When the first weight and the second weight change uniformly, in one possible implementation the terminal may determine the position of the key point labeled k in the target image according to the positions of the key points with the same label, the display order of the target image among the at least one target image, and the target number, by the following Formula III:

Formula III: M[i][k] = (S[k]×(N+1-i) + E[k]×i)/(N+1)

When the number of first target positions is the target number, the terminal determines, according to Formula III, the target number of uniformly spaced first target point positions between the first key point position and the second key point position.
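A minimal sketch of Formulas II and III, assuming the key points of both images are stored as lists of (x, y) pairs indexed by key point label:

```python
def interpolate_keypoints(s, e, p, q):
    """Formula II: per-label weighted sum of first-image (s) and second-image (e)
    key point positions, with first weight p and second weight q = 1 - p."""
    return [(sx * p + ex * q, sy * p + ey * q) for (sx, sy), (ex, ey) in zip(s, e)]

def uniform_target_points(s, e, i, n):
    """Formula III: the uniform special case for the i-th of n target images."""
    return interpolate_keypoints(s, e, (n + 1 - i) / (n + 1), i / (n + 1))
```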
In the second mode, the terminal determines a target head position in at least one first target position according to the first head position and the second head position, and determines a target body position in at least one first target position according to the first body position and the second body position.
The target head position refers to a position in the target image for displaying the target head region, and the target body position refers to a position in the target image for displaying the target body region. The terminal may determine a display position of the target head region and a display position of the target body region in the target image, respectively. In embodiments of the present invention, the first head position and the first body position comprise positions of a plurality of first keypoints, the second head position and the second body position comprise positions of a plurality of second keypoints, and the first keypoints and the second keypoints may comprise five sense organ keypoints and skeletal joint points. The terminal determines the positions of key points with the same key point labels in the first object area and the second object area; for each target image, the terminal determines the position of a first target point of the key point label in the target image according to the position of the key point with the same key point label, the display sequence of the target image in the at least one target image, the first weight of the first image and the second weight of the second image.
It should be noted that, in the second manner, the position of the first target point is used to indicate the display positions of the first five sense organ keypoints with the same keypoint label in the first image and the second five sense organ keypoints with the same keypoint label in the second image in the target image, and the display positions of the first bone joint points with the same keypoint label in the first image and the second bone joint points with the same keypoint label in the second image in the target image.
It should be noted that, in the second manner, the process of determining the position of the first target point based on the first key point and the second key point is the same as the first manner, and is not described herein again.
205. The terminal generates a target head region and a target body region of the at least one target image based on the at least one first target position, the first object region, and the second object region.
The head regions and body regions of the first image and the second image also include non-key points: the first non-key points are the points in the first object region other than the first key points, and the second non-key points are the points in the second object region other than the second key points. The first object region includes a first head region and a first body region, and the second object region includes a second head region and a second body region. The terminal may generate the target head region and the target body region in the at least one target image according to the first target position, the plurality of first non-key points, and the plurality of second non-key points.
Accordingly, for any one target image, the terminal may generate a target head region and a target body region of the any one target image through the following steps 2051-2053.
2051. The terminal determines a first display position of the first non-key point in the target image according to the first non-key point and the position of the first target point.
In this step, the terminal may determine a position relationship between the first key point and the first non-key point, and determine the first display position according to the position relationship and the position of the first target point.
It should be noted that the positional relationship may be the distance between the first key point and the first non-key point; alternatively, the terminal may divide the first object region into a plurality of deformation units based on the first key points, in which case the positional relationship is that between these deformation units and the first non-key point. Accordingly, step 2051 may include the following two implementations.
In a first implementation manner, the terminal determines the weight of the first non-key point according to the distance between the first key point and the first non-key point, and determines the first display position of the first non-key point according to the weight of the first non-key point and the position of the first target point.
The first display position is used for indicating the display position of the first non-key point in the target image. The terminal acquires the distance between the first non-key point and the first key point, and determines the weight of the first non-key point according to the distance, wherein the weight is used for indicating the distance from the position of the first non-key point to the position of the first key point. And the terminal determines a first display position of the first non-key point according to the position change characteristic from the first key point to the first target point position and the weight of the first non-key point. The variation characteristics of the first keypoint to the first target point position include, but are not limited to: the distance, direction, etc. from the location of the first keypoint to the location of the first target point.
In a possible implementation manner, the terminal may take the product of the weight and the displacement from the first key point position to the first target point position, where the displacement has both a distance and a direction, and determine the first display position of the first non-key point by offsetting the first non-key point by that product.
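A sketch of this first implementation; the inverse-distance decay used for the weight is an assumption, since the text only requires that the weight reflect the distance from the first non-key point to the first key point:

```python
import math

def displace_non_keypoint(non_kp, keypoint, target):
    """Offset a first non-key point by a weighted copy of its key point's displacement
    from the first key point position to the first target point position."""
    weight = 1.0 / (1.0 + math.dist(non_kp, keypoint))  # nearer points follow more strongly
    dx, dy = target[0] - keypoint[0], target[1] - keypoint[1]
    return non_kp[0] + weight * dx, non_kp[1] + weight * dy
```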
In a second implementation manner, the terminal determines a plurality of deformation units in the first object region by taking the first key point and a first boundary point on the boundary of the first object region as vertexes, and determines a first display position of a first non-key point included in each deformation unit according to the position of the first target point and the position of the first boundary point.
The terminal uses the first key points and the first boundary points as vertices to divide the first object area into a plurality of first deformation units (referred to as first deformation units to distinguish them from the second deformation units of the background area behind the first object area), determines the first key points included in each first deformation unit, and, taking each first deformation unit as a unit, determines the first display position of each first non-key point according to the position of the first non-key point, the first key points included in the deformation unit, and the positions of the first target points corresponding to those first key points. A first deformation unit may be a triangular unit. In one possible implementation, the terminal may determine the first display position of the first non-key point based on a triangular affine linear transformation according to the change between the first key points and the first target points.
In one possible embodiment, if the first keypoints include the key points of five sense organs, the terminal divides the first head region into a plurality of first deformation units; in another possible embodiment, if the first keypoints include five sense organ keypoints and skeletal joint points, the terminal divides both the first head region and the first body region into a plurality of first morphable elements.
As shown in fig. 5, taking the example of dividing the head region into a plurality of first deformation units, the first object region includes a plurality of first key points, the terminal may divide the face image into a plurality of triangle units, and determine the display position of the first non-key point in each triangle unit according to the positions of three vertices in each triangle unit and the positions of the first target points corresponding to the three vertices.
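The triangular affine transformation can be sketched with OpenCV as below; the library and the in-place masking strategy are assumptions, not a prescribed implementation:

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Map the pixels of one first deformation unit (src_tri: three first key points /
    boundary points) onto its target vertices (dst_tri: first target point positions)."""
    # The affine transform is fully determined by the three vertex correspondences.
    m = cv2.getAffineTransform(np.float32(src_tri), np.float32(dst_tri))
    h, w = dst_img.shape[:2]
    warped = cv2.warpAffine(src_img, m, (w, h))
    # Keep only the interior of the destination triangle.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)
    dst_img[mask > 0] = warped[mask > 0]
```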
2052. The terminal determines a second display position of the second non-key point in the target image according to the second non-key point and the position of the first target point.
The terminal can acquire the positional relationship between the second key point and the second non-key point, and determine the second display position according to that positional relationship and the position of the first target point. The implementation of this step is the same as step 2051 and is not repeated here.
2053. The terminal assigns values to the first target position, the first display position, and the second display position according to the pixel values of each point in the first object area and the second object area, the first weight of the first image, and the second weight of the second image, generating the target head area and target body area of the target image.
In this step, the terminal may assign a value to the position of the first target point according to the pixel values of the key points in the first object region and the second object region, the first weight of the first image, and the second weight of the second image; and the terminal assigns the position of the second target point according to the pixel values of the non-key points in the first object area and the second object area, the first weight of the first image and the second weight of the second image to obtain the target image. Wherein the second target point position is used to indicate a position in the target image for displaying the first non-keypoints and the second non-keypoints.
It should be noted that there may be a plurality of first target positions, each corresponding to one target image, and the terminal determines the pixel values of the area at each first target position according to the display order of the target images, the pixel values of the first key points, and the pixel values of the second key points. The earlier the target image of a first target position is in the display order, the closer the pixel values of the key points in that area are to the pixel values of the first key points; the later it is in the display order, the closer those pixel values are to the pixel values of the second key points.
For the assignment of the first target point positions, the terminal may determine a first pixel value according to the pixel value of the first key point, the pixel value of the second key point, the first weight of the first image, and the second weight of the second image by the following Formula IV, and assign the first pixel value as the pixel value of the first target point position:

Formula IV: M1[i][k] = S1[k]×P(i) + E1[k]×Q(i)

where M1[i][k] denotes the pixel value of the key point labeled k in the i-th target image, i denotes the display order of the current target image among the at least one target image, k denotes the key point label, S1[k] denotes the pixel value of the first key point labeled k in the first image, E1[k] denotes the pixel value of the second key point labeled k in the second image, P(i) is the first weight of the first image, and Q(i) is the second weight, i.e., 1.0 - P(i).
In a possible embodiment, the first weight and the second weight may change uniformly, that is, there is a uniform transition from the first object region to the second object region. The terminal may then determine the first pixel value according to the pixel value of the first key point, the pixel value of the second key point, the first weight of the first image, and the second weight of the second image by the following Formula V, and assign the first pixel value as the pixel value of the first target point position:

Formula V: M1[i][k] = (S1[k]×(N+1-i) + E1[k]×i)/(N+1)

where N denotes the target number of target images between the first image and the second image, 0 < i ≤ N, and i and N are positive integers.
For the assignment of the second target point positions, the terminal assigns values to each display position of the target image according to whether the first display position of a first non-key point and the second display position of a second non-key point coincide. The process may be: when non-key points from the two images do not coincide at a display position of the target image, the terminal assigns the pixel value of the first non-key point or the second non-key point corresponding to that display position directly as the pixel value of the display position. When non-key points from the two images coincide at a display position of the target image, the terminal assigns a value to the display position according to the first weight of the first image, the second weight of the second image, and the pixel values of the first non-key point and the second non-key point corresponding to the display position: it obtains a first product of the pixel value of the first non-key point and the first weight, obtains a second product of the pixel value of the second non-key point and the second weight, and assigns the sum of the first product and the second product as the pixel value of the display position.
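A sketch of this assignment rule; pixel values are assumed to be NumPy-style arrays or scalars, with None marking a display position that one image does not cover:

```python
def assign_display_position(v1, v2, p, q):
    """v1/v2: pixel values contributed by the first/second image at one display
    position (None if absent); p/q: the first and second weights."""
    if v1 is not None and v2 is not None:
        return v1 * p + v2 * q  # coinciding points: weighted sum
    return v1 if v1 is not None else v2  # single contributor: copy directly
```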
It should be noted that when the terminal generates the target image, some pixels in the target image may have no corresponding pixel value, that is, the target image may contain blank regions. The terminal may adopt a reverse mapping manner: for each blank pixel, the terminal assigns the pixel value of the point at the corresponding position in the first image as the pixel value of the blank pixel. Alternatively, the terminal may, for each blank pixel in a blank area of the target image, find the first non-key point closest to the blank pixel and assign that non-key point's pixel value to the blank pixel.
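A sketch of the reverse-mapping fill; the inverse_map helper (from a target-image position back to a first-image position) is hypothetical and would come from inverting the deformation computed above:

```python
def fill_blank_pixel(target_img, x, y, first_img, inverse_map):
    """Assign a blank target pixel the first-image pixel at its mapped-back position."""
    sx, sy = inverse_map(x, y)  # hypothetical inverse of the forward deformation
    target_img[y, x] = first_img[int(sy), int(sx)]
```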
As shown in fig. 6, image a in fig. 6 is the first image, image b is an intermediate image displaying the first image according to the first target point positions, image c is the second image, and image d is an intermediate image displaying the second image according to the first target point positions. In images b and d, the display positions of the first object area and the second object area in the target image are both the first target position, and the display positions of the first key points of both object areas are the first target point positions. As can be seen, image a shows a head tilted to the right and image c shows an upright head; after display based on the first target position, the head regions of the two users visually merge at the first target position, and when the target images are subsequently generated, the body region changes along with the tilt direction of the head region. Fig. 7 is a schematic view of the actual interface corresponding to fig. 6 and shows more clearly how the images change before and after display based on the first target position. It should be noted that the first mode is mainly a process of determining the first target position through the key points in the image; fig. 8 illustrates the position determination of the first mode in step 2051, where the display position is determined based on the positions of the key points in the human-shaped image, changing from left to right in fig. 8.
In the embodiment of the present invention, the first image and the second image may further include background regions; the background region of the target image may be generated in step 206, or the terminal may obtain the target image directly from step 205.
206. The terminal generates a background region of the at least one target image based on the background regions in the first and second images.
Besides the object regions, the first image and the second image may further include background regions. The first background region is the region of the first image other than the first object region and includes a plurality of first background points; the second background region is the region of the second image other than the second object region and includes a plurality of second background points. The terminal may generate the background region of the at least one target image according to the first target point positions, the positions of the first background points, and the positions of the second background points.
Accordingly, for any one of the target images, the terminal may generate the background region of that target image through the following steps 2061 to 2063.
2061. The terminal determines a third display position of the first background region in the target image according to the first target position of the target image and the position of the first background region in the first image.
Similarly to step 2051, the terminal may determine the positional relationship between the first key point and a first background point in the first image, and determine the third display position according to that relationship and the first target point position, where the positional relationship may be the distance between the first key point and the first background point. Alternatively, the terminal may divide the first background region into a plurality of sub-regions according to the first object region and the boundary points of the first image, and determine the third display position based on these sub-regions.
Accordingly, step 2061 may include the following two implementations.
In a first implementation manner, the terminal determines the weight of the first background point according to the distance between the first key point and the first background point, and determines the third display position of the first background point in the target image according to the weight of the first background point and the position of the first target point.
Wherein, the first background point is a pixel point in the first background area; the third display position is used for indicating the display position of the first background point in the target image. The terminal obtains the distance between the first background point and the first key point, and determines the weight of the first background point according to the distance. And the terminal determines a third display position of the first background point according to the position change characteristic from the first key point to the first target point position and the weight of the first background point. The variation characteristics of the first keypoint to the first target point position include, but are not limited to: the distance, direction, etc. from the location of the first keypoint to the location of the first target point.
In a possible implementation manner, the terminal may obtain the product of the weight and the distance from the first key point position to the first target point position, and then determine the third display position of the first background point by offsetting the first background point by that product along the direction from the first key point position to the first target point position.
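The following sketch gives one plausible reading of this first implementation manner, generalized to several key points with an inverse-distance kernel; the kernel choice is an assumption, since the patent states only that the weight is derived from the distance between the key point and the background point.

import numpy as np

def third_display_position(bg_pt, keypoints, target_pts, eps=1e-6):
    """Move a first background point by a distance-weighted average of the
    key points' own displacements (key point -> first target point), so
    that points far from every key point barely move."""
    bg_pt = np.asarray(bg_pt, float)
    kps = np.asarray(keypoints, float)          # positions of first key points
    disp = np.asarray(target_pts, float) - kps  # displacement of each key point
    w = 1.0 / (np.linalg.norm(kps - bg_pt, axis=1) + eps)  # closer -> heavier
    w /= w.sum()
    return bg_pt + (w[:, None] * disp).sum(axis=0)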
In a second implementation manner, the terminal determines a plurality of second deformation units in the first background region by taking the first boundary point and the second boundary point as vertexes, and determines a third display position of the first background point in the target image according to the position of the first background point, the position of the first boundary point and the position of the second boundary point included in each second deformation unit.
The first boundary point is a pixel point on the boundary of the first object region, and the second boundary point is a pixel point on the boundary of the first image. The second boundary points may include the image vertices and the midpoint of each image boundary of the first image. The first boundary points may include the region vertices on the boundary of the first object region and, of course, may also include the midpoint of each region boundary, which is not specifically limited in the embodiment of the present invention.
The terminal divides the first background region into a plurality of second deformation units with the first boundary points and the second boundary points as vertices, and determines, unit by unit, the third display position of each first background point according to its position in the first image, the positions of the first and second boundary points included in its second deformation unit, and the corresponding display positions of those boundary points in the target image. When a second boundary point is an image vertex or a boundary midpoint of the first image, its display position in the target image is the corresponding image vertex or boundary midpoint of the target image.
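A minimal sketch of the per-unit linear (barycentric) mapping implied by such triangular deformation units, assuming the correspondence between each unit's vertices in the first image and their display positions in the target image is already known.

import numpy as np

def warp_point(pt, tri_src, tri_dst):
    """Keep the point's barycentric coordinates with respect to its
    deformation unit: solve pt = a + u*(b - a) + v*(c - a) in the source
    triangle, then re-apply (u, v) to the destination triangle."""
    a, b, c = (np.asarray(p, float) for p in tri_src)
    u, v = np.linalg.solve(np.column_stack((b - a, c - a)),
                           np.asarray(pt, float) - a)
    a2, b2, c2 = (np.asarray(p, float) for p in tri_dst)
    return a2 + u * (b2 - a2) + v * (c2 - a2)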
2062. The terminal determines a fourth display position of the second background region in the target image according to the first target position of the target image and the position of the second background region in the second image.
The implementation manner of this step is the same as that of step 2061, and is not described here again.
2063. The terminal assigns values to the third display position and the fourth display position according to the pixel values of each point in the first background region and the second background region, the first weight of the first image, and the second weight of the second image, and generates the target background region of the target image.
In this step, the third display position includes the display positions corresponding to the plurality of first background points in the target image. For the assignment process, the terminal may assign values to the third display position and the fourth display position of the target image according to whether the display positions of the first background points and of the second background points in the target image coincide. The process is the same as the assignment of the second target point positions in step 2053, and is not described here again.
As shown in fig. 9, based on the plurality of triangular units, the terminal may determine the display positions of the background points inside each triangular unit, that is, the third and fourth target point positions, according to the display positions corresponding to the first or second boundary points included in each triangular unit. Fig. 10 shows the result of fusing the first object region and the second object region in the second manner of step 2052. As shown in fig. 11, based on the second manner, the head region, the body region, and the background region are fused to obtain a complete fused image, that is, a target image including the target head region, the target body region, and the target background region is generated.
With reference to steps 204 to 206, as shown in fig. 12, in the embodiment of the present invention the first target positions of the first object region and the second object region are determined first; based on the first target positions, deformation processing is performed on the first image and the second image so that the first object region and the second object region are each displayed at the first target positions, as shown in the middle images of fig. 12; then, when generating the target image, the two intermediate images corresponding to the first image and the second image are fused based on pixel values to obtain the target image, that is, the rightmost image.
It should be noted that the first manner in step 205 is the same as the first manner adopted in step 206, and the second manner adopted in step 205 is the same as the second manner adopted in step 206. The first manner and the second manner are compared in table 1 below.
TABLE 1
(Table 1 is published as an image in the original document; it compares the first manner and the second manner in terms of overall effect, detail effect, computation amount, and abnormal cases, as summarized below.)
As can be seen from table 1 above, if details are not considered, the effect of the first manner is better. As shown in fig. 13, the upper half generates the target image in the first manner and the lower half in the second manner. Because the first manner generates the target image based on the weights of the non-key points, deformation weakens with distance from the key points and looks more natural; in the second manner, the deformation inside each triangle is linear, so some detail positions are over-deformed and do not conform to objective laws. On the other hand, in the second manner the display position of each non-key point depends only on the vertices of its own triangle, whereas in the first manner the display position of a non-key point is affected by a plurality of surrounding key points when the weight is determined from the key-point distances, which can make the degree of deformation inconsistent; hence the detail effect of the second manner is better. In addition, the second manner follows the OpenGL rendering principle and deforms per deformation unit; compared with the first manner, which must adjust every pixel point and therefore has a large computation amount, the second manner has a small computation amount, imposes low requirements on the computing capability of the terminal, and can be implemented well on a GPU. However, in the second manner, when the face region is extremely large, a triangle vertex may extend beyond the screen, resulting in an abnormal effect.
In addition, compared with the face-morphing scenario in the related art, which mainly adjusts the five sense organs within the head region and does not adjust the background image accordingly, the embodiment of the invention transitions both the overall position of the head and the five sense organs within the head region; positionally, the head gradually moves from its display position in the first image to its display position in the second image.
In a possible implementation scenario, the processes of steps 201 to 206 may also be executed by a server. In this case, the terminal may send an acquisition instruction to the server, where the acquisition instruction is used to instruct the server to return the target image based on the first image and the second image, and the terminal receives the target image sent by the server.
In the embodiment of the present invention, steps 204 to 206 are a specific implementation manner of the step "the terminal generates at least one target image according to the first object position and the second object position". These steps first determine the first target position based on the first object position and the second object position, and then assign pixel values based on the first target position, thereby generating the target image. In another possible implementation, the terminal may first fuse the first object region and the second object region in a preset image directly by pixel-value assignment based on the first object position and the second object position to obtain a target head region, and then adjust the position of the target head region based on the first target position determined from the first object position and the second object position; this is not specifically limited in the embodiment of the present invention. In yet another possible implementation manner, the terminal may directly generate the target head region and the target body region based on steps 204 and 205. The embodiment of the present invention is not particularly limited in this respect.
207. The terminal packages the first image, the at least one target image, and the second image into a target file according to the display order corresponding to the first image, the at least one target image, and the second image respectively.
The terminal may encapsulate, in the target file, the first image, the target number of target images, and the second image, together with display-order indication information corresponding to each of them, according to their respective display order. The target file may be a video file or a GIF animation file. The display-order indication information is used to indicate the display order of the first image, the second image, and each target image.
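As an illustration only, the GIF case of step 207 could be realized with Pillow roughly as follows; Pillow is an assumed dependency, and the patent does not prescribe any particular container format or library.

from PIL import Image

def package_transition(first, targets, second, path="transition.gif", ms=40):
    """Write the frames in display order; for a GIF the frame order itself
    carries the display-order indication described above. `first`, `targets`
    and `second` are PIL Images."""
    frames = [first, *targets, second]
    frames[0].save(path, save_all=True, append_images=frames[1:],
                   duration=ms, loop=0)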
It should be noted that step 207 is an optional step in the embodiment of the present invention: after executing step 206, the terminal may skip encapsulating the target file and directly execute step 208. Of course, the terminal may also sequentially perform generating the target images, determining the target file, and switching the display in the order of steps 206 to 208, which is not specifically limited in the embodiment of the present invention.
208. The terminal displays the at least one target image in the process of switching from the first image to the second image.
In the embodiment of the invention, when a switching instruction is received, the terminal displays the first image; when display of the first image ends, the terminal displays the target number of target images in sequence according to the display order of each target image; when display of the target number of target images ends, the terminal displays the second image. The switching instruction may be triggered by a play button in the application interface; alternatively, when the target images are generated, the terminal may display prompt information on the application interface asking the user whether to preview the image switching display process, and when the user's confirmation operation is received, the terminal receives the switching instruction.
In the embodiment of the invention, at least one target image is generated according to the first object position and the second object position, and the at least one target image is displayed during the switch from the first image to the second image, showing the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image. This visually forms the effect of one object moving toward and gradually turning into the other object, improving the transition effect of the image switching display process.
Fig. 14 is a schematic structural diagram of an image switching display device according to an embodiment of the present invention. Referring to fig. 14, the apparatus includes: an acquisition module 1401, a generation module 1402, and a display module 1403.
An acquisition module 1401 configured to acquire a first image and a second image as a switching target;
the obtaining module 1401 is further configured to obtain a first object position in the first image and a second object position in the second image, where the first object position is used to indicate a display position of an object in the first image, and the second object position is used to indicate a display position of an object in the second image;
a generating module 1402, configured to generate at least one target image according to the first object position and the second object position, the at least one target image being used to show an effect of the object in the first image moving from the first object position to the second object position and gradually changing into the object in the second image;
a display module 1403, configured to display the at least one target image during the process of switching from the first image to the second image.
Optionally, the generating module includes:
a determining unit for determining at least one first target position based on the first object position and the second object position, the at least one first target position being indicative of a position in the at least one target image for displaying a target head region and a target body region;
a generating unit, configured to generate the at least one target image based on the at least one first target position, the first object position, and the second object position.
Optionally, the determining unit is further configured to determine a target number of first target positions according to the target number, the first object position, and the second object position, where each target image corresponds to one first target position;
wherein the earlier a target image is in the display order, the closer its first target position is to the first object position, and the later a target image is in the display order, the closer its first target position is to the second object position.
Optionally, the determining unit is further configured to obtain a target number of first weights and a target number of second weights, where the first weight is the weight of the first image and the second weight is the weight of the second image; and determine a target number of first target positions between the first object position and the second object position according to the target number, the first object position, the second object position, the target number of first weights, and the target number of second weights, where each target image corresponds to one first weight and one second weight; the later a target image is in the display order, the smaller its first weight and the larger its second weight.
Optionally, the determining unit is further configured to obtain the target number of first weights and the target number of second weights according to the target number and the following formula I, where the first weight corresponding to each target image decreases uniformly and the second weight increases uniformly;
(Formula I is published as an image in the original document; per the surrounding definitions it gives each P(i) as uniformly decreasing and each Q(i) as uniformly increasing in i.)
wherein P(i) is used for representing the first weight, Q(i) is used for representing the second weight, N is used for representing the target number of target images between the first image and the second image, 0 < i ≤ N, i is used for representing the display order of the target image in the at least one target image, and i and N are positive integers.
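Because the closed form of formula I is published only as an image, the following sketch shows one linear schedule that satisfies all of the stated constraints (P(i) uniformly decreasing, Q(i) uniformly increasing, one weight pair per target image, 0 < i ≤ N); the exact expression in the patent may differ.

def weight_schedule(n):
    """Return the target number of first weights P(i) and second weights Q(i)
    for i = 1..n, changing in uniform steps (an assumed form)."""
    ps = [(n + 1 - i) / (n + 1) for i in range(1, n + 1)]
    qs = [i / (n + 1) for i in range(1, n + 1)]
    return ps, qs

# For n = 4: P = [0.8, 0.6, 0.4, 0.2], Q = [0.2, 0.4, 0.6, 0.8]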
Optionally, the determining unit is further configured to determine the positions of key points having the same key point label in the first object region in the first image and the second object region in the second image;
for each target image, determine the first target point position of each key point label in the target image according to the positions of the key points with that label, the display order of the target image in the at least one target image, the first weight of the first image, and the second weight of the second image.
Optionally, the key points include facial key points, or alternatively, facial key points and skeletal joint points.
Optionally, the generating unit is further configured to: for any one target image, determine a first display position of a first non-key point in the target image according to the first non-key point and the first target point position, where the first non-key point is a point in the first object region other than the first key point; determine a second display position of a second non-key point in the target image according to the second non-key point and the first target point position, where the second non-key point is a point in the second object region other than the second key point; and assign values to the first target point position, the first display position, and the second display position according to the pixel values of the points in the first object region and the second object region, the first weight of the first image, and the second weight of the second image, so as to generate the target head region and the target body region of the target image.
Optionally, the generating unit is further configured to implement any one of the following:
determining the weight of the first non-key point according to the distance between the first key point and the first non-key point, and determining the first display position of the first non-key point according to the weight of the first non-key point and the position of the first target point;
and determining a plurality of deformation units in the first object area by taking the first key point, a first boundary point on the boundary of the first object area and a second boundary point on the boundary of the first image as vertexes, and determining a first display position of a first non-key point included in each deformation unit according to the position of the first target point, the position of the first boundary point and the position of the second boundary point.
Optionally, the generating unit is further configured to assign a value to the first target point position according to the pixel value of each keypoint in the first object region, the first weight of the first image, and the second weight of the second image, by using the following formula four;
the formula four is as follows: m1[i][k]=S1[k]*P(i)+E1[k]*Q(i);
wherein M1[i][k] is the pixel value of the key point with key point index k in the ith target image, i is used for indicating the display order of the target image in the at least one target image, k is used for indicating the key point index, S1[k] is used for representing the pixel value of the first key point with key point index k in the first image, E1[k] is used for representing the pixel value of the second key point with key point index k in the second image, P(i) is the first weight of the first image, and Q(i) is used for representing the second weight;
and assigning a second target point position according to the pixel values of the non-key points in the first object area and the second object area, the first weight and the second weight, wherein the second target point is used for indicating the display positions of the first non-key point and the second non-key point in the target image.
Optionally, the generating module is further configured to send an acquisition instruction to a server, where the acquisition instruction is used to instruct to return the at least one target image based on the first image and the second image; and receiving the at least one target image sent by the server.
In the embodiment of the invention, at least one target image is generated according to the first object position and the second object position, and the at least one target image is displayed during the switch from the first image to the second image, showing the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image. This visually forms the effect of one object moving toward and gradually turning into the other object, improving the transition effect of the image switching display process.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the image switching display device provided in the above embodiment, only the division of the above functional modules is taken as an example for the image switching display, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the electronic device is divided into different functional modules to complete all or part of the above described functions. In addition, the image switching display device and the image switching display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 1500 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement the image switching display method provided by method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, touch screen display 1505, camera 1506, audio circuitry 1507, positioning assembly 1508, and power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, providing the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in still other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 for navigation or LBS (Location Based Service). The positioning component 1508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1509 is used to power the various components in terminal 1500. The power supply 1509 may be alternating current, direct current, disposable or rechargeable. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen display 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side bezel of terminal 1500 and/or underneath touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the touch display 1505, the processor 1501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of the display on touch screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the touch display 1505 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1516 detects that the distance gradually increases, the processor 1501 controls the touch display 1505 to switch from the dark-screen state back to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 16 is a schematic structural diagram of a server according to an embodiment of the present invention, where the server 1600 may generate relatively large differences due to different configurations or performances, and may include one or more processors (CPUs) 1601 and one or more memories 1602, where the memory 1602 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 1601 to implement the image switching display method provided by each method embodiment. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including instructions executable by a processor in a terminal to perform the image switching display method in the above-described embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (23)

1. An image switching display method, characterized in that the method comprises:
acquiring a first image and a second image as a switching target;
acquiring a first object position in the first image and a second object position in the second image, wherein the first object position is used for representing the display position of an object in the first image, and the second object position is used for representing the display position of the object in the second image;
determining at least one first target position for indicating a position in at least one target image for displaying a target head region and a target body region, based on the first object position and the second object position, the at least one first target position being at least one position between the first object position and the second object position;
generating the at least one target image based on the at least one first target position, the first object position and the second object position, the at least one target image being used for showing the effect that the object in the first image moves from the first object position to the second object position and gradually changes to the object in the second image;
displaying the at least one target image during a switch from the first image to the second image.
2. The method of claim 1, wherein determining at least one first target location based on the first object location and the second object location comprises:
determining a target number of first target positions according to the target number, the first object position and the second object position, wherein each target image corresponds to one first target position;
wherein the earlier a target image is in the display order, the closer its first target position is to the first object position, and the later a target image is in the display order, the closer its first target position is to the second object position.
3. The method of claim 2, wherein the determining a target number of first target positions according to the target number, the first object position and the second object position comprises:
acquiring a target number of first weights and a target number of second weights, wherein the first weight is the weight of the first image, and the second weight is the weight of the second image;
determining a target number of first target positions between the first object position and the second object position according to the target number, the first object position, the second object position, the target number of first weights and the target number of second weights, wherein each target image corresponds to one first weight and one second weight;
and the later a target image is in the display order, the smaller the first weight and the larger the second weight corresponding to the target image.
4. The method of claim 3, wherein obtaining the target number of first weights and the target number of second weights comprises:
acquiring the target number of first weights and the target number of second weights according to the target number and the following formula I, wherein the first weight corresponding to each target image decreases uniformly and the second weight increases uniformly;
the formula I is as follows:
(Formula I is published as an image in the original document.)
wherein P(i) is used for representing the first weight, Q(i) is used for representing the second weight, N is used for representing the target number of target images between the first image and the second image, 0 < i ≤ N, i is used for representing the display order of the target image in the at least one target image, and i and N are positive integers.
5. The method of claim 1, wherein determining at least one first target location based on the first object location and the second object location comprises:
determining the positions of key points having the same key point label in a first object region in the first image and a second object region in the second image;
for each target image, determining a first target point position of the key point label in the target image according to the position of the key point with the same key point label, the display sequence of the target image in the at least one target image, the first weight of the first image and the second weight of the second image.
6. The method of claim 5, wherein the keypoints comprise facial key points, or alternatively, facial key points and skeletal joint points.
7. The method of claim 5, wherein generating the at least one target image based on the at least one first target location, the first object location, and the second object location comprises:
for any target image, determining a first display position of a first non-key point in the target image according to the first non-key point and the first target point position, wherein the first non-key point is a point in the first object region other than the first key point;
determining a second display position of a second non-key point in the target image according to the second non-key point and the position of the first target point, wherein the second non-key point is a point except for the second key point in the second object area;
and assigning values to the first target position, the first display position and the second display position according to the pixel values of the points in the first object region and the second object region, the first weight of the first image and the second weight of the second image, so as to generate a target head region and a target body region of the target image.
8. The method according to claim 7, wherein the determining the first display position of the first non-keypoint in the target image according to the first non-keypoint and the first target point position comprises any one of the following implementation manners:
determining the weight of a first non-key point according to the distance between the first key point and the first non-key point, and determining a first display position of the first non-key point according to the weight of the first non-key point and the position of a first target point;
and determining a plurality of deformation units in the first object area by taking the first key point, a first boundary point on the boundary of the first object area and a second boundary point on the boundary of the first image as vertexes, and determining a first display position of a first non-key point included in each deformation unit according to the position of the first target point, the position of the first boundary point and the position of the second boundary point.
9. The method of claim 7, wherein assigning the first target position, the first display position, and the second display position based on pixel values of respective points in the first object region and the second object region, a first weight of the first image, and a second weight of the second image, and generating a target head region and a target body region of the target image comprises:
assigning a value to the position of the first target point according to the pixel value of each key point in the first object region, the first weight of the first image and the second weight of the second image by using a fourth formula;
the formula four is as follows: m1[i][k]=S1[k]*P(i)+E1[k]*Q(i);
wherein M1[i][k] is the pixel value of the key point with key point index k in the ith target image, i is used for indicating the display order of the target image in the at least one target image, k is used for indicating the key point index, S1[k] is used for representing the pixel value of the first key point with key point index k in said first image, E1[k] is used for representing the pixel value of the second key point with key point index k in said second image, P(i) is the first weight of the first image, and Q(i) is used for representing the second weight;
and assigning a value to a second target point position according to the pixel values of the non-key points in the first object area and the second object area, the first weight and the second weight, wherein the second target point is used for indicating the display positions of the first non-key point and the second non-key point in the target image.
10. The method of claim 1, wherein generating at least one target image from the first object location and the second object location comprises:
sending a retrieval instruction to a server, wherein the retrieval instruction is used for instructing to return the at least one target image based on the first image and the second image;
and receiving the at least one target image sent by the server.
11. The method of claim 1, wherein after generating at least one target image based on the first object location and the second object location, the method further comprises:
and packaging the first image, the at least one target image and the second image into a target file according to the display sequence corresponding to the first image, the at least one target image and the second image respectively.
12. An image switching display device, characterized in that the device comprises:
an acquisition module configured to acquire a first image and a second image as a switching target;
the acquiring module is further configured to acquire a first object position in the first image and a second object position in the second image, where the first object position is used to represent a display position of an object in the first image, and the second object position is used to represent a display position of an object in the second image;
a generating module for determining at least one first target position based on the first object position and the second object position, the at least one first target position being indicative of a position in at least one target image for displaying a target head region and a target body region, the at least one first target position being at least one position between the first object position and the second object position; generating the at least one target image based on the at least one first target position, the first object position and the second object position, the at least one target image being used for showing the effect that the object in the first image moves from the first object position to the second object position and gradually changes to the object in the second image;
a display module for displaying the at least one target image during a process of switching from the first image to the second image.
13. The apparatus of claim 12, wherein the generating module is configured to:
determining a target number of first target positions according to the target number, the first object position and the second object position, wherein each target image corresponds to one first target position;
wherein the earlier a target image is in the display order, the closer its first target position is to the first object position, and the later a target image is in the display order, the closer its first target position is to the second object position.
14. The apparatus of claim 13, wherein the generating module is configured to:
acquiring a target number of first weights and a target number of second weights, wherein the first weight is the weight of the first image, and the second weight is the weight of the second image;
determining a target number of first target positions between the first object position and the second object position according to the target number, the first object position, the second object position, the target number of first weights and the target number of second weights, wherein each target image corresponds to one first weight and one second weight;
and the later a target image is in the display order, the smaller the first weight and the larger the second weight corresponding to the target image.
15. The apparatus of claim 14, wherein the generating module is configured to:
acquiring the target number of first weights and the target number of second weights according to the target number and the following formula I, wherein the first weight corresponding to each target image decreases uniformly and the second weight increases uniformly;
the formula I is as follows:
(Formula I is published as an image in the original document.)
wherein P(i) is used for representing the first weight, Q(i) is used for representing the second weight, N is used for representing the target number of target images between the first image and the second image, 0 < i ≤ N, i is used for representing the display order of the target image in the at least one target image, and i and N are positive integers.
16. The apparatus of claim 12, wherein the generating module is configured to:
determining the positions of key points having the same key point label in a first object region in the first image and a second object region in the second image;
for each target image, determining a first target point position of the key point label in the target image according to the position of the key point with the same key point label, the display sequence of the target image in the at least one target image, the first weight of the first image and the second weight of the second image.
17. The device of claim 16, wherein the keypoints comprise facial key points, or alternatively, facial key points and skeletal joint points.
18. The apparatus of claim 16, wherein the generating module is configured to:
for any target image, determining a first display position of a first non-key point in the target image according to the first non-key point and the first target point position, wherein the first non-key point is a point in the first object region other than the first key point;
determining a second display position of a second non-key point in the target image according to the second non-key point and the position of the first target point, wherein the second non-key point is a point except for the second key point in the second object area;
and assigning values to the first target position, the first display position and the second display position according to the pixel values of the points in the first object region and the second object region, the first weight of the first image and the second weight of the second image, so as to generate a target head region and a target body region of the target image.
19. The apparatus of claim 18, wherein the generating module is configured to perform any one of the following:
determining the weight of a first non-key point according to the distance between the first key point and the first non-key point, and determining a first display position of the first non-key point according to the weight of the first non-key point and the position of a first target point;
and determining a plurality of deformation units in the first object area by taking the first key point, a first boundary point on the boundary of the first object area and a second boundary point on the boundary of the first image as vertexes, and determining a first display position of a first non-key point included in each deformation unit according to the position of the first target point, the position of the first boundary point and the position of the second boundary point.
20. The apparatus of claim 18, wherein the generating module is configured to:
assigning a value to the position of the first target point according to the pixel value of each key point in the first object region, the first weight of the first image and the second weight of the second image by using a fourth formula;
the formula four is as follows: m1[i][k]=S1[k]*P(i)+E1[k]*Q(i);
wherein M1[i][k] is the pixel value of the key point with key point index k in the ith target image, i is used for indicating the display order of the target image in the at least one target image, k is used for indicating the key point index, S1[k] is used for representing the pixel value of the first key point with key point index k in said first image, E1[k] is used for representing the pixel value of the second key point with key point index k in said second image, P(i) is the first weight of the first image, and Q(i) is used for representing the second weight;
and assigning a value to a second target point position according to the pixel values of the non-key points in the first object area and the second object area, the first weight and the second weight, wherein the second target point is used for indicating the display positions of the first non-key point and the second non-key point in the target image.
21. The apparatus of claim 12, wherein the generating module is configured to:
sending a retrieval instruction to a server, wherein the retrieval instruction instructs the server to return the at least one target image generated based on the first image and the second image;
and receiving the at least one target image sent by the server.
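Claim 21 offloads generation to a server. A minimal client-side sketch using only the Python standard library; the endpoint URL and the payload shape are purely hypothetical.

```python
import json
import urllib.request

def fetch_target_images(first_image_id, second_image_id,
                        url="https://example.com/transition"):  # hypothetical endpoint
    """Send the retrieval request and receive the generated target images."""
    payload = json.dumps({"first": first_image_id,
                          "second": second_image_id}).encode("utf-8")
    request = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)   # e.g. a list of encoded target images
```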
22. An electronic device, comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform the operations of the image switching display method according to any one of claims 1 to 11.
23. A computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to perform the operations of the image switching display method according to any one of claims 1 to 11.
CN201910224190.8A 2019-03-22 2019-03-22 Image switching display method and device, electronic equipment and storage medium Active CN109947338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910224190.8A CN109947338B (en) 2019-03-22 2019-03-22 Image switching display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109947338A CN109947338A (en) 2019-06-28
CN109947338B (en) 2021-08-10

Family

ID=67011005

Country Status (1)

Country Link
CN (1) CN109947338B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942501B (en) * 2019-11-27 2020-12-22 深圳追一科技有限公司 Virtual image switching method and device, electronic equipment and storage medium
CN111209050A (en) * 2020-01-10 2020-05-29 北京百度网讯科技有限公司 Method and device for switching working mode of electronic equipment
CN113973189B (en) * 2020-07-24 2022-12-16 荣耀终端有限公司 Display content switching method, device, terminal and storage medium
CN112887699B (en) * 2021-01-11 2023-04-18 京东方科技集团股份有限公司 Image display method and device
CN113018855B (en) * 2021-03-26 2022-07-01 完美世界(北京)软件科技发展有限公司 Action switching method and device for virtual role
US11741870B2 (en) * 2021-06-23 2023-08-29 Samsung Electronics Co., Ltd. Electronic device, method, and computer-readable storage medium for reducing afterimage in display area

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1681000A (en) * 2004-04-05 2005-10-12 精工爱普生株式会社 Dynamic cross fading method and apparatus
CN102449664A (en) * 2011-09-27 2012-05-09 华为技术有限公司 Gradual-change animation generating method and apparatus
CN108769361A (en) * 2018-04-03 2018-11-06 华为技术有限公司 A kind of control method and terminal of terminal wallpaper
CN109068053A (en) * 2018-07-27 2018-12-21 乐蜜有限公司 Image special effect methods of exhibiting, device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853438B (en) * 2012-11-29 2018-01-26 腾讯科技(深圳)有限公司 atlas picture switching method and browser

Similar Documents

Publication Publication Date Title
CN109978989B (en) Three-dimensional face model generation method, three-dimensional face model generation device, computer equipment and storage medium
CN109947338B (en) Image switching display method and device, electronic equipment and storage medium
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN109308727B (en) Virtual image model generation method and device and storage medium
CN110427110B (en) Live broadcast method and device and live broadcast server
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN110064200B (en) Object construction method and device based on virtual environment and readable storage medium
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
CN110599593B (en) Data synthesis method, device, equipment and storage medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN111028144B (en) Video face changing method and device and storage medium
CN112337105B (en) Virtual image generation method, device, terminal and storage medium
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
CN111723803B (en) Image processing method, device, equipment and storage medium
CN109547843B (en) Method and device for processing audio and video
CN112308103B (en) Method and device for generating training samples
CN112135191A (en) Video editing method, device, terminal and storage medium
CN113384880A (en) Virtual scene display method and device, computer equipment and storage medium
CN114741559A (en) Method, apparatus and storage medium for determining video cover
CN112396076A (en) License plate image generation method and device and computer storage medium
CN110312144B (en) Live broadcast method, device, terminal and storage medium
CN112235650A (en) Video processing method, device, terminal and storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN110335224B (en) Image processing method, image processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant