CN109947338A - Image switching display method, apparatus, electronic device and storage medium - Google Patents

Image switching display method, apparatus, electronic device and storage medium

Info

Publication number
CN109947338A
CN109947338A (application CN201910224190.8A)
Authority
CN
China
Prior art keywords
image
key point
weight
target
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910224190.8A
Other languages
Chinese (zh)
Other versions
CN109947338B (en)
Inventor
钱梦仁
沈珂轶
徐冬成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910224190.8A
Publication of CN109947338A
Application granted
Publication of CN109947338B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image switching display method, apparatus, electronic device and storage medium, belonging to the technical field of image processing. The method comprises: obtaining a first image and a second image as switching targets; obtaining a first object position in the first image and a second object position in the second image, the first object position indicating the display position of the object in the first image; generating at least one target image according to the first object position and the second object position, the at least one target image being used to show the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image; and displaying the at least one target image during the switch from the first image to the second image. This visually forms the effect of one object moving toward and gradually changing into another object, improving the transition effect of the image switching display.

Description

Image switching display method, apparatus, electronic device and storage medium
Technical field
The present invention relates to the technical field of image processing, and in particular to an image switching display method, apparatus, electronic device and storage medium.
Background technique
A video usually shows multiple scenes, and a video transition refers to switching playback from one scene to another during video display. A video file contains multiple images, each scene corresponding to one or more of them, so a video transition amounts to switching the display between images that correspond to different scenes. The images are therefore usually processed so that a good visual effect is achieved when the display switches between them.
In the related art, the image switching display process may proceed as follows: the terminal obtains a switch mode selected by the user, such as fade-out/fade-in, and applies transparency processing to the first image corresponding to the current scene and the second image corresponding to the next scene, for example adjusting the transparency of both images to 50%. When switching from the first image to the second image, the two images with 50% transparency are superimposed for display, so that visually the current scene gradually disappears while the next scene gradually appears.
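For reference, a minimal sketch of this related-art crossfade in Python with OpenCV, assuming two frames of equal size and type; the 50% transparency overlap described above reduces to a per-pixel weighted sum.

```python
import cv2
import numpy as np

def crossfade(first_img, second_img, alpha=0.5):
    """Blend two equally sized frames; alpha is the weight of the first image."""
    # cv2.addWeighted computes first*alpha + second*(1-alpha) per pixel.
    return cv2.addWeighted(first_img, alpha, second_img, 1.0 - alpha, 0)

# Example: a short fade-out/fade-in sequence between two frames.
# frames = [crossfade(img_a, img_b, a) for a in np.linspace(1.0, 0.0, 10)]
```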
The above process essentially superimposes two transparency-processed images to achieve the transition. However, when the two images are superimposed, their different scenes are displayed interleaved; for example, a face and a house may appear at the same position at the same time. The visual effect is poor, and so is the transition effect of this image switching display process.
Summary of the invention
Embodiments of the present invention provide an image switching display method, apparatus, electronic device and storage medium, which can solve the problem of the poor transition effect of the image switching display process. The technical solution is as follows:
In one aspect, an image switching display method is provided, the method comprising:
obtaining a first image and a second image as switching targets;
obtaining a first object position in the first image and a second object position in the second image, the first object position indicating the display position of the object in the first image, and the second object position indicating the display position of the object in the second image;
generating at least one target image according to the first object position and the second object position, the at least one target image being used to show the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image; and
displaying the at least one target image during the switch from the first image to the second image.
In another aspect, an image switching display apparatus is provided, the apparatus comprising:
an obtaining module, configured to obtain a first image and a second image as switching targets;
the obtaining module being further configured to obtain a first object position in the first image and a second object position in the second image, the first object position indicating the display position of the object in the first image, and the second object position indicating the display position of the object in the second image;
a generation module, configured to generate at least one target image according to the first object position and the second object position, the at least one target image being used to show the effect of the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image; and
a display module, configured to display the at least one target image during the switch from the first image to the second image.
In another aspect, an electronic device is provided, comprising one or more processors and one or more memories, the one or more memories storing at least one instruction that is loaded and executed by the one or more processors to implement the operations performed by the above image switching display method.
In another aspect, a computer-readable storage medium is provided, storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the above image switching display method.
The technical solutions provided in the embodiments of the present invention bring at least the following beneficial effects:
At least one target image is generated according to the first object position and the second object position, and the at least one target image is displayed during the switch from the first image to the second image, showing the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image. This visually forms the effect of one object moving toward and gradually turning into another object, improving the transition effect of the image switching display.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment of an image switching display method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an image switching display method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of key points of a head region according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of increase processes of the second weight according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of deformation units included in a head region according to an embodiment of the present invention;
Fig. 6 is a line-drawing schematic diagram of generating a target image according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an actual display interface for generating a target image according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the principle of determining a target position based on key points according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of first deformation units included in a first image and second deformation units according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of a target head region according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of an actual display interface for generating a target image according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a process of generating a target image according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of the actual display interfaces of target images generated respectively by two manners according to an embodiment of the present invention;
Fig. 14 is a structural block diagram of an image switching display apparatus according to an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
Fig. 16 is a schematic structural diagram of a server according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment of an image switching display method according to an embodiment of the present invention. Referring to Fig. 1, the implementation environment includes a terminal 101. A target application is installed on the terminal 101, and the terminal 101 can process images based on the target application.
The terminal 101 can obtain multiple images and switch the display between them. Among the multiple images, the first image includes a first object region and the second image includes a second object region. The terminal 101 can also display a target image during the process of switching from the first image to the second image. The terminal 101 can generate the target image based on the first object position of the first object region of the first image and the second object position of the second object region of the second image. The target image is used to show the object in the first image moving from the first object position to the second object position while gradually changing into the object in the second image. Thus, during the switch between the first image and the second image, the terminal 101 displays the target image, achieving a transition visual effect in which one object moves toward and gradually changes into another object at the moment of switching, and improving the visual effect of the image switching display. The first image and the second image may be images of a GIF (Graphics Interchange Format) animation file or of a video file.
It should be noted that the first object region and the second object region include a head region and a body region, where the body region refers to the body parts of the object other than the head. The body of the object may include multiple body parts, such as the neck, arms, trunk, legs and feet.
In one possible implementation environment, the environment includes a server 102, which is the background server of the target application and can process images. That is, the image switching display process performed by the terminal 101 above may also be performed by the server 102.
In another possible implementation environment, the environment may include both the terminal 101 and the server 102. The terminal 101 establishes a communication connection with the server 102 and can exchange data with the server 102 based on the target application. The terminal 101 may send the first image and the second image to the server 102; the server 102 generates the target image based on the first image and the second image and sends it to the terminal 101; the terminal 101 then displays the target image during the process of switching from the first image to the second image. Of course, the server 102 may also display the target image when the first image and the second image are switched.
Therefore, the above image switching display process may be implemented by the terminal 101, by the server 102, or through interaction between the terminal 101 and the server 102, which is not specifically limited in the embodiments of the present invention.
It should be noted that the target application may be an independent application program, or a plug-in installed in an independent application program. The terminal 101 may be a mobile phone terminal, a PAD (Portable Android Device, tablet computer) terminal, a computer terminal, or any other device on which the target application can be installed. The server 102 may be a server cluster or a single device. This is not specifically limited in the embodiments of the present invention.
Fig. 2 is a flowchart of an image switching display method according to an embodiment of the present invention. The method may be performed by a terminal or a server, or implemented through interaction between a server and a terminal; the embodiment of the present invention is described by taking a terminal as an example only. Referring to Fig. 2, the method comprises:
201. The terminal obtains a first image and a second image as switching targets.
In the embodiment of the present invention, the second image refers to the image to be displayed after the display of the first image ends. The terminal may obtain the first image and the second image input by the user.
The terminal may start the target application and implement the image switching display based on it. In one possible scenario, the terminal may be producing a video file or a GIF animation file for the user, and performs the image switching display process of the embodiment of the present invention during the file production. This step may then be: when a file generation instruction is received, the terminal obtains the user's multiple images and obtains the first image and the second image from them. The file generation instruction may be triggered by the user on the application interface of the target application; for example, the terminal receives the file generation instruction when it detects that a file generation button on the application interface is triggered.
The scenes shown by the first image and the second image may be different. The terminal may also identify the scenes of the multiple images according to their display order, and identify as the first image and the second image two images that are adjacent in display order and show different scenes. The first image and the second image may be two-dimensional images, three-dimensional images, or the like, which is not specifically limited in the embodiments of the present invention.
In another possible scenario, the terminal may obtain an existing file and obtain the first image and the second image from it. This process may be: when the terminal receives the file, it identifies the scenes of the multiple images in the file according to their display order, and identifies as the first image and the second image two images that are adjacent in display order and show different scenes.
202. The terminal performs image recognition on the first image and the second image, and performs step 203 when the first object region in the first image and the second object region in the second image are recognized.
The terminal may call a target algorithm to perform image recognition on the first image and the second image respectively, and determine whether the first image includes the first object region and whether the second image includes the second object region. Only when both the first object region of the first image and the second object region of the second image are detected does the terminal perform the subsequent step 203. The first object region and the second object region include a head region and a body region. When the first object region of the first image or the second object region of the second image is not detected, that is, the first image includes the first object region but the second image does not include the second object region, the second image includes the second object region but the first image does not include the first object region, or neither image includes an object region, the terminal does not perform the subsequent step 203 and ends the process.
Alternatively, when the first object region of the first image or the second object region of the second image is not detected, the terminal may repeatedly perform detection on the image in which no object region was detected, and perform step 203 once both the first object region of the first image and the second object region of the second image are detected; if the number of repetitions reaches a preset number and the first object region of the first image or the second object region of the second image is still not detected, the process ends.
The target algorithm may be configured as needed, which is not specifically limited in the embodiments of the present invention. For example, the detection may be based on a face detection library provided by the terminal system, or the target algorithm may be a face alignment detection algorithm, the DSFD (Dual Shot Face Detector) algorithm, or the like.
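For illustration only (the embodiment leaves the choice of target algorithm open), a sketch of the detection check in step 202 using OpenCV's bundled Haar cascade face detector; DSFD or a system face-detection library could be substituted.

```python
import cv2

# Haar cascade shipped with opencv-python; any face detector (e.g. DSFD) could be used instead.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_object_region(image):
    """Return True if at least one face (head region) is detected in the image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# Step 202: proceed to step 203 only when both images contain an object region.
# if contains_object_region(first_image) and contains_object_region(second_image): ...
```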
203. The terminal obtains the first object position in the first image and the second object position in the second image.
In this step, the first object position indicates the display position of the object in the first image, and the second object position indicates the display position of the object in the second image. The object may include a head and a body. In one possible embodiment, since the head is connected to the body, the terminal may use the head position to locate the display positions of the head region and the body region, so the first object position may include the position of the head region in the first object region, and correspondingly the second object position may also include the position of the head region in the second object region. In another possible embodiment, the first object position may include the positions of both the head region and the body region in the first object region. Accordingly, this step includes the following two implementations.
First implementation: the first object position includes the head position in the first object region. The terminal recognizes the head regions of the first image and the second image, and obtains the first head position in the first image and the second head position in the second image.
In the embodiment of the present invention, the position of the head region may include facial-feature positions. The terminal obtains the first facial-feature position of the first object region and the second facial-feature position of the second object region respectively. In one possible embodiment, the terminal may recognize the head regions of the first object region and the second object region respectively through a target detection algorithm, to obtain the first facial-feature position of the first face region and the second facial-feature position of the second face region. The first facial-feature position or the second facial-feature position may include the positions of the eyes, nose, eyebrows, mouth, head contour or ears. In another possible embodiment, the terminal may also obtain the first facial-feature position of a target facial feature in the first face region and the second facial-feature position of the target facial feature in the second face region. The target facial feature may include one or more of the eyes, nose, eyebrows, mouth, head contour or ears.
In one possible embodiment, the first object position and the second object position may be represented by pixel positions, which may be face key points of the face region. A face key point indicates the position of a facial feature in the face region, and may also indicate display characteristics of the facial feature such as its shape and size. The terminal may extract the first face key points of the first head region of the first image and the second face key points of the second head region of the second image, take the positions of the first face key points as the positions of the first key points, and take the positions of the second face key points as the positions of the second key points. Of course, each facial feature may correspond to multiple key points, whose positions may also indicate the characteristics of the facial feature, including but not limited to its shape, size and position, for example the size of the eyes, or the curvature and length of the eyebrows.
In one possible embodiment, the terminal may obtain the positions by calling a target detection algorithm, such as a face detection algorithm. In another possible embodiment, the terminal may encapsulate the execution logic of calling the head detection algorithm in a target interface; each time this step is performed, the terminal inputs the first image and the second image into the target interface, executes the logic of calling the target detection algorithm, and outputs the first object position of the first object region and the second object position of the second object region.
As shown in Fig. 3, the terminal recognizes the face region and obtains multiple key points of each facial-feature region. For example, the head region may include eight key points corresponding to the eyebrow region, and the positions of these eight key points characterize features such as the display position, shape and size of the eyebrow region.
Further, the terminal may obtain the pixel values of the first key points and the pixel values of the second key points respectively.
Second implementation: the first object position includes the head position and the body position in the first object region. The terminal recognizes the head regions and body regions of the first image and the second image, and obtains the first head position and the first body position in the first image and the second head position and the second body position in the second image.
In the embodiment of the present invention, the terminal may use the positions of the head and the body to indicate the display position of the first object region in the first image. The body region may include the multiple body parts shown in the image, such as the arms, neck, trunk, legs and feet. The terminal may recognize the face and the body through a target detection algorithm. In one possible embodiment, the terminal may also obtain the first body position of a target body part in the first body region, the first facial-feature position of a target facial feature in the first face region, the second body position of the target body part in the second body region, and the second facial-feature position of the target facial feature in the second face region. The target body part may be one or more of the multiple body parts shown in the image. The process of obtaining the first facial-feature position and the second facial-feature position is the same as in the first implementation above and is not repeated here.
In one possible embodiment, the terminal may represent the position of the first object region by pixel positions. The body includes multiple bones, which are connected into the skeleton of the body by multiple skeletal joint points, and the terminal may indicate the position of each body part in the body region by skeletal joint points. This step may then be: the terminal recognizes the head regions and body regions of the first image and the second image, extracts the first face key points in the first head region and the first skeletal joint points in the first body region of the first image, as well as the second face key points in the second head region and the second skeletal joint points in the second body region of the second image; the terminal takes the positions of the first face key points and the first skeletal joint points as the positions of the first key points, and the positions of the second face key points and the second skeletal joint points as the positions of the second key points. A skeletal joint point is an endpoint of one end of a bone or a connection point between two adjacent bones. The multiple bones are the bones constituting the body of a human or animal, for example the frame formed by the neck bones, trunk bones and limb bones, where the trunk bones may include arm bones, leg bones, abdominal bones and so on.
It should be noted that the face key points in the above process refer to points located in the facial-feature regions of the object's face, used to describe the characteristics of the facial features, for example the position, size and shape of the eyes, eyebrows, mouth and other features. The skeletal joint points refer to points located in the body region of the object, used to describe the characteristics of each body part in the body region, for example the position, size and shape of body parts such as the arms and legs.
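A minimal sketch of the labeled key-point representation assumed in the following steps, given face landmarks and skeletal joint points already produced by some detector; the labeling scheme is illustrative, not mandated by the embodiment.

```python
import numpy as np

def build_keypoints(face_landmarks, skeleton_joints):
    """Combine face key points and skeletal joint points into one labeled set.

    face_landmarks, skeleton_joints: arrays of shape (F, 2) and (B, 2) holding
    (x, y) pixel positions. The returned dict maps a key point label k to its
    position, so the same label refers to the same semantic point in both images.
    """
    keypoints = {}
    for k, (x, y) in enumerate(face_landmarks):       # labels 0..F-1: face key points
        keypoints[k] = np.array([x, y], dtype=np.float32)
    offset = len(face_landmarks)
    for j, (x, y) in enumerate(skeleton_joints):      # labels F..F+B-1: skeletal joint points
        keypoints[offset + j] = np.array([x, y], dtype=np.float32)
    return keypoints

# S = build_keypoints(face_pts_1, joints_1)   # first object position (first image)
# E = build_keypoints(face_pts_2, joints_2)   # second object position (second image)
```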
204. The terminal determines at least one first target position according to the first object position and the second object position.
The first target position indicates the positions used in the at least one target image to display the target head region and the target body region. In this step, the terminal may take at least one position between the first object position and the second object position as the at least one first target position. In one possible embodiment, the number of first target positions may be a target number, each first target position corresponding to one target image; the target number of target images are subsequently inserted between the first image and the second image for the image switching display. Step 204 may also include: the terminal determines the target number of first target positions according to the target number, the first object position and the second object position, each target image corresponding to one first target position. The earlier a target image is in the display order, the closer its first target position is to the first object position; the later a target image is in the display order, the closer its first target position is to the second object position.
In one possible embodiment, the terminal may also determine the target number of first target positions based on a first weight of the first image and a second weight of the second image. The process may include: the terminal obtains the target number of first weights and the target number of second weights, and determines, between the first object position and the second object position, the target number of first target positions according to the target number, the first object position, the second object position, and the target number of first weights and second weights, each target image corresponding to one first weight and one second weight. The first weight is the weight of the first image, and the second weight is the weight of the second image. It should be noted that as the display order of the target images moves later, the first weight corresponding to each target image gradually decreases and the second weight gradually increases.
In one possible embodiment, the target number of first target positions may transition uniformly from the first object region to the second object region. In that case, as the display order of the target images moves later, the first weight corresponding to each target image decreases at a constant target speed and the second weight increases at the constant target speed. For example, according to the target number of the at least one target image, the terminal may determine multiple evenly spaced positions between the first object position and the second object position as the multiple first target positions, where the multiple evenly spaced positions divide the distance from the first object position to the second object position into multiple equal segments, each segment bounded by two adjacent positions. The multiple evenly spaced positions indicate a uniform transition from the first object region to the second object region.
The terminal may then obtain the first weight and the second weight as follows: according to the target number and the following formula one, the terminal controls the first weight corresponding to each target image to decrease uniformly and the second weight to increase uniformly:
Formula one: P(i) = 1 - i/(N+1), Q(i) = i/(N+1);
where P(i) denotes the first weight, Q(i) denotes the second weight, N denotes the target number of target images between the first image and the second image, 0 < i ≤ N, i denotes the display order of the target image among the at least one target image, and i and N are positive integers.
It should be noted that when the first weight decreases at a constant target speed and the second weight increases at the constant target speed, as shown in formula one above, the target speed may be -1/(N+1); that is, in formula one the slope of P(i) = 1 - [1/(N+1)]*i is the target speed.
In another possible embodiment, the target number of first target positions may transition non-uniformly from the first object region to the second object region; for example, the first weight may decrease quickly while the second weight increases relatively slowly. As shown in Fig. 4, the two plots show the increase of the second weight. In both plots, the abscissa may represent the timestamp, which corresponds to the display order of the target image; for example, the timestamps of three target images are 0.25, 0.50 and 0.75, the time unit being seconds. The ordinate represents the magnitude of the second weight. As the timestamp of the target image increases, that is, as its display order moves later, the second weight becomes larger. Clearly, in the left plot the second weight increases at a constant speed and the first weight correspondingly decreases at a constant speed; in the right plot the second weight increases at a decreasing rate and the first weight correspondingly decreases at an increasing rate.
It should be noted that the first target position includes a target head position and a target body position. The target head position indicates the display position of the target head region in the target image, and the target body position indicates the display position of the target body region in the target image. The first object position may include the position of the head region, or it may include the positions of both the head region and the body region. The terminal may determine the first target position based on the position of the head region, or based on the positions of the head region and the body region respectively. Accordingly, this step may be implemented in the following two manners.
First manner: the terminal determines the target head position in the at least one first target position according to the first head position and the second head position.
The target head position refers to the position used in the target image to display the target head region. In the embodiment of the present invention, the display position of the target body region in the target image follows changes of the display position of the target head region. The first head position includes the positions of multiple first key points and the second head position includes the positions of multiple second key points, where the first key points and the second key points may be face key points of the head regions. In one possible embodiment, each first key point and each second key point carries a key point label; for example, the labels of 87 first key points are 1 to 87, and the labels of 87 second key points are also 1 to 87. The terminal may determine positions based on the key point labels. The terminal may determine the positions of key points with identical key point labels in the first object region and the second object region; for each target image, the terminal determines the first target point position of a key point label in that target image according to the positions of the key points with that identical label, the display order of the target image among the at least one target image, the first weight of the first image and the second weight of the second image. The key points may include face key points, or face key points and skeletal joint points.
It should be noted that, in the first manner, the first target point position indicates the display position, in the target image, of the first face key point with a given key point label in the first image and the second face key point with the identical label in the second image. The first weight indicates the weight of the first image relative to the target image, and the second weight indicates the weight of the second image relative to the target image.
In one possible embodiment, the terminal determines the key point position of a key point label in the target image by the following formula two, according to the positions of the key points with that identical label, the display order of the target image among the at least one target image, the first weight and the second weight:
Formula two: M[i][k] = S[k]*P(i) + E[k]*Q(i);
where M[i][k] denotes the key point position of the key point labeled k in the i-th target image, i denotes the display order of the current target image among the at least one target image, k denotes the key point label, S[k] denotes the position of the first key point labeled k in the first image, E[k] denotes the position of the second key point labeled k in the second image, P(i) is the first weight of the first image, and Q(i) is the second weight of the second image, that is, (1.0 - P(i)).
It should be noted that as the display order of a target image among the at least one target image moves later, the first weight of the first image gradually decreases and the second weight of the second image gradually increases. That is, the earlier the display order of a target image, the larger the first weight and the smaller the second weight, so for key points with an identical key point label, the key point position of that label in the target image is closer to the first key point position of that label in the first image; the later the display order, the smaller the first weight and the larger the second weight, so the key point position of that label in the target image is closer to the second key point position of that label in the second image. As the display order of the multiple target images moves later, the key point positions of identical labels in the target images thus gradually change from the first key point positions to the second key point positions, visually forming a gradual movement from the first object region to the second object region and a gradual transformation from the facial features of the first object region to those of the second object region.
When the first weight and the second weight change uniformly, in one possible embodiment the terminal may determine the key point position of the key point label in the target image by the following formula three, according to the positions of the key points with the identical key point label, the display order of the target image among the at least one target image, and the target number:
Formula three: M[i][k] = (S[k]*(N+1-i) + E[k]*i)/(N+1);
where, when the number of first target positions is the target number, the terminal determines, between the first key point position and the second key point position according to formula three, the target number of evenly spaced first target point positions.
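A sketch of formulas two and three, assuming S and E map each key point label to an (x, y) position (e.g. the numpy arrays from the earlier sketch); for uniform weights the two formulas coincide.

```python
def interpolate_keypoints(S, E, i, n):
    """Per-label first target point positions for the i-th target image.

    S[k], E[k]: (x, y) positions of the key point labeled k in the first and
    second images. The result M[k] moves from S[k] toward E[k] as i grows.
    """
    p = 1.0 - i / (n + 1.0)                       # first weight P(i)
    q = 1.0 - p                                   # second weight Q(i)
    return {k: S[k] * p + E[k] * q for k in S}    # M[i][k] = S[k]*P(i) + E[k]*Q(i)

# target_points_per_image = [interpolate_keypoints(S, E, i, N) for i in range(1, N + 1)]
```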
Second manner: the terminal determines the target head position in the at least one first target position according to the first head position and the second head position, and determines the target body position in the at least one first target position according to the first body position and the second body position.
The target head position refers to the position used in the target image to display the target head region, and the target body position refers to the position used in the target image to display the target body region. The terminal may determine the display positions of the target head region and the target body region in the target image separately. In the embodiment of the present invention, the first head position and the first body position include the positions of multiple first key points, the second head position and the second body position include the positions of multiple second key points, and the first key points and the second key points may include face key points and skeletal joint points. The terminal determines the positions of key points with identical key point labels in the first object region and the second object region; for each target image, the terminal determines the first target point position of a key point label in that target image according to the positions of the key points with that identical label, the display order of the target image among the at least one target image, the first weight of the first image and the second weight of the second image.
It should be noted that, in the second manner, the first target point position indicates the display position, in the target image, of the first face key point with a given key point label in the first image and the second face key point with the identical label in the second image, as well as the display position, in the target image, of the first skeletal joint point with a given key point label in the first image and the second skeletal joint point with the identical label in the second image.
It should be noted that the process of determining the first target point positions based on the first key points and the second key points in the second manner is the same as in the first manner above and is not repeated here.
205. The terminal generates the target head region and the target body region of the at least one target image based on the at least one first target position, the first object region and the second object region.
The head regions and body regions of the first image and the second image also include non-key points: a first non-key point is a point in the first object region other than the first key points, and a second non-key point is a point in the second object region other than the second key points. The first object region includes the first head region, or the first head region and the first body region; the second object region includes the second head region and the second body region. The terminal may generate the target head region and the target body region of the at least one target image according to the first target positions, the multiple first non-key points and the multiple second non-key points.
Accordingly, for any target image, the terminal may generate its target head region and target body region through the following steps 2051-2053.
2051. The terminal determines the first display position of the first non-key point in the target image according to the first non-key point and the first target point positions.
In this step, the terminal may determine the positional relationship between the first key points and the first non-key point, and determine the first display position according to this positional relationship and the first target point positions.
It should be noted that the positional relationship may be the distance between a first key point and the first non-key point; alternatively, the terminal may divide the first object region into multiple deformation regions based on the first key points, and the positional relationship may then be the relationship between these sub-regions and the first non-key point. Accordingly, step 2051 may include the following two implementations.
First implementation: the terminal determines the weight of the first non-key point according to the distance between the first key point and the first non-key point, and determines the first display position of the first non-key point according to the weight of the first non-key point and the first target point position.
The first display position indicates the display position of the first non-key point in the target image. The terminal obtains the distance between the first non-key point and the first key point and determines the weight of the first non-key point according to this distance; the weight indicates how far the position of the first non-key point is from the position of the first key point. The terminal determines the first display position of the first non-key point according to the position-change characteristics from the first key point to the first target point position and the weight of the first non-key point. The position-change characteristics from the first key point to the first target point position include but are not limited to the distance and direction from the position of the first key point to the first target point position.
In one possible embodiment, the terminal may obtain the product of the weight and the distance from the position of the first key point to the first target point position, and determine the first display position of the first non-key point according to the direction from the position of the first key point to the first target point position and this product.
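A sketch of this first implementation under an assumed inverse-distance weighting; the embodiment only requires the weight to reflect the distance from the non-key point to the key points, so the exact falloff used here is illustrative.

```python
import numpy as np

def warp_non_keypoint(point, src_keypoints, dst_keypoints, eps=1e-6):
    """Displace one non-key point following the key points around it.

    src_keypoints: (K, 2) array of first key points; dst_keypoints: (K, 2) array
    of their first target point positions. Closer key points contribute more.
    """
    diffs = src_keypoints - point                      # vectors toward each key point
    dists = np.linalg.norm(diffs, axis=1) + eps
    weights = 1.0 / dists
    weights /= weights.sum()                           # normalize the per-key-point weights
    displacement = dst_keypoints - src_keypoints       # movement of each key point
    return point + (weights[:, None] * displacement).sum(axis=0)
```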
Second implementation: the terminal determines multiple deformation units in the first object region with the first key points and the first boundary points on the boundary of the first object region as vertices, and determines the first display position of the first non-key point included in each deformation unit according to the first target point positions and the positions of the first boundary points.
The terminal divides the first object region into multiple first deformation units with the first key points and the first boundary points as vertices (they are called first deformation units here to distinguish them from the second deformation units of the background region below). The terminal determines the first key points included in each first deformation unit, and, taking each first deformation unit as a unit, determines the first display position of the first non-key point according to the position of the first non-key point, the first key points included in the first deformation unit and the first target point positions corresponding to those first key points. The first deformation unit may be a triangular unit. In one possible embodiment, the terminal may determine the first display position of the first non-key point based on a triangular affine linear transformation, according to the change characteristics between the first key points and the first target point positions.
In one possible embodiment, if the first key points include face key points, the terminal divides the first head region into multiple first deformation units; in another possible embodiment, if the first key points include face key points and skeletal joint points, the terminal divides the first head region and the first body region into multiple first deformation units.
As shown in Fig. 5, to divide the head region into multiple first deformation units, the first object region includes multiple first key points; the terminal may divide the face image into multiple triangular units, and determine the display position of the first non-key points in each triangular unit according to the positions of its three vertices and the first target point positions corresponding to those three vertices.
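A sketch of the second implementation as the familiar triangle-based affine warp, assuming the deformation units are obtained by Delaunay triangulation over the key points and boundary points; the triangulation scheme itself is an assumption, not stated by the embodiment.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_by_triangles(image, src_points, dst_points, out_shape):
    """Warp the object region triangle by triangle (first deformation units).

    src_points/dst_points: (K, 2) float32 arrays of key points plus boundary
    points in the source image and their target point positions.
    """
    out = np.zeros(out_shape, dtype=image.dtype)
    tris = Delaunay(src_points).simplices              # vertex indices of each triangle
    for tri in tris:
        src_tri = np.float32(src_points[tri])
        dst_tri = np.float32(dst_points[tri])
        m = cv2.getAffineTransform(src_tri, dst_tri)   # affine map for this deformation unit
        warped = cv2.warpAffine(image, m, (out_shape[1], out_shape[0]))
        mask = np.zeros(out_shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 1) # pixels covered by the target triangle
        out[mask == 1] = warped[mask == 1]
    return out
```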
2052. The terminal determines the second display position of the second non-key point in the target image according to the second non-key point and the first target point positions.
The terminal may obtain the positional relationship between the second key points and the second non-key point, and determine the second display position according to this positional relationship and the first target point positions. The implementation of this step is the same as that of step 2051 above and is not repeated here.
2053. The terminal assigns values to the first target positions, the first display positions and the second display positions according to the pixel values of the points in the first object region and the second object region, the first weight of the first image and the second weight of the second image, to generate the target head region and the target body region of the target image.
In this step, the terminal may assign values to the first target point positions according to the pixel values of the key points in the first object region and the second object region, the first weight of the first image and the second weight of the second image; and assign values to the second target point positions according to the pixel values of the non-key points in the first object region and the second object region, the first weight of the first image and the second weight of the second image, to obtain the target image. The second target point positions indicate the positions in the target image used to display the first non-key points and the second non-key points.
It should be noted that there may be multiple first target positions, each corresponding to one target image. The terminal determines the pixel values of the region at each first target position according to the display order of the target image, the pixel values of the first key points and the pixel values of the second key points. The earlier the display order of the target image of a first target position, the closer the pixel values of the key points in the region at that first target position are to the pixel values of the first key points; the later the display order, the closer they are to the pixel values of the second key points.
For the assignment of the first target point positions, the terminal may determine a first pixel value by the following formula four according to the pixel values of the first key points, the pixel values of the second key points, the first weight of the first image and the second weight of the second image, and assign the first pixel value as the pixel value of the first target point position:
Formula four: M1[i][k] = S1[k]*P(i) + E1[k]*Q(i);
where M1[i][k] denotes the pixel value displayed at the key point labeled k in the i-th target image, i denotes the display order of the current target image among the at least one target image, k denotes the key point label, S1[k] denotes the pixel value of the first key point labeled k in the first image, E1[k] denotes the pixel value of the second key point labeled k in the second image, P(i) is the first weight of the first image, and Q(i) denotes the second weight, that is, (1.0 - P(i)).
In one possible embodiment, the first weight and the second weight may change uniformly, that is, the transition from the first object region to the second object region is uniform. The terminal may then determine the first pixel value by the following formula five according to the pixel values of the first key points, the pixel values of the second key points, the first weight of the first image and the second weight of the second image, and assign the first pixel value as the pixel value of the first target point position:
Formula five: M1[i][k] = (S1[k]*(N+1-i) + E1[k]*i)/(N+1);
where N denotes the target number of target images between the first image and the second image, 0 < i ≤ N, and i and N are both positive integers.
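A sketch of formulas four and five for assigning pixel values, assuming corresponding pixel samples from both object regions are available at each target point position (e.g. after both regions have been warped to the same target point positions).

```python
import numpy as np

def blend_pixels(first_pixels, second_pixels, i, n):
    """Pixel values assigned at the target point positions of the i-th target image."""
    p = 1.0 - i / (n + 1.0)                      # first weight P(i)
    q = 1.0 - p                                  # second weight Q(i)
    blended = first_pixels.astype(np.float32) * p + second_pixels.astype(np.float32) * q
    return np.clip(blended, 0, 255).astype(np.uint8)
```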
For the assignment procedure of the second aiming spot, which can be according to the first display position of the first non-key point Whether it is overlapped with the second display position of the second non-key point, assignment is carried out to the display position of target image.The process can be with Are as follows: on any display position of the target image, when the non-key point from different images is not overlapped, which is respectively adopted The pixel value of corresponding first key point in the display position or the second non-key point carries out assignment;It that is to say, which can incite somebody to action The pixel value of the corresponding first non-key point in the display position or the second non-key point, is assigned a value of the pixel of the display position Value.On any display position of the target image, when non-key point from different images is overlapped, the terminal according to this first First non-key point and second corresponding to first weight of image and the second weight of second image and the display position The pixel value of non-key point carries out assignment to the display position.Wherein, available display position corresponding first of the terminal The pixel value of non-key point and the first product of the first weight obtain the pixel value of the corresponding second non-key point in the display position The corresponding pixel value of first second sum of products of sum of products is assigned a value of the display position with the second product of the second weight Pixel value.
It should be noted that it is corresponding to have partial pixel point when terminal generates target image, in target image Pixel value that is to say, there may be white space in the target image, which can be by the way of back mapping, the end The position according to the blank pixel o'clock in the first image is held, by blank pixel point in first image in the point of corresponding position Pixel value, be assigned a value of the pixel value of the blank pixel point.Certainly, the terminal also white space in the available target image In each blank pixel point and the first non-key point nearest apart from the blank pixel point pixel value, by the first non-key point Pixel value be assigned a value of the pixel value of the blank pixel point.
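A sketch of the nearest-point fallback for blank pixels, assuming a boolean mask of already-assigned pixels; the back-mapping alternative would instead sample the first image at each blank pixel's original position.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_blanks_nearest(target_img, filled_mask):
    """Fill blank pixels with the value of the nearest already-assigned pixel.

    filled_mask: boolean array, True where the target image already has a value.
    """
    # For every blank pixel, row/column indices of the nearest filled pixel.
    _, (iy, ix) = distance_transform_edt(~filled_mask, return_indices=True)
    out = target_img.copy()
    out[~filled_mask] = target_img[iy[~filled_mask], ix[~filled_mask]]
    return out
```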
As shown in Fig. 6, image a is the first image, image b is an intermediate image in which the first image is displayed according to the first target point positions, image c is the second image, and image d is an intermediate image in which the second image is displayed according to the first target point positions. In images b and d, the display positions of the first object region and the second object region in the target image are the first target positions, and the display positions of the first key points of the first object region and the second object region are the first target point positions. It can be seen that image a shows a head tilted to the right and image c shows an upright head; after displaying based on the first target positions, the heads of images a and c visually tilt toward each other, so that when images are subsequently generated, the head regions of the two users can be merged and displayed at the first target position, and the body regions change along with the tilt direction of the head regions. Fig. 7 is the actual interface diagram corresponding to Fig. 6, which shows more clearly the image change before and after displaying based on the first target positions. It should be noted that in the first manner the first target positions are determined mainly through the key points in the image; as shown in Fig. 8, Fig. 8 illustrates position determination using the first implementation of step 2051, where the display positions are determined based on the key points of the human figure, and the change process is shown visually from left to right.
In the embodiment of the present invention, the first image and the second image may also include background regions, in which case the background region of the target image may be generated through the following step 206; alternatively, the terminal may obtain the target image directly based on step 205.
206. The terminal generates the background region of the at least one target image based on the background regions of the first image and the second image.
Besides the head region, the first image and the second image may also include background regions. The first background region is the region of the first image other than the first object region and includes multiple first background points; the second background region is the region of the second image other than the second object region and includes multiple second background points. The terminal may generate the background region of the at least one target image according to the first target point positions, the positions of the multiple first background points and the positions of the multiple second background points.
Accordingly, for any target image, the terminal may generate its background region through the following steps 2061-2063.
2061, the terminal is according to of first background area in the first object position of the target image and first image One background positions determine third display position of the first background area in the target image.
Similarly with above-mentioned steps 2051, which can determine in the first image between the first key point and the first background dot Positional relationship, according between first key point and the first background dot positional relationship and the first object point position, determine The third display position, the positional relationship can between first key point and the first background dot distance distance.Alternatively, should First background area can also be divided multiple sub-districts according to each boundary point of the first subject area and the first image by terminal Domain determines the third display position based on multiple subregion.
Correspondingly, step 2061 may include the following two implementations.
In the first implementation, the terminal determines the weight of a first background point according to the distance between the first key point and that first background point, and determines the third display position of the first background point in the target image according to the weight of the first background point and the first target point position.
The first background point is a pixel in the first background area, and the third display position is used to indicate the display position of that first background point in the target image. The terminal obtains the distance between the first background point and the first key point and determines the weight of the first background point according to that distance. The terminal then determines the third display position of the first background point according to the change-in-position feature of the first key point toward its first target point position and the weight of the first background point. The change-in-position feature of the first key point toward the first target point position includes, but is not limited to, the distance and the direction from the position of the first key point to the first target point position.
In a possible embodiment, the terminal can obtain the product of the weight and the distance from the position of the first key point to the first target point position, and determine the third display position of the first background point according to that product and the direction from the position of the first key point to the first target point position.
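The following is a minimal sketch of this first implementation, assuming an exponential falloff for the distance-based weight (the exact weight function and the constant sigma are not specified in the text and are illustrative assumptions):

```python
import numpy as np

def background_point_target(bg_pos, key_pos, key_target_pos, sigma=80.0):
    """Sketch: the farther a background point is from the key point,
    the weaker the deformation it receives.

    bg_pos         -- (x, y) of the first background point in the first image
    key_pos        -- (x, y) of the first key point in the first image
    key_target_pos -- (x, y) of that key point's first target point position
    sigma          -- assumed falloff constant (not given in the text)
    """
    bg_pos = np.asarray(bg_pos, dtype=float)
    key_pos = np.asarray(key_pos, dtype=float)
    key_target_pos = np.asarray(key_target_pos, dtype=float)

    # Weight of the background point: decays with its distance to the key point.
    dist = np.linalg.norm(bg_pos - key_pos)
    weight = np.exp(-dist / sigma)

    # Change-in-position feature of the key point: its displacement (distance
    # and direction) from the original position to the first target point position.
    displacement = key_target_pos - key_pos

    # Third display position: original position shifted by the weighted displacement.
    return bg_pos + weight * displacement
```

When several key points are present, one natural extension consistent with the later discussion of Fig. 13 is to sum the weighted displacements contributed by each surrounding key point.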
In the second implementation, the terminal determines multiple second deformation units of the first background area with the first boundary points and the second boundary points as vertices, and determines the third display position of the first background point in the target image according to the position of the first background point included in each second deformation unit, the positions of the first boundary points and the positions of the second boundary points.
The first boundary point is a pixel on the boundary of the first subject area, and the second boundary point is a pixel on the boundary of the first image. The second boundary points may include the image vertices on the image boundary of the first image and the midpoint of each image boundary. The first boundary points may include the region vertices on the region boundary of the first subject area and, of course, may also include the midpoint of each region boundary; the embodiment of the present invention does not specifically limit this.
The terminal divides the first background area into multiple second deformation units with the first boundary points and the second boundary points as vertices. Taking each second deformation unit as a unit, the terminal determines the third display position of the first background point according to the position of the first background point in the first image, the positions of the first boundary points and second boundary points included in that second deformation unit, and the corresponding display positions of those first boundary points and second boundary points in the target image. It should be noted that a second boundary point is an image vertex or midpoint on the boundary of the first image, and its display position in the target image may be the corresponding image vertex or midpoint on the boundary of the target image.
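A minimal sketch of this second implementation, assuming the deformation units are triangles and that a point inside a unit keeps its barycentric coordinates when the unit's vertices move to their display positions (a standard linear mapping; the text does not fix the exact interpolation rule):

```python
import numpy as np

def map_point_in_triangle(p, src_tri, dst_tri):
    """Map a background point from a source triangle (in the first image)
    to the corresponding triangle in the target image.

    p       -- (x, y) of the background point in the first image
    src_tri -- 3x2 array: deformation-unit vertices in the first image
    dst_tri -- 3x2 array: the same vertices at their display positions
               in the target image
    """
    src = np.asarray(src_tri, dtype=float)
    dst = np.asarray(dst_tri, dtype=float)
    p = np.asarray(p, dtype=float)

    # Barycentric coordinates of p with respect to the source triangle.
    a, b, c = src
    basis = np.column_stack((b - a, c - a))   # 2x2 basis spanned by the triangle
    u, v = np.linalg.solve(basis, p - a)      # p = a + u*(b - a) + v*(c - a)

    # The same coordinates applied to the target triangle give the
    # third display position of the background point.
    a2, b2, c2 = dst
    return a2 + u * (b2 - a2) + v * (c2 - a2)
```

This per-unit linear mapping is what makes the second way easy to run on the GPU, as discussed below in connection with OpenGL rendering.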
2062, the terminal determines a fourth display position of the second background area in the target image according to the first target position of the target image and the position of the second background area in the second image;
The implementation of this step is similar to the process of step 2061 above, and details are not repeated here.
2063, the terminal assigns values to the third display positions and the fourth display positions according to the pixel value of each point in the first background area and the second background area, the first weight of the first image and the second weight of the second image, to generate the target background area of the target image.
In this step, the third display positions include the corresponding display positions of the multiple first background points in the target image. For the assignment of a third display position, the terminal can assign values to the third display position and the fourth display position of the target image according to whether the display position of a first background point in the target image coincides with the display position of a second background point in the target image. This process is similar to the assignment of the second target point positions in step 2053 above, and details are not repeated here.
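A sketch of one plausible reading of this assignment rule (the text only says the assignment depends on whether the two display positions coincide; the overlap handling below is an assumption):

```python
import numpy as np

def assign_background_pixel(dst, pos1, val1, pos2, val2, p_i, q_i):
    """Fill one display position of the target background area.

    dst        -- target image as an HxWx3 array being filled in
    pos1, val1 -- third display position (x, y) and pixel value of a first
                  background point, or None if no such point lands here
    pos2, val2 -- fourth display position (x, y) and pixel value of a second
                  background point, or None if no such point lands here
    p_i, q_i   -- first weight of the first image and second weight of the
                  second image for this target image
    """
    if pos1 is not None and pos2 is not None and pos1 == pos2:
        # Overlapping positions: blend the two pixel values with the image weights.
        x, y = pos1
        dst[y, x] = p_i * np.asarray(val1, dtype=float) + q_i * np.asarray(val2, dtype=float)
    elif pos1 is not None:
        x, y = pos1
        dst[y, x] = val1
    elif pos2 is not None:
        x, y = pos2
        dst[y, x] = val2
```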
As shown in Fig. 9, the terminal can, based on multiple triangular units, determine the display position of each background point inside each triangular unit according to the corresponding display positions of the first boundary points or second boundary points included in that unit, that is, the third display positions and the fourth display positions. As shown in Fig. 10, Fig. 10 shows the result of fusing the first subject area and the second subject area using the second way in step 2052. As shown in Fig. 11, based on the second way, the head regions, the body regions and the background areas are fused to obtain a complete fused image; that is, a target image including the target head region, the target body region and the background area is generated.
Combining the processes of steps 204 to 206, as shown in Fig. 12, the embodiment of the present invention actually first determines the first target positions of the first subject area and the second subject area, performs deformation processing on the first image and the second image based on the first target positions so that the first subject area and the second subject area are each displayed at the first target position, obtaining the intermediate images shown in the middle of Fig. 12, and then, in the process of generating the target image, fuses the two intermediate images corresponding to the first image and the second image based on pixel values to obtain the target image, that is, the image on the far right.
It should be noted that the first way used in step 205 and the first way used in step 206 are identical in principle, and the second way used in step 205 and the second way used in step 206 are likewise identical in principle. The first way and the second way are compared in Table 1 below.
Table 1
As can be seen from Table 1, if details are not considered, the effect of the first way is better. As shown in Fig. 13, the upper half corresponds to a target image generated using the first way, and the lower half corresponds to a target image generated using the second way. Because the first way generates the target image based on the weights of the non-key points, compared with the triangle-based linear deformation used in the second way, in which the deformation inside each triangle is linear and some detail positions are deformed excessively in a way inconsistent with objective laws, the first way achieves the effect that the farther a point is, the weaker its deformation, which looks more natural. However, because the second way performs linear deformation based on triangles, the display position of each non-key point is not affected by the positions of multiple surrounding key points; as shown in Fig. 13, compared with the first way, in which determining weights from the distances between key points and non-key points can cause the display position of a non-key point to be influenced by multiple surrounding key points and lead to inconsistent degrees of deformation, the second way gives better detail effects. Further, because the second way uses the principle of OpenGL rendering and performs deformation with each deformation unit as one unit, its computation is much smaller than that of the first way, which needs to adjust each pixel individually; the requirement on the computing capability of the terminal is relatively low, and it can be implemented well on the GPU. However, in the second way, when the face region is especially large, a triangle vertex can exceed the picture, causing an abnormal effect.
In addition, compared with the cutout face-changing scenario in the related art, the cutout face-changing in the related art mainly adjusts between the faces inside the head regions without similarly adjusting the background, whereas in the embodiment of the present invention both the overall position of the head and the face inside the head region undergo a transitional change; in terms of position, the object gradually moves from its display position in the first image to its display position in the second image.
In a possible implementation scenario, the processes of steps 201-206 can also be executed by a server. In that case, in the embodiment of the present invention, the terminal can send an acquisition instruction to the server, the acquisition instruction being used to instruct the server to return the target image based on the first image and the second image, and the terminal receives the target image sent by the server.
In the embodiment of the present invention, steps 204-206 above are one specific implementation of the step "the terminal generates at least one target image according to the first object position and the second object position". The above steps in fact first determine the first target position based on the first object position and the second object position, and then perform pixel value assignment based on the first target position to generate the target image. In another possible embodiment, the terminal can also directly, based on the first object position and the second object position, fuse the first subject area and the second subject area in a preset image through the pixel-value-based assignment to obtain the target head region, and then determine the first target position based on the first object position and the second object position and adjust the position of the target head region accordingly; the embodiment of the present invention does not specifically limit this. Steps 204-206 above actually generate the target head region, the target body region and the background area in the target image to obtain the target image; in another possible embodiment, the terminal can also obtain the target image directly based on steps 204-205, generating only the target head region and the target body region. The embodiment of the present invention does not specifically limit this.
207, the terminal encapsulates the first image, the target images and the second image into a target file according to the display order corresponding to the first image, the target images and the second image.
The terminal can also encapsulate the first image, the target number of target images and the second image, together with display order indication information, into the target file according to the display order corresponding to the first image, the target number of target images and the second image. The target file may be a video file, or the target file may be a GIF animation file. The display order indication information is used to indicate the display order of the first image, the second image and the target images.
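As a small sketch of this packaging step only (the patent does not prescribe a container or library; imageio and the per-frame duration are assumed here for illustration, and a video container would follow the same ordering):

```python
import imageio

def package_target_file(first_image, target_images, second_image, path="transition.gif"):
    """Encapsulate the first image, the target images in their display order,
    and the second image into a target file (GIF animation in this sketch).
    """
    frames = [first_image, *target_images, second_image]
    imageio.mimsave(path, frames, duration=0.08)  # duration per frame is an assumed value
    return path
```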
It should be noted that step 207 is an optional step of the embodiment of the present invention. That is, after executing step 206, the terminal may skip the process of encapsulating into the target file and directly execute step 208; of course, the terminal may also sequentially perform, in the order of steps 206-208, the processes of generating the target images, determining the target file and switching the display. The embodiment of the present invention does not specifically limit this.
208, the terminal displays the at least one target image during the process of switching from the first image to the second image.
In the embodiment of the present invention, when a switching instruction is received, the terminal displays the first image; when the display of the first image ends, the terminal displays the target number of target images in sequence according to the display order of each target image among them; when the display of the target number of target images ends, the terminal displays the second image. The play instruction can be triggered based on a play button in the application interface; alternatively, when generating the target images, the terminal can also display a prompt message on the application interface, the prompt message being used to ask the user whether to preview the image switching display process, and when the user's confirmation operation is received, the terminal receives the play instruction.
In the embodiment of the present invention, at least one target image is generated according to the first object position and the second object position, and during the process of switching the display from the first image to the second image, the at least one target image is displayed to show the effect that the object in the first image moves from the first object position to the second object position and gradually changes into the object in the second image. This visually forms the effect of one object moving toward and gradually turning into another object, improving the transition effect in the image switching display process.
Figure 14 is a structural schematic diagram of an image switching display apparatus provided by an embodiment of the present invention. Referring to Figure 14, the apparatus includes: an obtaining module 1401, a generation module 1402 and a display module 1403.
The obtaining module 1401 is configured to obtain a first image and a second image serving as the switching target;
The obtaining module 1401 is further configured to obtain a first object position in the first image and a second object position in the second image, the first object position being used to indicate the display position of the object in the first image, and the second object position being used to indicate the display position of the object in the second image;
The generation module 1402 is configured to generate at least one target image according to the first object position and the second object position, the at least one target image being used to show the effect that the object in the first image moves from the first object position to the second object position and gradually changes into the object in the second image;
The display module 1403 is configured to display the at least one target image during the process of switching from the first image to the second image.
Optionally, the generation module includes:
a determination unit, configured to determine at least one first target position according to the first object position and the second object position, the at least one first target position being used to indicate the position in the at least one target image for displaying the target head region and the target body region;
a generation unit, configured to generate the at least one target image based on the at least one first target position, the first object position and the second object position.
Optionally, the determination unit is further configured to determine a target number of first target positions according to the target number, the first object position and the second object position, each target image corresponding to one first target position;
wherein the earlier the display order of a target image, the closer its first target position is to the first object position, and the later the display order of a target image, the closer its first target position is to the second object position.
Optionally, the determination unit is further configured to obtain a target number of first weights and a target number of second weights, the first weight being the weight of the first image and the second weight being the weight of the second image; and determine the target number of first target positions between the first object position and the second object position according to the target number, the first object position, the second object position, the target number of first weights and the target number of second weights, each target image corresponding to one first weight and one second weight; as the display order of the target images becomes later, the first weight corresponding to each target image gradually decreases and the second weight gradually increases.
Optionally, the determination unit is further configured to obtain the target number of first weights and the target number of second weights according to the target number and formula one below, the first weight corresponding to each target image decreasing uniformly and the second weight increasing uniformly;
wherein P(i) is used to indicate the first weight, Q(i) is used to indicate the second weight, N is used to indicate the target number of target images between the first image and the second image, 0 < i ≤ N, i is used to indicate the display order of the target image in the at least one target image, and i and N are positive integers.
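Formula one itself is not reproduced in this text; the sketch below assumes a simple linear schedule that satisfies the stated behaviour (P(i) decreases uniformly, Q(i) increases uniformly, and the two weights are complementary), which is only one possible choice:

```python
def image_weights(i, n):
    """Weights for the i-th of N target images (1 <= i <= N).

    Returns (P(i), Q(i)): the first weight of the first image and the
    second weight of the second image under an assumed linear schedule.
    """
    q = i / (n + 1)   # second weight Q(i): grows uniformly with display order
    p = 1.0 - q       # first weight P(i): shrinks uniformly with display order
    return p, q
```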
Optionally, the determination unit is further configured to determine the positions of the key points in the first subject area in the first image and the second subject area in the second image that have the same key point label;
and, for each target image, determine the first target point position of that key point label in the target image according to the positions of the key points with the same key point label, the display order of the target image in the at least one target image, the first weight of the first image and the second weight of the second image.
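A minimal sketch of how such a first target point position could be computed; combining the two key point positions with the image weights is an assumption made here by analogy with formula four used below for pixel values, not a rule stated at this point in the text:

```python
def first_target_point_position(s_k, e_k, p_i, q_i):
    """Interpolate the position of the key point labelled k for the i-th target image.

    s_k -- (x, y) of the key point labelled k in the first image
    e_k -- (x, y) of the key point labelled k in the second image
    p_i -- first weight P(i) of the first image
    q_i -- second weight Q(i) of the second image
    """
    return (s_k[0] * p_i + e_k[0] * q_i,
            s_k[1] * p_i + e_k[1] * q_i)
```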
Optionally, the key points include face key points, or the key points include face key points and skeletal joint points.
Optionally, the generation unit is further configured to, for any one target image, determine a first display position of a first non-key point in the target image according to the first non-key point and the first target point position, the first non-key point being a point in the first subject area other than the first key point; determine a second display position of a second non-key point in the target image according to the second non-key point and the first target point position, the second non-key point being a point in the second subject area other than the second key point; and assign values to the first target point position, the first display position and the second display position according to the pixel value of each point in the first subject area and the second subject area, the first weight of the first image and the second weight of the second image, to generate the target head region and the target body region of the target image.
Optionally, the generation unit is further configured to use either of the following implementations:
determining the weight of the first non-key point according to the distance between the first key point and the first non-key point, and determining the first display position of the first non-key point according to the weight of the first non-key point and the first target point position;
determining multiple deformation units in the first subject area with the first key point, the first boundary points on the boundary of the first subject area and the second boundary points on the boundary of the first image as vertices, and determining the first display position of the first non-key point included in each deformation unit according to the first target point position, the positions of the first boundary points and the positions of the second boundary points.
Optionally, the generation unit is further configured to assign a value to the first target point position through formula four below according to the pixel value of each key point in the first subject area, the first weight of the first image and the second weight of the second image;
Formula four: M1[i][k] = S1[k]*P(i) + E1[k]*Q(i);
wherein M1[i][k] is used to indicate the pixel value of the key point labeled k in the i-th target image, i is used to indicate the display order of the target image in the at least one target image, k is used to indicate the key point label, S1[k] is used to indicate the pixel value of the first key point labeled k in the first image, E1[k] is used to indicate the pixel value of the second key point labeled k in the second image, P(i) is the first weight of the first image, and Q(i) is used to indicate the second weight;
and to assign values to the second target point positions according to the pixel values of the non-key points in the first subject area and the second subject area, the first weight and the second weight, the second target point positions being used to indicate the display positions of the first non-key point and the second non-key point in the target image.
Optionally, the generation module is further configured to send an acquisition instruction to a server, the acquisition instruction being used to instruct the server to return the at least one target image based on the first image and the second image, and to receive the at least one target image sent by the server.
In the embodiment of the present invention, at least one target image is generated according to the first object position and the second object position, and during the process of switching the display from the first image to the second image, the at least one target image is displayed to show the effect that the object in the first image moves from the first object position to the second object position and gradually changes into the object in the second image. This visually forms the effect of one object moving toward and gradually turning into another object, improving the transition effect in the image switching display process.
All of the above optional technical solutions can be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
It should be understood that when the image switching display apparatus provided by the above embodiment performs image switching display, the division into the above functional modules is merely used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the electronic device can be divided into different functional modules to complete all or part of the functions described above. In addition, the image switching display apparatus provided by the above embodiment and the image switching display method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
Figure 15 is a structural schematic diagram of a terminal provided by an embodiment of the present invention. The terminal 1500 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop or a desktop computer. The terminal 1500 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal or other names.
In general, terminal 1500 includes: processor 1501 and memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices and flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1502 is used to store at least one instruction, and the at least one instruction is executed by the processor 1501 to implement the image switching display method provided by the method embodiments of the present application.
In some embodiments, the terminal 1500 optionally further includes a peripheral device interface 1503 and at least one peripheral device. The processor 1501, the memory 1502 and the peripheral device interface 1503 can be connected through a bus or a signal line. Each peripheral device can be connected to the peripheral device interface 1503 through a bus, a signal line or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1504, a touch display screen 1505, a camera 1506, an audio circuit 1507, a positioning component 1508 and a power supply 1509.
The peripheral device interface 1503 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, the memory 1502 and the peripheral device interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502 and the peripheral device interface 1503 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 1504 converts electric signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electric signals. Optionally, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1504 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include circuits related to NFC (Near Field Communication), which is not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to acquire touch signals on or above its surface. The touch signal can be input to the processor 1501 as a control signal for processing. At this time, the display screen 1505 can also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, arranged on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, respectively arranged on different surfaces of the terminal 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display screen arranged on a curved surface or a folded surface of the terminal 1500. The display screen 1505 can even be arranged in a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 1505 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1506 may also include a flash. The flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1507 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electric signals and input them to the processor 1501 for processing, or input them to the radio frequency circuit 1504 to realize voice communication. For the purpose of stereo acquisition or noise reduction, there may be multiple microphones, respectively arranged at different parts of the terminal 1500. The microphone can also be an array microphone or an omnidirectional acquisition microphone. The speaker is used to convert electric signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electric signals into sound waves audible to humans, but can also convert electric signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 to implement navigation or LBS (Location Based Service). The positioning component 1508 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
The power supply 1509 is used to supply power to the various components in the terminal 1500. The power supply 1509 can be an alternating current, a direct current, a disposable battery or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery can support wired charging or wireless charging. The rechargeable battery can also be used to support fast charging technology.
In some embodiments, the terminal 1500 further includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to, an acceleration sensor 1511, a gyroscope sensor 1512, a pressure sensor 1513, a fingerprint sensor 1514, an optical sensor 1515 and a proximity sensor 1516.
The acceleration sensor 1511 can detect the magnitudes of acceleration on the three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1501 can control the touch display screen 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 can also be used to collect motion data of a game or of the user.
The gyroscope sensor 1512 can detect the body direction and rotation angle of the terminal 1500, and can cooperate with the acceleration sensor 1511 to collect the user's 3D actions on the terminal 1500. According to the data collected by the gyroscope sensor 1512, the processor 1501 can implement the following functions: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
The pressure sensor 1513 can be arranged on the side frame of the terminal 1500 and/or the lower layer of the touch display screen 1505. When the pressure sensor 1513 is arranged on the side frame of the terminal 1500, it can detect the user's grip signal on the terminal 1500, and the processor 1501 performs left/right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is arranged in the lower layer of the touch display screen 1505, the processor 1501 controls the operable controls on the UI according to the user's pressure operations on the touch display screen 1505. The operable controls include at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is used to collect the user's fingerprint, and the processor 1501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1501 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1514 can be arranged on the front, back or side of the terminal 1500. When a physical button or a manufacturer logo is provided on the terminal 1500, the fingerprint sensor 1514 can be integrated with the physical button or the manufacturer logo.
The optical sensor 1515 is used to collect the ambient light intensity. In one embodiment, the processor 1501 can control the display brightness of the touch display screen 1505 according to the ambient light intensity collected by the optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is decreased. In another embodiment, the processor 1501 can also dynamically adjust the shooting parameters of the camera assembly 1506 according to the ambient light intensity collected by the optical sensor 1515.
The proximity sensor 1516, also called a distance sensor, is usually arranged on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front of the terminal 1500 gradually decreases, the processor 1501 controls the touch display screen 1505 to switch from the screen-on state to the screen-off state; when the proximity sensor 1516 detects that the distance between the user and the front of the terminal 1500 gradually increases, the processor 1501 controls the touch display screen 1505 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in Figure 15 does not constitute a limitation on the terminal 1500, which may include more or fewer components than those shown, or combine certain components, or adopt a different component arrangement.
Figure 16 is a structural schematic diagram of a server provided by an embodiment of the present invention. The server 1600 may vary considerably due to differences in configuration or performance, and may include one or more processors (central processing units, CPU) 1601 and one or more memories 1602, where at least one instruction is stored in the memory 1602, and the at least one instruction is loaded and executed by the processor 1601 to implement the image switching display method provided by each of the above method embodiments. Of course, the server can also have components such as a wired or wireless network interface, a keyboard and an input/output interface for input and output, and can also include other components for realizing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions, where the above instructions can be executed by the processor in the terminal to complete the image switching display method in the above embodiments. For example, the computer-readable storage medium can be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be completed by hardware, or can be completed by a program instructing related hardware. The program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (15)

1. An image switching display method, characterized in that the method comprises:
obtaining a first image and a second image serving as a switching target;
obtaining a first object position in the first image and a second object position in the second image, the first object position being used to indicate the display position of an object in the first image, and the second object position being used to indicate the display position of an object in the second image;
generating at least one target image according to the first object position and the second object position, the at least one target image being used to show the effect that the object in the first image moves from the first object position to the second object position and gradually changes into the object in the second image;
displaying the at least one target image during the process of switching from the first image to the second image.
2. The method according to claim 1, characterized in that the generating at least one target image according to the first object position and the second object position comprises:
determining at least one first target position according to the first object position and the second object position, the at least one first target position being used to indicate the position in the at least one target image for displaying a target head region and a target body region;
generating the at least one target image based on the at least one first target position, the first object position and the second object position.
3. The method according to claim 2, characterized in that the determining at least one first target position according to the first object position of the first subject area and the second object position of the second subject area comprises:
determining a target number of first target positions according to a target number, the first object position and the second object position, each target image corresponding to one first target position;
wherein the earlier the display order of a target image, the closer its first target position is to the first object position, and the later the display order of a target image, the closer its first target position is to the second object position.
4. The method according to claim 3, characterized in that the determining a target number of first target positions according to the target number, the first object position and the second object position comprises:
obtaining a target number of first weights and a target number of second weights, the first weight being the weight of the first image and the second weight being the weight of the second image;
determining the target number of first target positions between the first object position and the second object position according to the target number, the first object position, the second object position, the target number of first weights and the target number of second weights, each target image corresponding to one first weight and one second weight;
wherein as the display order of the target images becomes later, the first weight corresponding to each target image gradually decreases and the second weight gradually increases.
5. The method according to claim 4, characterized in that the obtaining a target number of first weights and a target number of second weights comprises:
obtaining the target number of first weights and the target number of second weights according to the target number and formula one below, the first weight corresponding to each target image decreasing uniformly and the second weight increasing uniformly;
Formula one:
wherein P(i) is used to indicate the first weight, Q(i) is used to indicate the second weight, N is used to indicate the target number of target images between the first image and the second image, 0 < i ≤ N, i is used to indicate the display order of the target image in the at least one target image, and i and N are positive integers.
6. The method according to claim 2, characterized in that the determining at least one first target position according to the first object position and the second object position comprises:
determining the positions of key points in a first subject area in the first image and a second subject area in the second image that have the same key point label;
for each target image, determining the first target point position of the key point label in the target image according to the positions of the key points with the same key point label, the display order of the target image in the at least one target image, the first weight of the first image and the second weight of the second image.
7. The method according to claim 6, characterized in that the key points comprise face key points, or the key points comprise face key points and skeletal joint points.
8. The method according to claim 6, characterized in that the generating the at least one target image based on the at least one first target position, the first object position and the second object position comprises:
for any one target image, determining a first display position of a first non-key point in the target image according to the first non-key point and the first target point position, the first non-key point being a point in the first subject area other than the first key point;
determining a second display position of a second non-key point in the target image according to the second non-key point and the first target point position, the second non-key point being a point in the second subject area other than the second key point;
assigning values to the first target point position, the first display position and the second display position according to the pixel value of each point in the first subject area and the second subject area, the first weight of the first image and the second weight of the second image, to generate a target head region and a target body region of the target image.
9. The method according to claim 8, characterized in that the determining a first display position of the first non-key point in the target image according to the first non-key point and the first target point position comprises either of the following implementations:
determining the weight of the first non-key point according to the distance between the first key point and the first non-key point, and determining the first display position of the first non-key point according to the weight of the first non-key point and the first target point position;
determining multiple deformation units in the first subject area with the first key point, first boundary points on the boundary of the first subject area and second boundary points on the boundary of the first image as vertices, and determining the first display position of the first non-key point included in each deformation unit according to the first target point position, the positions of the first boundary points and the positions of the second boundary points.
10. The method according to claim 8, characterized in that the assigning values to the first target point position, the first display position and the second display position according to the pixel value of each point in the first subject area and the second subject area, the first weight of the first image and the second weight of the second image, to generate the target head region and the target body region of the target image comprises:
assigning a value to the first target point position through formula four below according to the pixel value of each key point in the first subject area, the first weight of the first image and the second weight of the second image;
Formula four: M1[i][k] = S1[k]*P(i) + E1[k]*Q(i);
wherein M1[i][k] is used to indicate the pixel value of the key point labeled k in the i-th target image, i is used to indicate the display order of the target image in the at least one target image, k is used to indicate the key point label, S1[k] is used to indicate the pixel value of the first key point labeled k in the first image, E1[k] is used to indicate the pixel value of the second key point labeled k in the second image, P(i) is the first weight of the first image, and Q(i) is used to indicate the second weight;
assigning values to second target point positions according to the pixel values of the non-key points in the first subject area and the second subject area, the first weight and the second weight, the second target point positions being used to indicate the display positions of the first non-key point and the second non-key point in the target image.
11. The method according to claim 1, characterized in that the generating at least one target image according to the first object position and the second object position comprises:
sending an acquisition instruction to a server, the acquisition instruction being used to instruct the server to return the at least one target image based on the first image and the second image;
receiving the at least one target image sent by the server.
12. The method according to claim 1, characterized in that after the generating at least one target image according to the first object position and the second object position, the method further comprises:
encapsulating the first image, the at least one target image and the second image into a target file according to the display order corresponding to the first image, the at least one target image and the second image.
13. An image switching display apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain a first image and a second image serving as a switching target;
the obtaining module being further configured to obtain a first object position in the first image and a second object position in the second image, the first object position being used to indicate the display position of an object in the first image, and the second object position being used to indicate the display position of an object in the second image;
a generation module, configured to generate at least one target image according to the first object position and the second object position, the at least one target image being used to show the effect that the object in the first image moves from the first object position to the second object position and gradually changes into the object in the second image;
a display module, configured to display the at least one target image during the process of switching from the first image to the second image.
14. An electronic device, characterized in that the electronic device comprises one or more processors and one or more memories, wherein at least one instruction is stored in the one or more memories, and the at least one instruction is loaded and executed by the one or more processors to implement the operations performed by the image switching display method according to any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the image switching display method according to any one of claims 1 to 12.
CN201910224190.8A 2019-03-22 2019-03-22 Image switching display method and device, electronic equipment and storage medium Active CN109947338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910224190.8A CN109947338B (en) 2019-03-22 2019-03-22 Image switching display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910224190.8A CN109947338B (en) 2019-03-22 2019-03-22 Image switching display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109947338A true CN109947338A (en) 2019-06-28
CN109947338B CN109947338B (en) 2021-08-10

Family

ID=67011005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910224190.8A Active CN109947338B (en) 2019-03-22 2019-03-22 Image switching display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109947338B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942501A (en) * 2019-11-27 2020-03-31 深圳追一科技有限公司 Virtual image switching method and device, electronic equipment and storage medium
CN111209050A (en) * 2020-01-10 2020-05-29 北京百度网讯科技有限公司 Method and device for switching working mode of electronic equipment
CN112887699A (en) * 2021-01-11 2021-06-01 京东方科技集团股份有限公司 Image display method and device
CN113018855A (en) * 2021-03-26 2021-06-25 完美世界(北京)软件科技发展有限公司 Action switching method and device for virtual role
CN113973189A (en) * 2020-07-24 2022-01-25 荣耀终端有限公司 Display content switching method, device, terminal and storage medium
US20220415226A1 (en) * 2021-06-23 2022-12-29 Samsung Electronics Co., Ltd. Electronic device, method, and computer-readable storage medium for reducing afterimage in display area

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1681000A (en) * 2004-04-05 2005-10-12 精工爱普生株式会社 Dynamic cross fading method and apparatus
CN102449664A (en) * 2011-09-27 2012-05-09 华为技术有限公司 Gradual-change animation generating method and apparatus
US20150331558A1 (en) * 2012-11-29 2015-11-19 Tencent Technology (Shenzhen) Company Limited Method for switching pictures of picture galleries and browser
CN108769361A (en) * 2018-04-03 2018-11-06 华为技术有限公司 A kind of control method and terminal of terminal wallpaper
CN109068053A (en) * 2018-07-27 2018-12-21 乐蜜有限公司 Image special effect methods of exhibiting, device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1681000A (en) * 2004-04-05 2005-10-12 精工爱普生株式会社 Dynamic cross fading method and apparatus
CN102449664A (en) * 2011-09-27 2012-05-09 华为技术有限公司 Gradual-change animation generating method and apparatus
US20150331558A1 (en) * 2012-11-29 2015-11-19 Tencent Technology (Shenzhen) Company Limited Method for switching pictures of picture galleries and browser
CN108769361A (en) * 2018-04-03 2018-11-06 华为技术有限公司 A kind of control method and terminal of terminal wallpaper
CN109068053A (en) * 2018-07-27 2018-12-21 乐蜜有限公司 Image special effect methods of exhibiting, device and electronic equipment

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942501A (en) * 2019-11-27 2020-03-31 深圳追一科技有限公司 Virtual image switching method and device, electronic equipment and storage medium
CN111209050A (en) * 2020-01-10 2020-05-29 北京百度网讯科技有限公司 Method and device for switching working mode of electronic equipment
CN113973189A (en) * 2020-07-24 2022-01-25 荣耀终端有限公司 Display content switching method, device, terminal and storage medium
CN112887699A (en) * 2021-01-11 2021-06-01 京东方科技集团股份有限公司 Image display method and device
CN112887699B (en) * 2021-01-11 2023-04-18 京东方科技集团股份有限公司 Image display method and device
CN113018855A (en) * 2021-03-26 2021-06-25 完美世界(北京)软件科技发展有限公司 Action switching method and device for virtual role
CN113018855B (en) * 2021-03-26 2022-07-01 完美世界(北京)软件科技发展有限公司 Action switching method and device for virtual role
WO2022198971A1 (en) * 2021-03-26 2022-09-29 完美世界(北京)软件科技发展有限公司 Virtual character action switching method and apparatus, and storage medium
US20220415226A1 (en) * 2021-06-23 2022-12-29 Samsung Electronics Co., Ltd. Electronic device, method, and computer-readable storage medium for reducing afterimage in display area
US11741870B2 (en) * 2021-06-23 2023-08-29 Samsung Electronics Co., Ltd. Electronic device, method, and computer-readable storage medium for reducing afterimage in display area

Also Published As

Publication number Publication date
CN109947338B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN109947338A (en) Image switches display methods, device, electronic equipment and storage medium
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN108401124B (en) Video recording method and device
CN110062269A (en) Extra objects display methods, device and computer equipment
CN109767487A (en) Face three-dimensional rebuilding method, device, electronic equipment and storage medium
CN110233976A (en) The method and device of Video Composition
CN110139142A (en) Virtual objects display methods, device, terminal and storage medium
CN110244998A (en) Page layout background, the setting method of live page background, device and storage medium
CN109978936A (en) Parallax picture capturing method, device, storage medium and equipment
CN109712224A (en) Rendering method, device and the smart machine of virtual scene
CN108595239A (en) image processing method, device, terminal and computer readable storage medium
CN110243386A (en) Navigation information display methods, device, terminal and storage medium
CN109982102A (en) The interface display method and system and direct broadcast server of direct broadcasting room and main broadcaster end
CN110427110A (en) A kind of live broadcasting method, device and direct broadcast server
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
CN109729411A (en) Living broadcast interactive method and device
CN110064200A (en) Object construction method, device and readable storage medium storing program for executing based on virtual environment
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
CN108563377A (en) The method and apparatus that switching shows the page
CN110136236A (en) Personalized face's display methods, device, equipment and the storage medium of three-dimensional character
CN110335224A (en) Image processing method, device, computer equipment and storage medium
CN109922356A (en) Video recommendation method, device and computer readable storage medium
CN108897597A (en) The method and apparatus of guidance configuration live streaming template
CN110121094A (en) Video is in step with display methods, device, equipment and the storage medium of template
CN108900925A (en) The method and apparatus of live streaming template are set

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant