CN114972586A - Image processing method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN114972586A
CN114972586A
Authority
CN
China
Prior art keywords
animation effect
time length
current state
animation
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210527655.9A
Other languages
Chinese (zh)
Inventor
孙永建
陈旻
邢刚
冯亚楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202210527655.9A
Publication of CN114972586A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method, apparatus, and device, and a computer-readable storage medium, relating to the technical field of image processing, and aims to solve the problems that existing image transformation effects are too monotonous, resemble effect pictures generated by one-tap image processing, have no storyline, lack interest, and cannot be used for guided explanation. The method comprises the following steps: identifying a first object and a second object in a video capture interface; acquiring the current state quantity of the first object; determining a target animation effect according to the current state quantity and the timing duration; and adding, in the video capture interface, a transition animation effect that changes to the target animation effect for the second object. The embodiment of the invention generates the target animation effect for the second object based on the current state quantity of the first object, so that the state change of the first object is guided through the changing animated image of the second object, while adding storytelling and interest and enriching the modes of image transformation.

Description

Image processing method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a computer-readable storage medium.
Background
In face and body image transformation applications, the prior art first imports a source picture or video into editing software according to the user's existing intent, and the editing software outputs a target picture or video. For example, for a beautification effect, after a picture is imported into the editing software, each part of the face is beautified and retouched, finally generating a new picture that looks better than the source picture.
The image transformation effects realized by existing Artificial Intelligence (AI) and AI detection technology are too monotonous: they amount to generating an effect picture through one-tap image processing, and cannot be used for guided explanation.
Disclosure of Invention
The embodiment of the invention provides an image processing method, apparatus, and device, and a computer-readable storage medium, to solve the problem that the existing image transformation effect is too monotonous, resembles an effect picture generated by one-tap image processing, and cannot be used for guided explanation.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
identifying a first object and a second object in a video capture interface;
acquiring the current state quantity of the first object;
determining a target animation effect according to the current state quantity and the timing duration;
adding, in the video capture interface, a transitional animation effect that changes to the target animation effect for the second object.
Optionally, the determining a target animation effect according to the current state quantity and the timing duration includes:
determining a first time length required for the occurrence of the maximum state change according to the current state quantity and the timing duration;
and determining the target animation effect according to the first time length and the time length threshold.
Optionally, the determining the target animation effect according to the first time length and the time length threshold includes:
if the first time length is less than or equal to a first time length threshold value, taking a first animation effect as the target animation effect;
if the first time length is greater than or equal to a second time length threshold, taking a second animation effect as the target animation effect;
wherein the second duration threshold is greater than the first duration threshold.
Optionally, the adding, in the video shooting interface, a transition animation effect that changes to the target animation effect for the second object includes:
under the condition that the first time length is smaller than or equal to a first time length threshold value, determining a first transition animation according to the current state quantity;
adding the first transition animation for the second object in the video shooting interface;
the first animation effect comprises N characteristic images in total, the first transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the first transition animation are positively correlated with the current state change amount; the current state variation is a difference value between the current state quantity and an initial state quantity of the first object at the initial timing; N is a positive integer.
Optionally, after adding the first transition animation to the second object in the video shooting interface, the method further includes:
determining a second transition animation according to the timing duration under the condition that the timing duration is greater than a first duration threshold and smaller than a second duration threshold;
updating the first transition animation to the second transition animation in the video shooting interface;
the first animation effect comprises N characteristic images in total, the second transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the second transition animation are in negative correlation with the timing duration; N is a positive integer.
Optionally, the adding, in the video shooting interface, a transition animation effect that changes to the target animation effect for the second object includes:
determining a third transition animation according to the timing duration under the condition that the first time length is greater than or equal to a second duration threshold;
adding the third transition animation for the second object in the video shooting interface;
the second animation effect comprises M characteristic images, the third transition animation comprises at least one of the M characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the third transition animation are positively correlated with the timing duration; m is a positive integer.
Optionally, the determining, according to the current state quantity and the timing duration, a first time length required for occurrence of a maximum state change quantity includes:
determining the current state variation according to the current state quantity and the initial state quantity of the first object at the initial timing;
and determining a first time length required for the occurrence of the maximum state variation according to the current state variation and the timing duration.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the identification module is used for identifying a first object and a second object in the video shooting interface;
the acquisition module is used for acquiring the current state quantity of the first object;
the determining module is used for determining a target animation effect according to the current state quantity and the timing duration;
and the generating module is used for adding a transition animation effect which is changed to the target animation effect to the second object in the video shooting interface.
In a third aspect, an embodiment of the present invention further provides an image processing apparatus, including: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor is used for reading the program in the memory to realize the steps in the image processing method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the image processing method as described above.
In the embodiment of the invention, the first object and the second object in the video shooting interface are identified; the current state quantity of the first object is acquired; a target animation effect is then determined according to the current state quantity and the timing duration; and a transition animation effect that changes to the target animation effect is added for the second object in the video shooting interface. Thus, with the scheme of the embodiment of the invention, a target animation effect can be generated for the second object based on the current state of the first object, so that the state change of the first object can be guided through the animated image of the second object, while storytelling and interest are enhanced and the modes of image transformation are enriched.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the invention;
FIG. 2 is one of the schematic diagrams of a video capture interface provided by an embodiment of the present invention;
FIG. 3 is a second schematic diagram of a video capture interface provided by an embodiment of the present invention;
FIG. 4 is a third schematic diagram of a video capture interface provided by an embodiment of the present invention;
FIG. 5 is a fourth schematic view of a video capture interface provided by an embodiment of the present invention;
FIG. 6 is a fifth schematic view of a video capture interface provided by an embodiment of the present invention;
FIG. 7 is a sixth schematic view of a video capture interface provided by an embodiment of the present invention;
FIG. 8 is a seventh schematic view of a video capture interface provided by an embodiment of the present invention;
fig. 9 is a structural diagram of an image processing apparatus provided in an embodiment of the present invention;
fig. 10 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
in step 101, a first object and a second object in a video capture interface are identified.
In the step, an image picture acquired by the camera is displayed on a video shooting interface of the electronic device, a first object in the image picture can be a person or an object, and a second object can be a person or an object.
As an implementation, the user presents the first object and the second object in front of the camera, so that the camera recognizes and acquires the first object and the second object.
For example, the user shows a rice bowl and a child in front of the camera, and the camera recognizes the rice bowl as the first object and the child as the second object. While the rice bowl is being shown, the user can be guided through the showing behavior, for example: changing the angle, the distance from the camera, and the like.
As a further implementation manner, the image frames acquired by the camera include a child, a bowl, a spoon, and the like, and a guide identifier is displayed on the video shooting interface to prompt the user to select the first object and the second object; further, the first object and the second object are determined based on the user's input.
For example, in a case where a guide mark displayed on a video shooting interface indicates that there are a plurality of objects, a user indicates that a rice bowl is a first object and a child is a second object through voice input, text description input, or touch input.
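The patent does not mandate a particular recognition algorithm. As a minimal sketch (in Python, purely illustrative: the detector, the labels, and the tuple format are assumptions, not part of the patent), any object detector that returns labeled boxes can supply the two objects, with the labels either defaulted or taken from the user's guided selection:

```python
def identify_objects(detections, first_label="bowl", second_label="person"):
    """Pick the first and second object from generic detector output.

    detections: iterable of (label, box) pairs from any object detector.
    The default labels are illustrative; per the description above, the
    user may instead designate the objects by voice, text, or touch input.
    """
    first = next((d for d in detections if d[0] == first_label), None)
    second = next((d for d in detections if d[0] == second_label), None)
    return first, second
```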
And 102, acquiring the current state quantity of the first object.
In this step, the current state quantity may be determined based on the image features of the first object currently displayed on the video capture interface. The current state quantity may include at least one of: the size of the image features of the first object, the number of the image features of the first object, the position of the image features of the first object, the color of the image features of the first object, the shape of the image features of the first object, and the like.
Illustratively, when the first object is a rice bowl, the amount of rice in the rice bowl is taken as the current state amount of the first object.
Illustratively, when the first object is a piece of writing, the amount of text written on it is taken as the current state quantity of the first object.
Illustratively, when the first object is a toy storage box, the number of toys in the toy storage box is taken as the current state quantity of the first object.
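How the current state quantity is measured is left open by the patent. As a hedged illustration for the rice-bowl example, the fill level could be approximated by color segmentation inside the detected bowl region; everything below (the OpenCV pipeline, the HSV range, the function name) is an assumption for illustration only:

```python
import cv2
import numpy as np

def estimate_state_quantity(frame_bgr, bowl_mask,
                            rice_hsv_lo=(0, 0, 160), rice_hsv_hi=(180, 60, 255)):
    """Hypothetical sketch: fraction of the bowl area still covered by
    rice-colored pixels, used as the current state quantity in [0, 1].
    The HSV thresholds are illustrative assumptions, not patent values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    rice_mask = cv2.inRange(hsv, np.array(rice_hsv_lo), np.array(rice_hsv_hi))
    rice_in_bowl = cv2.bitwise_and(rice_mask, bowl_mask)
    bowl_area = max(int(np.count_nonzero(bowl_mask)), 1)  # avoid division by zero
    return np.count_nonzero(rice_in_bowl) / bowl_area
```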
And 103, determining a target animation effect according to the current state quantity and the timing duration.
In this step, the state change rate of the first object can be estimated from the current state quantity and the timing duration; from the state change rate, the time actually required to complete the set goal is obtained; the target animation effect can then be determined according to the correspondence between duration and animation effect.
The time actually required to complete the set goal can be understood as the time required for the user's initial image features to change into the image features the user desires.
For example, the child's eating speed is estimated from the amount of rice currently in the bowl; from this, the meal duration is estimated; and a target animation effect corresponding to that eating duration is added for the child according to the correspondence between eating duration and animation effect.
And 104, adding a transition animation effect which is changed to the target animation effect to the second object in the video shooting interface.
In this step, as an implementation manner, the image of the second object and the transitional animation effect may be synthesized to obtain an animated image of the second object, and the animated image is displayed in the video shooting interface.
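The patent leaves the synthesis step open; one minimal way to realize it is per-pixel alpha blending of an RGBA effect frame over the camera frame. The sketch below is an assumption (in practice the anchor point would come from detected keypoints of the second object, and the overlay is assumed to fit inside the frame):

```python
import numpy as np

def composite_effect(frame_bgr, overlay_rgba, top_left):
    """Alpha-blend an RGBA animation frame (e.g. wings) onto the camera frame.

    frame_bgr: HxWx3 uint8 camera image containing the second object.
    overlay_rgba: hxwx4 uint8 effect frame with a transparency channel.
    top_left: (y, x) anchor; assumed to keep the overlay inside the frame.
    """
    y, x = top_left
    h, w = overlay_rgba.shape[:2]
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    rgb = overlay_rgba[..., :3].astype(np.float32)
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    frame_bgr[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
    return frame_bgr
```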
In the above embodiment, the first object and the second object may be two persons, two objects, or one object and one person. A target animation effect is generated for one object based on the current state quantity of the other (reference) object and the logical change relationship between the two, so that the state change of the first object can be guided through the animated image of the second object. In other words, the plot of a story can be narrated through image changes with coherent causal logic, forming a story-driven display, while interest is enhanced and the modes of image transformation are enriched.
In an embodiment, before step 102, the method further includes:
acquiring a first input of a user;
determining the first animation effect and the second animation effect according to the first input.
The first input may be a voice input, a text input, or a touch selection input.
Illustratively, the first input is a voice input or a text input, and the input content may include: the good effect for the child is to become a beautiful angel with wings, a halo overhead, and a magic wand; the bad effect is to become Zhu Bajie (a pig) with big ears and a big nose.
Illustratively, in the case where an animation effect selection control is displayed, the first input is a touch input on the controls corresponding to the first animation effect and the second animation effect.
Optionally, in an embodiment, the first animation effect is a positive animation effect that the second object desires, and the second animation effect is a negative animation effect that the second object does not desire. In an embodiment, the first animation effect is a positive animation effect that acts as an incentive or forward guidance, and the second animation effect is a negative animation effect that acts as a punishment or deterrent.
Optionally, the rule-customized output effects can be replaced by a "blind box" mode, in which the person does not simply become beautiful or ugly but turns into something unpredictable: instead of a fixed pig, for example, the person may become a tree or a mouse.
In an embodiment, the step 103 includes:
determining a first time length required for the occurrence of the maximum state change according to the current state quantity and the timing duration;
and determining the target animation effect according to the first time length and the time length threshold.
In this embodiment, the first time length is an estimated duration; since the maximum state variation is fixed, the first time length reflects the change rate of the first object, and different change rates correspond to different target animation effects. In this way, the state change of the first object can be guided through changes of the target animation effect.
In a specific embodiment, the determining, according to the current state quantity and the timing duration, a first time length required for occurrence of a maximum state change amount includes:
determining the current state variation according to the current state quantity and the initial state quantity of the first object at the initial timing;
and determining a first time length required for the occurrence of the maximum state variation according to the current state variation and the timing duration.
For example, the first object is a rice bowl with a maximum capacity attribute MaxBenchmarkAProp and a minimum MinBenchmarkAProp of 0; the current state quantity is mBenchmarkAProp, and initially mBenchmarkAProp = MaxBenchmarkAProp. The first time length is the time consumed for the bowl's rice amount to change from MaxBenchmarkAProp to 0.
Then, the first time length is:

ΔEventProTime = T / (1 - mBenchmarkAProp / MaxBenchmarkAProp)

or, equivalently,

ΔEventProTime = T × MaxBenchmarkAProp / (MaxBenchmarkAProp - mBenchmarkAProp)

where ΔEventProTime is the first time length and T is the current timing duration.
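A direct translation of the formula into code, under the same constant-rate assumption (variable names follow the patent; the guard for the nothing-consumed case is an added assumption):

```python
def first_time_length(m_benchmark_a_prop, max_benchmark_a_prop, timing_duration):
    """Estimate ΔEventProTime: the total time for the state quantity to fall
    from MaxBenchmarkAProp to 0, extrapolated from consumption so far."""
    consumed_ratio = 1.0 - m_benchmark_a_prop / max_benchmark_a_prop
    if consumed_ratio <= 0:           # nothing consumed yet: no estimate possible
        return float("inf")
    return timing_duration / consumed_ratio
```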
In a specific embodiment, the determining the target animation effect according to the first time length and the time length threshold includes:
if the first time length is less than or equal to a first time length threshold value, taking the first animation effect as the target animation effect;
if the first time length is greater than or equal to a second time length threshold, taking the second animation effect as the target animation effect;
wherein the second duration threshold is greater than the first duration threshold.
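A minimal sketch of this selection rule (the string results and the behavior between the two thresholds are assumptions; the patent handles the in-between region via the transition animations described below):

```python
def select_target_effect(first_time_length, first_threshold, second_threshold):
    """Map the estimated completion time to a target animation effect."""
    assert second_threshold > first_threshold
    if first_time_length <= first_threshold:
        return "first_animation_effect"   # e.g. the angel
    if first_time_length >= second_threshold:
        return "second_animation_effect"  # e.g. Zhu Bajie
    return None  # between thresholds: keep or degrade the current transition
```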
Illustratively, the first animation effect is a beautiful angel and the second animation effect is Zhu Bajie (a pig). The particular animation effect may be selected based on user requirements, for example based on the preferences of the second object.
In this embodiment, the target animation effect can be determined based on the first length of time.
Further, in one embodiment, the step 104 includes:
under the condition that the first time length is smaller than or equal to a first time length threshold value, determining a first transition animation according to the current state quantity; adding the first transition animation for the second object in the video shooting interface;
the first animation effect comprises N characteristic images in total, the first transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the first transition animation are positively correlated with the current state change; the current state variation is a difference value between the current state quantity and an initial state quantity of the first object at the initial timing; N is a positive integer.
In this embodiment, the image features in the first transition animation are at least some of the N image features of the first animation effect, and their sizes are all less than or equal to the sizes of the corresponding image features in the first animation effect. In this way, the display can reflect the intermediate change process from the current effect to the first animation effect, or the current degree of change toward the first animation effect. Eventually, as the current state quantity changes, all image features of the first animation effect are gradually grown with a step size of n1 image features; or some image features of the first animation effect are gradually grown with a step size of n2 image features.
In one implementation, if the first time length is less than or equal to a third duration threshold (the third duration threshold being less than the first duration threshold), all image features of the first animation effect are gradually grown with a step size of n1 image features; if the first time length is greater than the third duration threshold and less than or equal to the first duration threshold, some image features of the first animation effect are gradually grown with a step size of n2 image features.
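As a sketch of the growth rule (a hedged reading: the step sizes n1/n2 are modeled as how many of the N features are being grown, and display size scales linearly with the consumed proportion; both modeling choices are assumptions):

```python
def first_transition_state(consumed_ratio, feature_max_sizes, step):
    """Choose which feature images to show and at what display size.

    consumed_ratio: (1 - x / MaxBenchmarkAProp), in [0, 1].
    feature_max_sizes: e.g. {"wings": MaxBk11, "halo": MaxBk12, "wand": MaxBk13}.
    step: n1 (grow all features) or n2 (grow only a subset).
    """
    names = list(feature_max_sizes)[:step]   # at least one of the N features
    return {name: feature_max_sizes[name] * consumed_ratio for name in names}
```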
In an embodiment, after adding the first transition animation for the second object in the video capture interface, the method further comprises:
determining a second transition animation according to the timing duration under the condition that the timing duration is greater than a first duration threshold and smaller than a second duration threshold; updating the first transition animation to the second transition animation in the video shooting interface;
the first animation effect comprises N characteristic images in total, the second transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the second transition animation are in negative correlation with the timing duration; N is a positive integer.
In this embodiment, the second transition animation reflects the intermediate change process from the first transition animation to no animation effect. When the timing duration exceeds the first duration threshold but does not exceed the second duration threshold, the number of grown image features is controlled to decrease gradually and/or their sizes to shrink gradually. In this way, an animation effect in which the grown image features gradually degrade is achieved.
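One hedged reading of "in negative correlation with the timing duration" is a linear fade of the grown features between the two thresholds; the linearity is an assumption:

```python
def second_transition_scale(timing_duration, first_threshold, second_threshold):
    """Scale factor applied to all grown feature images: 1.0 at the first
    duration threshold, degrading to 0.0 at the second duration threshold."""
    if timing_duration <= first_threshold:
        return 1.0
    if timing_duration >= second_threshold:
        return 0.0
    span = second_threshold - first_threshold
    return 1.0 - (timing_duration - first_threshold) / span
```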
Further, in one embodiment, the step 104 includes:
determining a third transition animation according to the timing duration under the condition that the first time length is greater than or equal to a second duration threshold;
adding the third transition animation for the second object in the video shooting interface;
the second animation effect comprises M characteristic images, the third transition animation comprises at least one of the M characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the third transition animation are positively correlated with the timing duration; m is a positive integer.
In this embodiment, the image features in the third transition animation are at least some of the M image features of the second animation effect, and their sizes are all less than or equal to the sizes of the corresponding image features in the second animation effect. In this way, the display can reflect the intermediate change process from the current effect to the second animation effect, or the current degree of change toward the second animation effect. Eventually, as the timing duration increases, all image features of the second animation effect are gradually grown with a step size of m1 image features; or some image features of the second animation effect are gradually grown with a step size of m2 image features.
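The third transition animation mirrors the first one but grows with the timing duration instead of with the consumed amount. A sketch under the same modeling assumptions as above (linear growth between the second and fifth duration thresholds, consistent with the worked example below):

```python
def third_transition_state(timing_duration, second_threshold, fifth_threshold,
                           feature_max_sizes, step):
    """Grow the second (negative) effect's features as time drags on.

    feature_max_sizes: e.g. {"nose": MaxBk21, "ears": MaxBk22}.
    step: m1 or m2 -- how many of the M features are being grown.
    """
    progress = (timing_duration - second_threshold) / (fifth_threshold - second_threshold)
    progress = min(max(progress, 0.0), 1.0)
    names = list(feature_max_sizes)[:step]   # e.g. nose first, then ears
    return {name: feature_max_sizes[name] * progress for name in names}
```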
The above step 104 will be described with reference to fig. 2 to 8, taking the first animation effect being an angel and the second animation effect being Zhu Bajie (a pig) as an example.
The angel comprises the following three feature images: wings k11, halo k12, and magic wand k13. Zhu Bajie comprises the following two feature images: big ears k21 and big nose k22. Each of the wings, halo, magic wand, pig ears and pig nose has a maximum attribute value MaxBkmn; the current actual effect value is mBenchmarkBProp, with an initial value of 0. It can be understood that the maximum effect value of the angel is: MaxBenchmarkBProp = MaxBk11 + MaxBk12 + MaxBk13; the maximum effect value of Zhu Bajie is: MaxBenchmarkBProp = MaxBk21 + MaxBk22.
As shown in fig. 2 to 4, the positive angel animation effect changes as follows: each bite of rice the user eats grows the effect a little, and the effect stops growing when the user stops eating; that is, the number of feature images and/or the display size of the feature images included in the angel animation effect is positively correlated with the amount of rice eaten so far.
Specifically, the current remaining rice amount x in the first object and the current elapsed timing are monitored in real time, and (1 - x/MaxBenchmarkAProp) is the proportion of rice consumed. When the first time length is less than or equal to the first duration threshold, the number of the angel's image features and/or their display sizes are gradually displayed as the consumed amount increases.
In one embodiment, with the initial state of the first object as shown in fig. 2: when the first time length is less than the third duration threshold (the third duration threshold being less than the first duration threshold), all image features of the first animation effect are gradually displayed with a step size of n1 image features (i.e. the first transition animation; fig. 3 shows the grown wings and halo of the angel, and fig. 4 the grown wings, halo and magic wand); when the first time length exceeds the third duration threshold but is within the first duration threshold, some image features of the first animation effect are gradually displayed with a step size of n2 image features (i.e. the first transition animation; fig. 5 shows only the gradually growing wings). When the timing duration exceeds the first duration threshold but does not exceed the second duration threshold, all grown image features gradually degrade as the timing duration increases (the second transition animation).
It should be noted that when the first animation effect has only one image feature, n1 and n2 are 1; when there are a plurality of image features, n1 and n2 may be greater than 1.
Illustratively, consider the time consumed for the bowl's rice amount to change from MaxBenchmarkAProp to 0 (the first time length, i.e., the time for the user to actually finish the meal). When ΔEventProTime <= ΔT1 × D1[0], the angel's wings appear, the halo is worn and the magic wand is held (see fig. 4), and the attributes of the wings, halo and wand are:

mBenchmarkBProp = (MaxBk11 + MaxBk12 + MaxBk13) × (1 - x/MaxBenchmarkAProp),

which approaches the maximum as the remaining amount x approaches 0. Here, MaxBk11, MaxBk12 and MaxBk13 are the maximum attribute values (i.e. maximum display sizes) of the wings, halo and wand respectively; ΔT1 × D1[0] is the third duration threshold; ΔT1 is the set total duration; D1[0] is the first weight value; ΔEventProTime is the first time length.
Illustratively, when the time consumed for the bowl's rice amount to change from MaxBenchmarkAProp to 0 (the first time length, i.e., the time for the user to actually finish the meal) satisfies ΔEventProTime <= ΔT1 × D1[1] while exceeding the third duration threshold, the magic wand and halo disappear and only the angel wings remain (as shown in fig. 5), still changing gradually according to the original attribute's change rule. The wing attribute is:

mBenchmarkBProp = MaxBk11 × (1 - x/MaxBenchmarkAProp),

where MaxBk11 is the maximum attribute value (i.e. maximum display size) of the wings; ΔT1 × D1[1] is the first duration threshold; ΔT1 is the set total duration; D1[1] is the second weight value; ΔEventProTime is the first time length.
Illustratively, when the time consumed for the bowl's rice amount to change from MaxBenchmarkAProp to 0 (the first time length, i.e., the time for the user to actually finish the meal) satisfies ΔEventProTime <= ΔT1/2 while exceeding the first duration threshold, the wings disappear, i.e. mBenchmarkBProp = 0; where ΔT1/2 is the second duration threshold, ΔT1 is the set total duration, and ΔEventProTime is the first time length.
As shown in fig. 7 and 8, the negative Zhu Bajie animation effect changes such that the longer the time taken, whether or not the user eats, the worse the effect; that is, the number of feature images and/or the display size of the feature images included in the Zhu Bajie animation effect is positively correlated with the timing duration.
In a specific example, starting from the initial state of the first object shown in fig. 2: when the first time length is greater than the second duration threshold but not greater than the fourth duration threshold (the fourth duration threshold being greater than the second duration threshold), some image features of the second animation effect are gradually displayed with a step size of m1 image features as time increases (as in fig. 7, the pig nose starts to grow whether or not the child eats, but the pig ears do not); when the first time length exceeds the fourth duration threshold, all image features of the second animation effect are gradually displayed with a step size of m2 image features (as in fig. 8, both the ears and the nose have grown). When the fifth duration threshold is exceeded, the complete pig nose and big ears remain displayed from then on.
It should be noted that when the second animation effect has only one image feature, m1 and m2 are 1; when the second animation effect has a plurality of image features, m1 and m2 may be greater than 1.
Illustratively, when the time consumed for the bowl's rice amount to change from MaxBenchmarkAProp to 0 (the first time length, i.e., the time for the user to actually finish the meal) satisfies ΔEventProTime > ΔT1/2 and ΔEventProTime <= ΔT1 × D2[0], the pig nose appears, and the attribute of the second object at this time is:

mBenchmarkBProp = (MaxBk21 + MaxBk22 × 0) × ((ΔEventProTime - ΔT1/2) / (ΔT1 × D2[0] - ΔT1/2)).
Illustratively, when the time consumed for the bowl's rice amount mBenchmarkAProp to change from MaxBenchmarkAProp to 0 (the first time length, i.e., the time for the user to actually finish the meal) satisfies ΔEventProTime > ΔT1 × D2[0], the pig nose and pig ears both appear, and the attribute of Zhu Bajie is:

mBenchmarkBProp = MaxBk22 × ((ΔEventProTime - ΔT1/2) / (ΔT1 × D2[0] - ΔT1/2)) + MaxBk21.

When ΔEventProTime >= ΔT1 × D2[1], the complete nose and ears of Zhu Bajie appear directly.
Here, ΔT1 × D2[0] is the fourth duration threshold and ΔT1 × D2[1] is the fifth duration threshold; MaxBk21 is the maximum attribute value (maximum display size) of the nose and MaxBk22 is that of the ears; ΔT1 is the set total duration and ΔEventProTime is the first time length; ΔT1/2 is the second duration threshold; D2[0] is the third weight value and D2[1] is the fourth weight value.
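Putting the worked example's piecewise expressions together (a sketch that follows the patent's printed formulas literally, including reusing the nose-phase interpolation factor in the ear phase; the clamping branch corresponds to "the complete nose and ears appear directly"):

```python
def zhu_bajie_effect_value(dt_event, dt_total, d2, max_nose, max_ears):
    """Piecewise mBenchmarkBProp for the negative effect, per the example.

    dt_event: ΔEventProTime; dt_total: ΔT1 (the set total duration);
    d2: (D2[0], D2[1]) weight values. Thresholds per the description:
    second = ΔT1/2, fourth = ΔT1*D2[0], fifth = ΔT1*D2[1].
    """
    second, fourth, fifth = dt_total / 2, dt_total * d2[0], dt_total * d2[1]
    if dt_event <= second:
        return 0.0                         # no negative effect yet
    if dt_event >= fifth:
        return max_nose + max_ears         # fully Zhu Bajie
    t = (dt_event - second) / (fourth - second)
    if dt_event <= fourth:
        return max_nose * t                # nose grows first
    return max_ears * t + max_nose         # ears grow, nose already complete
```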
It should be noted that the first to fourth weight values may be default values. Alternatively, they may be obtained from voice input. For example, the user sets the total duration to 60 minutes by voice: finishing the meal within the first 10 minutes yields a complete angel; finishing within 20 minutes leaves only the angel wings; eating for more than 40 minutes grows a pig nose; eating for more than 50 minutes grows both the pig nose and the pig ears. Then D1[0] = 0.17 (i.e. 10/60), D1[1] = 0.33 (i.e. 20/60), D2[0] = 0.67 (i.e. 40/60), and D2[1] = 0.83 (i.e. 50/60).
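The weight values in this example are simply the spoken milestone times normalized by the total duration; a trivial helper makes that explicit (the speech-recognition step itself is out of scope and assumed to already yield minutes):

```python
def weights_from_minutes(total_minutes, milestones_minutes):
    """weights_from_minutes(60, [10, 20, 40, 50]) -> [0.17, 0.33, 0.67, 0.83]"""
    return [round(m / total_minutes, 2) for m in milestones_minutes]
```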
In an embodiment, before determining the target animation effect according to the first time length and the time length threshold, the method further includes:
acquiring a second input by the user;
and determining the first time length threshold value and the second time length threshold value according to the second input. The second input may be a voice input, a text input, or a touch selection input.
Illustratively, taking the first object as a rice bowl as an example, the input content of the second input may include: the total meal duration, a first animation effect transformation threshold (the first duration threshold), and a second animation effect transformation threshold (the second duration threshold). For example, the total duration is 60 minutes; finishing the meal within 30 minutes turns the child into an angel, and eating for more than 50 minutes turns the child into Zhu Bajie.
For example, the second input may be an input operation on the touch input interface to set the first duration threshold and the second duration threshold.
In the above example, the image processing method provided by the present application is described using a child's mealtime scene. In the example, the child's eating is detected through the dynamic change of a playful AI image, and the child's portrait changes automatically according to how fast the child eats. The process of change is vivid and interesting and brings the child joy, and it also guides and solves the problem of a child refusing to eat.
The following describes the method in connection with an application scenario:
step a, installing an application program with an interesting AI detection image change dynamic mechanism for a mobile phone or a computer, and inputting bowls which are commonly used by children and have different rice volumes, including empty bowls, into the mobile phone for machine learning. Similarly, a picture set of children sitting in front of the reference dining table is input into the mobile phone for deep learning. Then, the system can judge whether the child sits in front of the dining table or not by collecting pictures, and whether the child is like to eat or not.
Step b: fix the mobile phone or computer on the dining table so that the composition in the field of view fully covers the child's bowl and the child's range of activity. After the phone and the application are set up and fixed, start the system; the child and the state of the bowl can now be observed through the video preview. The AI detection system then judges automatically from the captured and analyzed pictures: once it determines that the bowl is full of rice and the child is sitting in front of it, it automatically captures a segment of footage for deep learning.
Step c: when the AI detects that the child lifts the spoon or puts food into his or her mouth, the system automatically starts timing and initializes.
Step d: after initialization and automatic start-up, the AI detection system keeps monitoring the picture. When it detects that the rice in the bowl is gradually decreasing while the bowl is not moved or flipped significantly, the child in the picture starts to grow wings, and a halo and a magic wand gradually appear. If the child finishes the rice within 10 minutes, the child's image on the phone or computer gradually becomes a lovely angel with white wings, a halo and a magic wand. If the child finishes within 20 minutes, the image keeps the white wings, but beyond 10 minutes the halo and wand have disappeared. If the child finishes within 30 minutes, the picture is judged normal, and beyond 20 minutes the wings have disappeared. If the child eats for more than 30 minutes, a pig nose gradually appears on the child's face in the picture. If the eating time exceeds 40 minutes, the child in the phone gradually grows Zhu Bajie's pig ears and gradually becomes Zhu Bajie.
In the above-described scheme, the first object and the second object serve as source and target respectively: AI detection of the change in one object's state quantity is used as the event source that drives the image change of the other object; combined with certain rules, the first object thus guides the effect produced on the second object. The change is dynamic and continuous, more vivid and interesting, and gives a better experience. Through vivid positive and negative contrast effects, the scheme can also serve as an efficient means of spreading and popularizing knowledge, and of solving practical problems in daily life.
For example, through a vivid and interesting presentation, a user (e.g. a child) becomes greatly interested in actively participating in the activity, and the playful interaction immerses the user in it, so that the user recognizes the association between the two objects through the dynamic change effect. It can even solve problems that technology and people otherwise cannot: a child who does not like to eat may be unmoved by parental coaxing or by high-tech products, but the interest generated by the changes in this example gives the child great enthusiasm, makes eating feel like a happy event, and turns the child into an active eater. Moreover, the generated video is more watchable and vivid, giving people better enjoyment and experience.
The embodiment of the invention also provides an image processing apparatus. Referring to fig. 9, fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present invention. Since the principle by which the image processing apparatus solves the problem is similar to that of the image processing method in the embodiment of the present invention, the implementation of the image processing apparatus can refer to the implementation of the method, and repeated descriptions are omitted.
As shown in fig. 9, the image processing apparatus 900 includes:
the identification module 901 is used for identifying a first object and a second object in a video shooting interface;
a first obtaining module 902, configured to obtain a current state quantity of the first object;
a first determining module 903, configured to determine a target animation effect according to the current state quantity and the timing duration;
a generating module 904, configured to add, in the video shooting interface, a transition animation effect that changes to the target animation effect for the second object.
Optionally, the apparatus 900 further comprises:
the second acquisition module is used for acquiring the first input of the user;
a second determination module to determine the first animation effect and the second animation effect according to the first input.
Optionally, the first determining module 903 includes:
the first determining submodule is used for determining a first time length required for the occurrence of the maximum state variation according to the current state quantity and the timing duration;
and the second determining submodule is used for determining the target animation effect according to the first time length and the time length threshold.
Optionally, the second determining sub-module includes:
a first determining unit, configured to take the first animation effect as the target animation effect if the first time length is less than or equal to a first time length threshold;
a second determining unit, configured to take the second animation effect as the target animation effect if the first time length is greater than or equal to a second time length threshold;
wherein the second duration threshold is greater than the first duration threshold.
Optionally, the generating module 904 comprises:
the first generation submodule is used for determining a first transition animation according to the current state quantity under the condition that the first time length is smaller than or equal to a first time length threshold value;
the second generation sub-module is used for adding the first transition animation to the second object in the video shooting interface;
the first animation effect comprises N characteristic images in total, the first transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the first transition animation are positively correlated with the current state change; the current state variation is a difference value between the current state quantity and an initial state quantity of the first object at the initial timing; N is a positive integer.
Optionally, the generating module 904 further comprises:
the third generation submodule is used for determining a second transition animation according to the timing duration under the condition that the timing duration is greater than the first duration threshold and smaller than the second duration threshold;
the fourth generation submodule is used for updating the first transition animation into the second transition animation in the video shooting interface;
the first animation effect comprises N characteristic images in total, the second transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the second transition animation are in negative correlation with the timing duration; N is a positive integer.
Optionally, the generating module 904 comprises:
the fifth generation submodule is used for determining a third transition animation according to the timing duration under the condition that the first time length is greater than or equal to a second duration threshold;
a sixth generation submodule, configured to add the third transition animation to the second object in the video shooting interface;
the second animation effect comprises M characteristic images, the third transition animation comprises at least one of the M characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the third transition animation are positively correlated with the timing duration; m is a positive integer.
Optionally, the first determining sub-module includes:
a third determining unit, configured to determine a current state change amount according to the current state amount and an initial state amount of the first object at the time of initial timing;
and the fourth determining unit is used for determining the first time length required for the occurrence of the maximum state variation according to the current state variation and the timing duration.
Optionally, the apparatus 900 further comprises:
the third acquisition module is used for acquiring a second input by the user;
a third determining module, configured to determine the first duration threshold and the second duration threshold according to the second input.
The apparatus provided in the embodiment of the present invention may implement the method embodiments, and the implementation principle and technical effects are similar, which are not described herein again.
As shown in fig. 10, the image processing apparatus of the embodiment of the present invention includes: a transceiver 1010, a memory 1020, a processor 1000 and a computer program stored on the memory 1020 and executable on the processor 1000; the processor 1000 is configured to read a program in the memory 1020, and execute the following processes:
identifying a first object and a second object in a video capture interface;
acquiring the current state quantity of the first object;
determining a target animation effect according to the current state quantity and the timing duration;
in the video shooting interface, adding a transitional animation effect changing to the target animation effect for the second object.
A transceiver 1010 for receiving and transmitting data under the control of the processor 1000.
Where in fig. 10, the bus architecture may include any number of interconnected buses and bridges, with various circuits being linked together, particularly one or more processors represented by processor 1000 and memory represented by memory 1020. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art and therefore will not be described further herein. The bus interface provides an interface. The transceiver 1010 may be a plurality of elements, including a transmitter and a receiver, providing a unit for communicating with various other apparatuses over a transmission medium. The processor 1000 is responsible for managing the bus architecture and general processing, and the memory 1020 may store data used by the processor 1000 in performing operations.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
acquiring a first input of a user;
determining the first animation effect and the second animation effect according to the first input.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
determining a first time length required for the occurrence of the maximum state change according to the current state quantity and the timing duration;
and determining the target animation effect according to the first time length and the time length threshold.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
if the first time length is less than or equal to a first time length threshold value, taking the first animation effect as the target animation effect;
if the first time length is greater than or equal to a second time length threshold, taking the second animation effect as the target animation effect;
wherein the second duration threshold is greater than the first duration threshold.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
under the condition that the first time length is smaller than or equal to a first time length threshold value, determining a first transition animation according to the current state quantity;
adding the first transition animation for the second object in the video shooting interface;
the first animation effect comprises N characteristic images in total, the first transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the first transition animation are positively correlated with the current state change amount; the current state variation is a difference value between the current state quantity and an initial state quantity of the first object at the initial timing; N is a positive integer.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
determining a second transition animation according to the timing duration under the condition that the timing duration is greater than a first duration threshold and smaller than a second duration threshold;
updating the first transition animation to the second transition animation in the video shooting interface;
the first animation effect comprises N characteristic images in total, the second transition animation comprises at least one of the N characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the second transition animation are in negative correlation with the timing duration; N is a positive integer.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
determining a third transition animation according to the timing duration under the condition that the first time length is greater than or equal to a second duration threshold;
adding the third transition animation for the second object in the video shooting interface;
the second animation effect comprises M characteristic images, the third transition animation comprises at least one of the M characteristic images, and the number of the characteristic images and/or the display size of the characteristic images in the third transition animation are positively correlated with the timing duration; m is a positive integer.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
determining the current state variation according to the current state quantity and the initial state quantity of the first object at the initial timing;
and determining a first time length required for the occurrence of the maximum state variation according to the current state variation and the timing duration.
Optionally, the processor 1000 is further configured to read the computer program, and execute the following steps:
acquiring a second input by the user;
and determining the first time length threshold value and the second time length threshold value according to the second input.
The device provided by the embodiment of the present invention may implement the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Furthermore, a computer-readable storage medium of an embodiment of the present invention stores a computer program that is executable by a processor to implement the steps of the image processing method as described above.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods according to various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method, comprising:
identifying a first object and a second object in a video capture interface;
acquiring the current state quantity of the first object;
determining a target animation effect according to the current state quantity and the timing duration;
adding, in the video capture interface, a transitional animation effect that changes to the target animation effect for the second object.
2. The image processing method according to claim 1, wherein the determining a target animation effect according to the current state quantity and the timing duration comprises:
determining a first time length required for the maximum state variation to occur, according to the current state quantity and the timing duration;
and determining the target animation effect according to the first time length and a duration threshold.
3. The image processing method according to claim 2, wherein the determining the target animation effect according to the first time length and the duration threshold comprises:
if the first time length is less than or equal to a first duration threshold, taking a first animation effect as the target animation effect;
if the first time length is greater than or equal to a second duration threshold, taking a second animation effect as the target animation effect;
wherein the second duration threshold is greater than the first duration threshold.
4. The image processing method according to claim 3, wherein the adding, in the video capture interface, a transitional animation effect that changes to the target animation effect for the second object comprises:
in the case that the first time length is less than or equal to the first duration threshold, determining a first transition animation according to the current state quantity;
adding the first transition animation for the second object in the video capture interface;
wherein the first animation effect comprises N characteristic images in total, the first transition animation comprises at least one of the N characteristic images, and the number and/or the display size of the characteristic images in the first transition animation are positively correlated with the current state variation; the current state variation is the difference between the current state quantity and the initial state quantity of the first object at the start of timing; and N is a positive integer.
5. The image processing method according to claim 4, wherein after the adding the first transition animation for the second object in the video capture interface, the method further comprises:
in the case that the timing duration is greater than the first duration threshold and less than the second duration threshold, determining a second transition animation according to the timing duration;
updating the first transition animation to the second transition animation in the video capture interface;
wherein the first animation effect comprises N characteristic images in total, the second transition animation comprises at least one of the N characteristic images, and the number and/or the display size of the characteristic images in the second transition animation are negatively correlated with the timing duration; and N is a positive integer.
6. The image processing method according to claim 3, wherein the adding, in the video capture interface, a transitional animation effect that changes to the target animation effect for the second object comprises:
in the case that the first time length is greater than or equal to the second duration threshold, determining a third transition animation according to the timing duration;
adding the third transition animation for the second object in the video capture interface;
wherein the second animation effect comprises M characteristic images in total, the third transition animation comprises at least one of the M characteristic images, and the number and/or the display size of the characteristic images in the third transition animation are positively correlated with the timing duration; and M is a positive integer.
7. The image processing method according to claim 2, wherein the determining a first time length required for the maximum state variation to occur, according to the current state quantity and the timing duration, comprises:
determining the current state variation according to the current state quantity and the initial state quantity of the first object at the start of timing;
and determining the first time length required for the maximum state variation to occur, according to the current state variation and the timing duration.
8. An image processing apparatus characterized by comprising:
the identification module is used for identifying a first object and a second object in the video capture interface;
the acquisition module is used for acquiring the current state quantity of the first object;
the determining module is used for determining a target animation effect according to the current state quantity and the timing duration;
and the generating module is used for adding, in the video capture interface, a transition animation effect that changes to the target animation effect for the second object.
9. An image processing apparatus, comprising: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; wherein the processor reads the program in the memory to implement the steps in the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps in the image processing method according to any one of claims 1 to 7.
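For illustration, the sketch below strings the claimed steps into a single per-frame routine, reusing the helpers sketched in the description above; the interface object and its methods (identify_objects, read_state, add_transition) are hypothetical names, not APIs named by the disclosure.

    def process_frame(interface, timing_duration, max_variation, second_input):
        first_obj, second_obj = interface.identify_objects()   # recognize both objects
        current_state = interface.read_state(first_obj)        # current state quantity
        first_time = estimate_first_time_length(               # projected time to max change
            current_state, first_obj.initial_state,
            timing_duration, max_variation)
        target = configure_and_select(second_input, first_time)
        if target is not None:
            interface.add_transition(second_obj, target)       # overlay the transition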
CN202210527655.9A 2022-05-16 2022-05-16 Image processing method, device, equipment and computer readable storage medium Pending CN114972586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210527655.9A CN114972586A (en) 2022-05-16 2022-05-16 Image processing method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114972586A 2022-08-30

Family

ID=82982777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210527655.9A Pending CN114972586A (en) 2022-05-16 2022-05-16 Image processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114972586A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112761A (en) * 2023-04-12 2023-05-12 海马云(天津)信息技术有限公司 Method and device for generating virtual image video, electronic equipment and storage medium
CN116112761B (en) * 2023-04-12 2023-06-27 海马云(天津)信息技术有限公司 Method and device for generating virtual image video, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination