CN114063860A - Image display method and device

Info

Publication number
CN114063860A
CN114063860A (application CN202111327624.0A)
Authority
CN
China
Prior art keywords
image, role, exterior, contour point, character
Prior art date
Legal status
Pending
Application number
CN202111327624.0A
Other languages
Chinese (zh)
Inventor
胡静婕
Current Assignee
Xi'an Weiwo Software Technology Co ltd
Original Assignee
Xi'an Weiwo Software Technology Co ltd
Application filed by Xi'an Weiwo Software Technology Co ltd
Priority to CN202111327624.0A
Publication of CN114063860A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses an image display method and an image display device, belonging to the technical field of image processing. The method comprises the following steps: receiving a first input, wherein the first input is used for determining a first image and a second image; and displaying a target image based on the first image and the second image in response to the first input. The first image comprises a first character, the second image comprises a second character, and the target image comprises a third character; the third character is determined based on reference elements, which include a first exterior element of the first character and a second action element of the second character.

Description

Image display method and device
Technical Field
The present application belongs to the field of image processing technology, and in particular, relates to an image display method and apparatus.
Background
In the process of making a video image, special effects can be produced with a green screen: for example, a subject carrying point location information is shot to collect material, and the special effects are then composited.
In the prior art, video image production requires professional shooting and material collection, and the conversions applied to the collected material are irreversible. The whole production process consumes substantial resources, and the design cannot be changed at will once the material has been collected, so production can be carried out only by professional users. Video image production is therefore very difficult for ordinary users to operate.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image display method and apparatus, which can solve the problem that video image production is difficult for an ordinary user to operate.
In a first aspect, an embodiment of the present application provides an image display method, including:
receiving a first input for determining a first image and a second image;
displaying a target image based on the first image and the second image in response to the first input;
the first image comprises a first character, the second image comprises a second character, and the target image comprises a third character;
the third character is determined based on reference elements, the reference elements comprising a first exterior element of the first character and a second action element of the second character.
In a second aspect, an embodiment of the present application provides an apparatus for displaying an image, the apparatus including:
an input module to receive a first input, the first input to determine a first image and a second image;
a display module to display a target image based on the first image and the second image in response to the first input;
the first image comprises a first character, the second image comprises a second character, and the target image comprises a third character;
the third character is determined based on reference elements, the reference elements comprising a first exterior element of the first character and a second action element of the second character.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, a first image and a second image selected by a user are determined by receiving a first input. The first image comprises a first exterior element that the user expects to use, and the first exterior element may comprise a special effect element; the second image comprises a second action element that the user expects to use. Based on the first image and the second image, a target image expected by the user can then be conveniently generated and displayed, so that an ordinary user can easily produce a video image.
Drawings
Fig. 1 is a schematic flowchart of an image display method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an image display interface provided in an embodiment of the present application;
FIG. 3 is a second schematic diagram of an image display interface provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of the location of a contour point of a character provided by an embodiment of the present application;
FIG. 5 is a third schematic diagram of an image display interface provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image display device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 8 is a hardware configuration diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that can be derived by a person of ordinary skill in the art from the embodiments given herein fall within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally means that the preceding and succeeding related objects are in an "or" relationship.
The image display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of an image display method provided in an embodiment of the present application, and as shown in fig. 1, the method includes:
a step 101 of receiving a first input, wherein the first input is used for determining a first image and a second image;
a step 102 of displaying a target image based on the first image and the second image in response to the first input;
the first image comprises a first character, the second image comprises a second character, and the target image comprises a third character;
the third character is determined based on reference elements, the reference elements comprising a first exterior element of the first character and a second action element of the second character.
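The flow of steps 101 and 102 can be sketched as follows. This is a minimal illustration rather than the claimed implementation, and the data model (a Character holding lists of exterior and action elements) is an assumption made for the example.

```python
from dataclasses import dataclass

# Hypothetical data model for the method of fig. 1; the field names are
# illustrative and not taken from the patent's claims.
@dataclass
class Character:
    exterior_elements: list   # e.g. ["transparent skin", "glowing hat"]
    action_elements: list     # e.g. ["jumping action C"]

def display_target_image(first: Character, second: Character) -> Character:
    """Step 102: build the third character from the reference elements,
    i.e. the first character's exterior and the second character's action."""
    return Character(
        exterior_elements=list(first.exterior_elements),
        action_elements=list(second.action_elements),
    )

# Step 101: the first input determines the first and second images,
# each of which contains one character.
first = Character(exterior_elements=["transparent skin", "glowing hat"],
                  action_elements=["standing"])
second = Character(exterior_elements=["plain clothes"],
                   action_elements=["jumping action C"])

third = display_target_image(first, second)
# third keeps the first character's exterior and the second character's action
```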
Optionally, the image display method provided by the embodiment of the present application may be applied to a scene in which a user uses an electronic device to make a video image, where the electronic device may be a smart phone, a personal computer, a tablet computer, or a wearable smart device.
Optionally, the image display method provided in the embodiment of the present application may be applied to a scene in which a user uses image display software in an electronic device to display an image, where the image display software may be a mobile version application program, a desktop version application program, a web version application program, or the like.
Optionally, the first image may be pre-stored in the electronic device, or the first image may be acquired from a server.
Optionally, the first image may include some or all of the image frames in a first video of the first character, and the first video may include the first exterior element of the first character.
Alternatively, the first character may be a person, a living being, an object, or a virtual character, and the first character is not limited in this embodiment of the application.
Optionally, the first exterior element may be used to characterize at least one of: the exterior of the first character and the environment in which the first character is located.
Optionally, the first exterior element may be a body surface feature element of the first character, a wearing element of the first character, or an environment element in which the first character is located.
For example, the body surface feature elements of the first character may include skin elements, hair elements, eyebrow elements, or the like.
For example, the wear elements of the first character may include clothing elements, hat elements, or eyeglass elements, among others.
For example, the environment elements of the first character may include a starry sky element, a landscape element, a building element, or the like.
Alternatively, the first exterior element may be a special effect element regarding the first character, such as a body surface feature element with a special effect, a wearing element with a special effect, or an environment element with a special effect.
For example, the body surface feature element with special effects may include a skin element with transparent special effects.
For example, the special-effect wearing element may include a hat element with a luminous effect.
For example, an environmental element with special effects may include a building element with suspended special effects.
Optionally, the second image may be pre-stored in the electronic device, or the second image may be acquired from a server, or the second image may be generated by the electronic device.
Alternatively, in the case where the second image is generated by the electronic device, the electronic device may capture the second image against a green-screen background.
Optionally, the second image may include some or all of the image frames in a second video of the second character, in which the second action elements of the second character may be included.
Alternatively, the second character may be a person, a living being, an object, or a virtual character, and the second character is not limited herein.
Optionally, the second action element may be used to characterize an action of the second character.
Alternatively, the second action element may include an action of the second character, such as a walking action, a running action, or a jumping action of the second character, and so forth.
Therefore, since the first image may include the first exterior element that the user expects to use (which may include a special effect element), and the second image may include the action element that the user expects to use, receiving the first input determines the exterior element and the action element that the user expects to use.
Optionally, receiving the first input may comprise selecting one video from a set of alternative first videos (some or all of the image frames in the selected video may serve as the first image) and selecting one video from a set of alternative second videos (some or all of the image frames in the selected video may serve as the second image), wherein the first videos include the first exterior elements and the second videos include the second action elements.
Alternatively, receiving the first input may comprise determining, based on a keyword search by the user, a video including a first exterior element (some or all of the image frames may serve as the first image) and a video including a second action element (some or all of the image frames may serve as the second image).
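The keyword-search path described above can be sketched as follows. The video metadata (a "tags" list per video) is a hypothetical representation introduced only to illustrate how a keyword may determine the first and second images.

```python
# Hypothetical video catalog; names and tags are made up for the example.
videos = [
    {"name": "video A", "tags": ["exterior element A1"]},
    {"name": "video C", "tags": ["exterior element C1", "starry sky"]},
    {"name": "video D", "tags": ["walking action"]},
]

def search(videos, keyword):
    """Return the first video whose tags contain the user's keyword."""
    for v in videos:
        if any(keyword in tag for tag in v["tags"]):
            return v
    return None

first_video = search(videos, "starry sky")   # determines the first image
second_video = search(videos, "walking")     # determines the second image
```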
Optionally, after receiving the first input, the electronic device may generate and display the target image based on the first image and the second image.
Optionally, the electronic device may generate a third character in the target image based on the reference element.
Alternatively, the reference element may include a first exterior element of the first character and a second action element of the second character.
Therefore, since the first image includes the first exterior element that the user expects to add to the target image (the first exterior element may include a special effect element), and the second image includes the second action element that the user expects to add to the target image, a target image with special effects can be conveniently generated and displayed based on the first image and the second image.
Fig. 2 is a schematic view of an image display interface provided in an embodiment of the present application, and as shown in fig. 2, an electronic device may be a mobile phone terminal, and two columns of videos may be displayed on a display interface of the mobile phone terminal, where the first column of videos may be a set of alternative first videos, the first videos include a first exterior element, the second column of videos may be a set of alternative second videos, and the second videos include a second action element.
Alternatively, as shown in fig. 2, the first column of videos may include video a and video C, where the first character in video a may be character a, and the first exterior element of character a includes exterior element a1, exterior element a2, and exterior element A3; the first character in video C may be character C, the first exterior elements of character C including exterior element C1, exterior element C2, and exterior element C3.
Alternatively, as shown in fig. 2, the second column of videos may include video B and video D, where the second character in video B may be character B, the second action element of character B may include action element B0, and action element B0 may be the action of character B raising both hands; the second character in video D may be character D, the second action element of character D may include action element D0, and action element D0 may be an action performed by character D.
Alternatively, as shown in fig. 2, receiving the first input may be selecting one video from a first list of videos (some or all of the image frames in the video may be used as the first image) and selecting one video from a second list of videos (some or all of the image frames in the video may be used as the second image).
Alternatively, as shown in fig. 2, receiving the first input may comprise receiving the user's click on the circular check box at the upper right corner of a video in the first or second column; when it is determined that the user has checked a video, a check mark may be displayed in the circular check box at the upper right corner of that video. For example, in fig. 2, when it is determined that the user has selected video C and video D, a check mark may be displayed in the circular check boxes at the upper right corners of videos C and D, and it may then be determined that video C provides the first image and video D provides the second image.
Fig. 3 is a second schematic view of an image display interface provided in an embodiment of the present application. As shown in fig. 3, the electronic device may be a mobile phone terminal, and a video E is displayed on the display interface of the mobile phone terminal; the target image may be some or all of the image frames in video E, and character E in video E may be the third character.
Alternatively, in the case where the first input is received and some or all of the image frames in video C in fig. 2 are determined to be the first image while some or all of the image frames in video D in fig. 2 are determined to be the second image, video E may be generated and displayed based on video C and video D.
Alternatively, as shown in fig. 3, a character E may be included in the video E, the exterior elements of the character E may be an exterior element E1, an exterior element E2, and an exterior element E3, and the action element of the character E may be an action element E0, wherein the exterior element E1, the exterior element E2, and the exterior element E3 may respectively correspond to the exterior element C1, the exterior element C2, and the exterior element C3 in the video C in fig. 2, and the action element E0 may correspond to the action element D0 in the video D in fig. 2.
Alternatively, receiving the first input may determine a first image and a second image, wherein the first image may include a first exterior element of the first character, the second image may include a second action element of the second character, and further based on the first image and the second image, a target image for display may be generated, wherein a third character in the target image is determined based on the first exterior element and the second action element.
It can be understood that the electronic device may be a mobile phone terminal. The mobile phone terminal may receive a first input from the user and thereby determine a first image and a second image. The first image may include first exterior elements, such as a skin element A of the first character, a clothes element B with a special effect, and a starry sky element C; the second image may include a second action element, such as a walking action D of the second character. A target image may then be generated and displayed based on the first image and the second image, in which the third character has skin element A, special-effect clothes element B, starry sky element C, and walking action D. In this way the user can conveniently and concisely create a video with special effects on the mobile phone terminal, without adding new mobile phone hardware, thereby improving the user experience of mobile phone video creation.
In the embodiments of the present application, a first image and a second image selected by a user are determined by receiving a first input. The first image comprises a first exterior element that the user expects to use, and the first exterior element may comprise a special effect element; the second image comprises a second action element that the user expects to use. Based on the first image and the second image, a target image expected by the user can then be conveniently generated and displayed, so that an ordinary user can easily produce a video image.
Optionally, the displaying a target image based on the first image and the second image comprises:
displaying the target image based on the first exterior element and the second action element;
wherein a third exterior element of the third character is the same as the first exterior element, and a third action element of the third character is the same as the second action element.
Optionally, a third exterior element of a third character in the target image may be determined based on the first exterior element of the first character in the first image.
For example, the first exterior elements of the first character in the first image may be a wearing element a with special effects and an environment element B with special effects, and it may be determined that the third exterior elements of the third character in the target image include the wearing element a and the environment element B.
Optionally, the third exterior element may be used to characterize at least one of: the exterior of the third character and the environment in which the third character is located.
Optionally, a third action element of a third character in the target image may be determined based on a second action element of the second character in the second image.
Optionally, the third action element may be used to characterize an action of the third character.
For example, the second action element of the second character in the second image may be a jumping action C, and it may be determined that the third action element of the third character in the target image includes action C.
Therefore, since the first exterior element is an exterior element that the user expects to use and the second action element is an action element that the user expects to use, the target image expected by the user can be conveniently generated based on the first exterior element and the second action element.
Optionally, the displaying the target image based on the first exterior element and the second action element includes:
acquiring first contour point locations of the first character and second contour point locations of the second character;
adjusting the relative distances between the second contour point locations based on the relative distances between the first contour point locations;
performing point location fusion on the first contour point locations and the adjusted second contour point locations to obtain third contour point locations of the third character;
and adding the first exterior element to the third contour point locations to generate the target image.
Optionally, the electronic device may identify the contour (3D outline) point locations of the first character in the first image, thereby obtaining the first contour point locations of the first character.
Optionally, the electronic device may identify the contour point locations of the second character in the second image, thereby obtaining the second contour point locations of the second character.
Optionally, the electronic device may identify the contour point locations of each action in the second action element of the second character, and the second contour point locations may include contour point information for the actions of the second character.
Fig. 4 is a schematic diagram of the contour point locations of a character provided in an embodiment of the application. As shown in fig. 4, the character in the first image or the second image may be a person, and the electronic device may identify and thereby obtain the contour point locations of the person. Each triangle in fig. 4 represents one contour point location, and the contour point locations may include those of the person's head, trunk, and limbs.
Optionally, the electronic device may adjust the relative distances between the second contour point locations based on the relative distance X1 between the first contour point locations, so that the adjusted relative distance X2 between the second contour point locations is the same as or similar to X1.
For example, in the case where the first character and the second character are both persons, if the relative distance between two contour points P1 and P2 on the head of the first character is 10 unit distances, the relative distance between the corresponding two contour points P3 and P4 on the head of the second character may be adjusted to 10 unit distances, so that the relative distance between P1 and P2 equals the relative distance between P3 and P4.
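The adjustment in this example can be reproduced numerically. This is a sketch under the assumption that contour points are 2D coordinates; the point values are made up for the example.

```python
import math

def distance(p, q):
    """Relative distance between two contour points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rescale_pair(p3, p4, target_distance):
    """Move P4 along the P3->P4 direction so that |P3P4| == target_distance."""
    scale = target_distance / distance(p3, p4)
    return p3, (p3[0] + (p4[0] - p3[0]) * scale,
                p3[1] + (p4[1] - p3[1]) * scale)

p1, p2 = (0.0, 0.0), (10.0, 0.0)   # first character's head points, 10 units apart
p3, p4 = (2.0, 2.0), (2.0, 8.0)    # second character's head points, 6 units apart

# Adjust the second character's pair to match the first character's distance.
p3, p4 = rescale_pair(p3, p4, distance(p1, p2))
# now distance(p3, p4) == distance(p1, p2) == 10 unit distances
```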
Optionally, the electronic device may adjust the relative distances between the contour point locations of each action in the second action element based on the relative distances between the first contour point locations.
For example, the second action element may include action M1, action M2, and action M3; the relative distances between the contour point locations of action M1, action M2, and action M3 may then each be adjusted based on the relative distances between the first contour point locations.
Optionally, the electronic device may perform point location fusion on the first contour point locations and the adjusted second contour point locations, thereby obtaining the third contour point locations of the third character. In this way, the contour point location information of the third character is the same as the corresponding contour information of the first character, while the action of the third character is the same as the action of the second character.
Optionally, the electronic device may add the first exterior element to the third contour point locations and thereby generate the target image, so that the target image includes the first exterior element of the first character and the second action element of the second character.
Thus, the contour point location information of the third character matches the corresponding contour information of the first character, the action of the third character matches the action of the second character, and the generated target image includes the first exterior element of the first character and the second action element of the second character.
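The fusion and generation steps above can be sketched as follows. The per-landmark dictionary representation of contour point locations is an assumption made for illustration, not the patent's data structure.

```python
def fuse_contours(first_contour, adjusted_second_contour):
    """Point location fusion: the adjusted second contour already matches the
    first character's relative point spacing, so its points (which carry the
    second character's pose) can be taken directly; points missing from the
    second contour fall back to the first character's locations."""
    fused = {}
    for name, first_pt in first_contour.items():
        fused[name] = adjusted_second_contour.get(name, first_pt)
    return fused

def generate_target(fused_contour, first_exterior_elements):
    """Attach the first character's exterior elements to the fused contour."""
    return {"contour": fused_contour, "exterior": list(first_exterior_elements)}

# Made-up landmark names and coordinates for the example.
first_contour = {"head": (0, 0), "torso": (0, -5)}
adjusted_second = {"head": (1, 0), "torso": (1, -5)}   # pose after adjustment

target = generate_target(fuse_contours(first_contour, adjusted_second),
                         ["transparent skin", "glowing hat"])
```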
Optionally, the reference element further includes a preset exterior element;
the displaying a target image based on the first image and the second image includes:
displaying the target image based on the preset exterior element, the first exterior element and the second action element;
wherein a third exterior element of the third character is determined based on the first exterior element and the preset exterior element, and a third action element of the third character is the same as the second action element.
Optionally, the preset exterior element may be used to characterize at least one of: a preset exterior and a preset environment.
Optionally, the preset exterior elements may be preset body surface feature elements, preset wearing elements or preset environment elements.
For example, the preset body surface feature elements may include preset skin elements, preset hair elements, preset eyebrow elements, or the like.
For example, the preset wearing elements may include preset clothing elements, preset hat elements, preset glasses elements, or the like.
For example, the preset environment elements may include a preset starry sky element, a preset landscape element, a preset building element, or the like.
Alternatively, the preset exterior elements may be special effect elements, such as preset body surface feature elements with special effects, preset wearing elements with special effects, or preset environment elements with special effects.
For example, the preset body surface feature element with a special effect may include a skin element with a transparent effect.
For example, the preset wearing element with a special effect may include a hat element with a luminous effect.
For example, the preset environment element with a special effect may include a building element with a floating effect.
Alternatively, a third exterior element of a third character in the target image may be determined based on the preset exterior element and the first exterior element.
For example, the first exterior elements may be a skin element A1 with a transparent special effect and a building element B1 with a floating special effect, and the preset exterior elements may be a preset hat element A2 and a preset landscape element B2; it may then be determined that the third exterior elements of the third character in the target image include skin element A1, preset hat element A2, building element B1, and preset landscape element B2.
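The combination in this example amounts to taking the union of the first exterior elements and the preset exterior elements; a trivial sketch, with element names taken from the example above:

```python
# The third exterior elements combine the first character's exterior elements
# with the preset exterior elements.
first_exterior = ["skin element A1 (transparent effect)",
                  "building element B1 (floating effect)"]
preset_exterior = ["preset hat element A2",
                   "preset landscape element B2"]

third_exterior = first_exterior + preset_exterior
# third_exterior contains A1, A2, B1, and B2, as in the example
```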
Optionally, a third action element of a third character in the target image may be determined based on the second action element.
For example, the second action element of the second character in the second image may be a jump action C, and it may be determined that the third action element of the third character in the target image includes the jump action C.
Fig. 5 is a third schematic view of an image display interface provided in the embodiment of the present application, as shown in fig. 5, an electronic device may be a mobile phone terminal, a video F displayed on the display interface of the mobile phone terminal, a target image may be a part or all of image frames in the video F, and a character F in the video F may be a third character.
Alternatively, in the case where some or all of the image frames in the video C in fig. 2 are determined to be the first image and some or all of the image frames in the video D in fig. 2 are determined to be the second image in response to the first input, the video F may be generated and displayed based on the videos C and D.
Alternatively, as shown in fig. 5, the video F may include a character F. The exterior elements of the character F may be an exterior element F1, an exterior element F2, an exterior element F3 and a preset exterior element F4, and the action element of the character F may be an action element F0. The exterior elements F1, F2 and F3 may respectively correspond to the exterior elements C1, C2 and C3 in the video C in fig. 2, the preset exterior element F4 may be a preset environment element (e.g., a white cloud in the sky), and the action element F0 may correspond to the action element D0 in the video D in fig. 2.
Therefore, since the first exterior element and the preset exterior element are the exterior elements the user desires to use and the second action element is the action element the user desires to use, the target image desired by the user can be conveniently generated based on the first exterior element, the preset exterior element and the second action element.
Optionally, the displaying the target image based on the preset exterior element, the first exterior element, and the second action element includes:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element and the preset exterior element to the third contour point location to generate the target image.
Optionally, the electronic device may identify the contour (3D outline) point locations of the first character in the first image to obtain the first contour point location of the first character.
Optionally, the electronic device may likewise identify the contour point locations of the second character in the second image to obtain the second contour point location of the second character.
Optionally, the electronic device may identify the contour point locations of each action in the second action element of the second character, so the second contour point location may include contour point information for the actions of the second character.
Optionally, the electronic device may adjust the relative distances between points in the second contour point location based on the relative distances X1 between points in the first contour point location, such that each adjusted relative distance X2 in the second contour point location is the same as or close to the corresponding relative distance X1.
For example, in the case where the first character and the second character are human characters, if the relative distance between two contour points P1 and P2 on the head of the first character is 10 unit distances, the relative distance between the corresponding contour points P3 and P4 on the head of the second character may be adjusted to 10 unit distances, so that the relative distance between P1 and P2 equals the relative distance between P3 and P4.
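A minimal sketch of this adjustment, assuming the point pair is rescaled symmetrically about its midpoint (the coordinates and the helper function are illustrative, not from the patent):

```python
import math

def adjust_pair(p3, p4, target_dist):
    """Move p3 and p4 symmetrically about their midpoint so that their
    separation equals target_dist (a sketch of the adjustment step)."""
    mx, my = (p3[0] + p4[0]) / 2, (p3[1] + p4[1]) / 2
    s = target_dist / math.dist(p3, p4)  # uniform scale factor
    scale = lambda p: (mx + (p[0] - mx) * s, my + (p[1] - my) * s)
    return scale(p3), scale(p4)

# P1 and P2 on the first character's head are 10 unit distances apart;
# P3 and P4 on the second character's head start 14 apart and are rescaled.
p1, p2 = (0.0, 0.0), (10.0, 0.0)
p3, p4 = (3.0, 5.0), (17.0, 5.0)
q3, q4 = adjust_pair(p3, p4, math.dist(p1, p2))
print(round(math.dist(q3, q4), 6))  # → 10.0
```

Scaling about the midpoint preserves the pair's position and orientation, changing only its spacing, which matches the goal of keeping the second character's pose while adopting the first character's proportions.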
Optionally, the electronic device may adjust the relative distances between the contour points of each action in the second action element based on the relative distances between points in the first contour point location.
For example, the second action element may include an action M1, an action M2 and an action M3, and the relative distances between the contour points of each of the actions M1, M2 and M3 may be adjusted based on the relative distances between points in the first contour point location.
Optionally, the electronic device may perform point location fusion on the first contour point location and the adjusted second contour point location to obtain the third contour point location of the third character, so that the contour point information of the third character is the same as the corresponding contour information of the first character while the action of the third character is the same as the action of the second character.
Optionally, the electronic device may add the first exterior element and the preset exterior element to the third contour point location to generate the target image, so that the target image includes the preset exterior element and the first exterior element of the first character as well as the second action element of the second character.
Therefore, the contour point information of the third character is the same as the corresponding contour information of the first character, the action of the third character is the same as the action of the second character, and the generated target image includes the preset exterior element, the first exterior element of the first character and the second action element of the second character.
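The four steps recapped above (acquire contour point locations, adjust relative distances, fuse, attach exterior elements) can be sketched end to end as follows; the contour representation, part names and element labels are simplified assumptions rather than the patent's actual data structures:

```python
import math

def scale_pair(a, b, target):
    """Rescale points a and b about their midpoint to the target separation."""
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    s = target / math.dist(a, b)
    move = lambda p: (mx + (p[0] - mx) * s, my + (p[1] - my) * s)
    return move(a), move(b)

def generate_target(first_contour, second_contour, first_exterior, preset_exterior):
    # Step 1 (acquisition) is assumed done: each contour maps a part name
    # to a pair of contour points.
    # Steps 2-3: adjust the second contour's point spacing to match the
    # first's; the fused third contour then keeps the first role's spacing
    # while following the second role's pose (the adjusted point positions).
    fused = {part: scale_pair(*pts, math.dist(*first_contour[part]))
             for part, pts in second_contour.items()}
    # Step 4: attach the first and preset exterior elements to the contour.
    return fused, first_exterior | preset_exterior

first = {"head": ((0.0, 0.0), (10.0, 0.0))}    # first role's contour
second = {"head": ((3.0, 5.0), (17.0, 5.0))}   # second role's contour (a pose)
contour, exterior = generate_target(first, second, {"skin_A1"}, {"cloud_F4"})
print(sorted(exterior))  # → ['cloud_F4', 'skin_A1']
```

In this sketch the fused head points end up 10 units apart (the first role's spacing) but centered where the second role's head is, illustrating how the third role combines the first role's proportions with the second role's action.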
In this embodiment of the application, a first image and a second image selected by the user are determined by receiving a first input. The first image includes a first exterior element the user desires to use, which may include a special effect element, and the second image includes a second action element the user desires to use. Based on the first image and the second image, a target image desired by the user can then be conveniently generated and displayed, so that an ordinary user can conveniently produce a video image.
It should be noted that, in the image display method provided in the embodiment of the present application, the execution subject may be an image display apparatus, or a control module in the image display apparatus for executing the method for displaying an image. In the embodiment of the present application, a method for performing image display by an image display device is taken as an example, and an image display device provided in the embodiment of the present application is described.
Fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present application, and as shown in fig. 6, the image display device 600 includes: an input module 601 and a display module 602, wherein:
the input module 601 is configured to receive a first input, where the first input is used to determine a first image and a second image;
a display module 602, configured to display a target image based on the first image and the second image in response to the first input;
the first image comprises a first role, the second image comprises a second role, and the target image comprises a third role;
the third role is determined based on reference elements including a first exterior element of the first role and a second action element of the second role.
Alternatively, the apparatus may receive a first input from a user to determine a first image and a second image. The first image may include a first exterior element, the first exterior element may include a special effect element, and the second image may include a second action element; the target image may then be generated and displayed based on the first image and the second image.
In this embodiment of the application, a first image and a second image selected by the user are determined by receiving a first input. The first image includes a first exterior element the user desires to use, which may include a special effect element, and the second image includes a second action element the user desires to use. Based on the first image and the second image, a target image desired by the user can then be conveniently generated and displayed, so that an ordinary user can conveniently produce a video image.
Optionally, the display module is further configured to:
displaying the target image based on the first exterior element and the second action element;
wherein a third exterior element of the third character is the same as the first exterior element, and a third action element of the third character is the same as the second action element.
Optionally, the display module is further configured to:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element to the third contour point location to generate the target image.
Optionally, the reference element further includes a preset exterior element;
the display module is further configured to:
displaying the target image based on the preset exterior element, the first exterior element and the second action element;
wherein a third exterior element of the third character is determined based on the first exterior element and the preset exterior element, and a third action element of the third character is the same as the second action element.
Optionally, the display module is further configured to:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element and the preset exterior element to the third contour point location to generate the target image.
In this embodiment of the application, a first image and a second image selected by the user are determined by receiving a first input. The first image includes a first exterior element the user desires to use, which may include a special effect element, and the second image includes a second action element the user desires to use. Based on the first image and the second image, a target image desired by the user can then be conveniently generated and displayed, so that an ordinary user can conveniently produce a video image.
The image display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not particularly limited.
The image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and embodiments of the present application are not specifically limited.
The image display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
Optionally, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 7, an electronic device 700 is further provided in an embodiment of the present application and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the above-described embodiment of the image display method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a hardware configuration diagram of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not repeated here.
The input unit 804 is configured to:
receiving a first input for determining a first image and a second image;
the processor 810 is configured to:
displaying a target image based on the first image and the second image in response to the first input;
the first image comprises a first role, the second image comprises a second role, and the target image comprises a third role;
the third role is determined based on reference elements including a first exterior element of the first role and a second action element of the second role.
In this embodiment of the application, a first image and a second image selected by the user are determined by receiving a first input. The first image includes a first exterior element the user desires to use, which may include a special effect element, and the second image includes a second action element the user desires to use. Based on the first image and the second image, a target image desired by the user can then be conveniently generated and displayed, so that an ordinary user can conveniently produce a video image.
Optionally, the processor 810 is further configured to:
displaying the target image based on the first exterior element and the second action element;
wherein a third exterior element of the third character is the same as the first exterior element, and a third action element of the third character is the same as the second action element.
Optionally, the processor 810 is further configured to:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element to the third contour point location to generate the target image.
Optionally, the reference element further includes a preset exterior element, and the processor 810 is further configured to:
displaying the target image based on the preset exterior element, the first exterior element and the second action element;
wherein a third exterior element of the third character is determined based on the first exterior element and the preset exterior element, and a third action element of the third character is the same as the second action element.
Optionally, the processor 810 is further configured to:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element and the preset exterior element to the third contour point location to generate the target image.
In this embodiment of the application, a first image and a second image selected by the user are determined by receiving a first input. The first image includes a first exterior element the user desires to use, which may include a special effect element, and the second image includes a second action element the user desires to use. Based on the first image and the second image, a target image desired by the user can then be conveniently generated and displayed, so that an ordinary user can conveniently produce a video image.
It should be understood that in the embodiment of the present application, the input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the graphics processing unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071, also referred to as a touch screen, and other input devices 8072. The touch panel 8071 may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 809 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces and application programs, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above image display method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image display method, comprising:
receiving a first input for determining a first image and a second image;
displaying a target image based on the first image and the second image in response to the first input;
the first image comprises a first role, the second image comprises a second role, and the target image comprises a third role;
the third role is determined based on reference elements including a first exterior element of the first role and a second action element of the second role.
2. The image display method according to claim 1, wherein the displaying a target image based on the first image and the second image includes:
displaying the target image based on the first exterior element and the second action element;
wherein a third exterior element of the third character is the same as the first exterior element, and a third action element of the third character is the same as the second action element.
3. The image display method according to claim 2, wherein the displaying the target image based on the first exterior element and the second action element includes:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element to the third contour point location to generate the target image.
4. The image display method according to claim 1, wherein the reference element further comprises a preset exterior element;
the displaying a target image based on the first image and the second image includes:
displaying the target image based on the preset exterior element, the first exterior element and the second action element;
wherein a third exterior element of the third character is determined based on the first exterior element and the preset exterior element, and a third action element of the third character is the same as the second action element.
5. The image display method according to claim 4, wherein the displaying the target image based on the preset exterior element, the first exterior element, and the second action element includes:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element and the preset exterior element to the third contour point location to generate the target image.
6. An image display apparatus, comprising:
an input module to receive a first input, the first input to determine a first image and a second image;
a display module to display a target image based on the first image and the second image in response to the first input;
the first image comprises a first role, the second image comprises a second role, and the target image comprises a third role;
the third role is determined based on reference elements including a first exterior element of the first role and a second action element of the second role.
7. The image display device of claim 6, wherein the display module is further configured to:
displaying the target image based on the first exterior element and the second action element;
wherein a third exterior element of the third character is the same as the first exterior element, and a third action element of the third character is the same as the second action element.
8. The image display device of claim 7, wherein the display module is further configured to:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element to the third contour point location to generate the target image.
9. The image display device according to claim 6, wherein the reference element further comprises a preset exterior element;
the display module is further configured to:
displaying the target image based on the preset exterior element, the first exterior element and the second action element;
wherein a third exterior element of the third character is determined based on the first exterior element and the preset exterior element, and a third action element of the third character is the same as the second action element.
10. The image display device of claim 9, wherein the display module is further configured to:
acquiring a first contour point location of the first role and a second contour point location of the second role;
adjusting relative distances between points in the second contour point location based on relative distances between points in the first contour point location;
performing point location fusion on the first contour point location and the adjusted second contour point location to obtain a third contour point location of the third role;
and adding the first exterior element and the preset exterior element to the third contour point location to generate the target image.
CN202111327624.0A 2021-11-10 2021-11-10 Image display method and device Pending CN114063860A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111327624.0A CN114063860A (en) 2021-11-10 2021-11-10 Image display method and device


Publications (1)

Publication Number Publication Date
CN114063860A true CN114063860A (en) 2022-02-18

Family

ID=80274668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111327624.0A Pending CN114063860A (en) 2021-11-10 2021-11-10 Image display method and device

Country Status (1)

Country Link
CN (1) CN114063860A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156810A (en) * 2011-03-30 2011-08-17 北京触角科技有限公司 Augmented reality real-time virtual fitting system and method thereof
US20130239057A1 (en) * 2012-03-06 2013-09-12 Apple Inc. Unified slider control for modifying multiple image properties
CN104077094A (en) * 2013-03-25 2014-10-01 三星电子株式会社 Display device and method to display dance video
CN111756995A (en) * 2020-06-17 2020-10-09 维沃移动通信有限公司 Image processing method and device
CN113343950A (en) * 2021-08-04 2021-09-03 之江实验室 Video behavior identification method based on multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination