CN113989421A - Image generation method, apparatus, device and medium - Google Patents

Info

Publication number
CN113989421A
CN113989421A (Application No. CN202111249208.3A)
Authority
CN
China
Prior art keywords
interface information
interface
input
information
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111249208.3A
Other languages
Chinese (zh)
Inventor
赖勇高
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111249208.3A
Publication of CN113989421A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The application discloses an image generation method, apparatus, device, and medium, belonging to the technical field of image processing. The image generation method includes: displaying first interface information, where the first interface information includes elements in a first interface and layout information of the elements; receiving a first input from a user on the first interface information; in response to the first input, updating the content of the first interface information to obtain second interface information; and generating a first target image according to the second interface information.

Description

Image generation method, apparatus, device and medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image generation method, apparatus, device, and medium.
Background
With the development of technology, intelligent terminals have become increasingly popular. An intelligent terminal offers many functions, such as screen capture, by which a user can save the contents of the screen for subsequent use.
In the related art, the captured picture is generally stored directly. Sometimes a screenshot contains information that the user does not want to keep; to remove it, the user often has to process the image with professional image-processing software, which makes the operation cumbersome.
Disclosure of Invention
Embodiments of the present application aim to provide an image generation method, apparatus, device, and medium that can solve the technical problem in the related art that a user must process a screenshot with professional image-processing software, making the operation complex.
In a first aspect, an embodiment of the present application provides an image generation method, where the method includes:
displaying first interface information, wherein the first interface information comprises elements in a first interface and layout information of the elements;
receiving a first input of first interface information by a user;
responding to the first input, updating the content of the first interface information, and obtaining second interface information;
and generating a first target image according to the second interface information.
In a second aspect, an embodiment of the present application provides an image generating apparatus, including:
the first display module is used for displaying first interface information, and the first interface information comprises elements in a first interface and layout information of the elements;
the first receiving module is used for receiving first input of a user on first interface information;
the updating module is used for responding to the first input and updating the content of the first interface information to obtain second interface information;
and the first generating module is used for generating a first target image according to the second interface information.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, a first input from a user on first interface information is received; in response to the first input, the content of the first interface information is updated to obtain second interface information; and a first target image is generated according to the second interface information. Because the first interface information includes the elements in the first interface and their layout information, the user can edit and update both. In other words, the user can modify the content of the first interface information through the first input and then generate the corresponding image, so that professional image-processing software is not needed and the operation is simple.
Drawings
Fig. 1 is a schematic flowchart of an image generation method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another image generation method provided in the embodiments of the present application;
FIG. 3 is a schematic diagram of a process for adjusting an interface layout according to an embodiment of the present application;
FIG. 4 is a schematic process diagram of replacing an avatar provided by an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating a further image generation method provided in an embodiment of the present application;
Figs. 6-14 are schematic diagrams of display interfaces of an electronic device provided by an embodiment of the application;
fig. 15 is a block diagram of an image generating apparatus according to an embodiment of the present application;
fig. 16 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 17 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly below with reference to the drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the embodiments given herein fall within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so labeled may be interchanged under appropriate circumstances, so that embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image generation method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
As described in the background section, sometimes some information that the user does not want to keep exists in the picture generated by the screenshot, and for the information, the user often needs to process the information through professional image processing software, which is cumbersome to operate.
Based on this, the embodiment of the application provides an image generation method, an image generation device, an image generation apparatus and an image generation medium, a user can modify the content in the first interface information through a first input, and then a corresponding image is generated, so that professional image processing software is not required, and the operation is simple.
The image generation method of the embodiments of the present application may be applied to an electronic device. The execution subject of the method may be, but is not limited to, a user terminal configurable to execute the method, such as a mobile phone, a tablet computer, or a wearable device; alternatively, the execution subject may be a client itself capable of executing the method.
For convenience of description, the following description will be made of an embodiment of the method taking as an example that an execution subject of the method is a terminal device capable of executing the method. It is understood that the implementation of the method by the terminal device is only an exemplary illustration, and should not be construed as a limitation of the method.
Fig. 1 shows a schematic flowchart of an image generation method provided in an embodiment of the present application.
As shown in fig. 1, the image generation method may include the steps of:
step 110, displaying first interface information, wherein the first interface information comprises elements in a first interface and layout information of the elements;
step 120, receiving a first input from a user on the first interface information;
step 130, responding to the first input, updating the content of the first interface information to obtain second interface information;
step 140, generating a first target image according to the second interface information.
In the embodiment of the application, a first input of a user to first interface information is received, the content of the first interface information is updated in response to the first input, second interface information is obtained, and a first target image is generated according to the second interface information. That is to say, the user can modify the content in the first interface information through the first input, so as to generate a corresponding image, and a professional image processing software is not required, so that the operation is simple.
The above steps are described in detail below, specifically as follows:
the "element" in the above step may be information or content displayed by the display interface, for example, information or content such as graphics and text in the display interface.
Here, elements may be assigned attributes according to whether they are images or text, or according to the meaning they represent. For the attributes of elements, refer to the following embodiments; details are not repeated here.
The layout information of an element may be a position parameter of the element on the interface where it is located, and the position parameter may describe the arrangement relationship of multiple elements in the interface. It should be noted that when the size of the interface is adjusted, the elements may be adaptively scaled in proportion to that adjustment; however, the layout relationship of the elements, that is, the positional relationship between them, does not change.
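To make the adaptive scaling concrete, here is a minimal sketch, not taken from the patent: element positions and sizes are multiplied by a common ratio when the interface is resized, which leaves the relative layout of the elements unchanged. The `Element` class and `scale_layout` function are illustrative assumptions.

```python
# Sketch: scale element geometry when the interface is resized, preserving
# the relative layout. Names are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class Element:
    content: str
    x: float  # position within the interface
    y: float
    w: float  # size
    h: float

def scale_layout(elements, ratio):
    """Resize every element by the same ratio; relative positions are kept."""
    return [Element(e.content, e.x * ratio, e.y * ratio, e.w * ratio, e.h * ratio)
            for e in elements]
```

Because every coordinate is multiplied by the same ratio, an element to the left of another stays to its left, matching the statement that the positional relationship between elements does not change.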
Here, the first interface may be a communication session interface. A communication session interface often contains private information that should not appear in the generated image and must be processed so that it is not displayed in the first target image. To make the image generation method more convenient for processing a communication session interface, the elements may be divided according to the communication session interface so that the user can process different elements accordingly.
The element above may comprise one of session information, session object name, session object avatar.
The communication session interface may be a session interface of instant-messaging software, or of short messages, a mailbox, private messages, and the like. A session interface typically involves at least two session users, whose conversation generates the session information. A session user may also be referred to as a session object or communication object, and a session object may have a name, an avatar, and so on.
The session information may be the chat records in the communication session interface, which may include text, images, red packets, transfer records, and the like. During element identification, an image bearing the chat background may be identified as session information or as the session interface.
As for the session object name: a session object may be one object, or multiple objects, communicating with the terminal-device user. A session object may have its own nickname or remark name, and either may serve as the session object name here. During identification of the session object name, the text at the top of the session interface may be identified as the session object name.
A session object may have its own avatar, or an avatar provided by the communication software. During avatar identification, picture elements of fixed size and position may be identified as avatars.
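The identification rules above can be sketched as a small classifier; this is a hedged illustration, and the thresholds, field names, and the `AVATAR_SIZE` constant are assumptions rather than anything specified by the patent:

```python
# Sketch of the identification heuristics described above: text at the top of
# the interface is taken as the session object name, and fixed-size pictures
# as avatars; everything else falls back to session information.

AVATAR_SIZE = 40  # assumed fixed avatar size in this sketch

def classify(element):
    """element: dict with 'kind', 'y' (vertical position), 'w', 'h'."""
    if element["kind"] == "text" and element["y"] == 0:
        return "session_object_name"
    if element["kind"] == "picture" and element["w"] == element["h"] == AVATAR_SIZE:
        return "avatar"
    return "session_information"
```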
The first input may be a click input by the user on an element in the first interface, a voice instruction input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture. The click input may be a single-click input, a double-click input, a click input of any number of times, a long-press input, or a short-press input.
The first input may be an input by which the user edits the target element. The editing process may be editing, modifying, or deleting text information; deleting, replacing, enlarging, or reducing a graphic or image; or adding an element, such as a character or a graphic.
The target element can be an element in the first interface information and/or an element added based on the attribute of the element. That is, the target element may be the identified element or may be a newly added element.
As an example, the added element may be the same as the element in the first interface information, for example, by copying the element in the first interface information.
As another example, the added element may be different from the element in the first interface information, for example, the added element may be an element added by the user on the display interface, such as text input by the user or an added graphic. The newly added elements can also be doodles, mosaics and the like.
The element attributes corresponding to the elements are the same as the element attributes in the preceding paragraphs. That is, the attributes of an element may be divided according to the meaning of the element at the interface where it is located.
Note that the editing here operates directly on an element, not on an image, and does not add a mask over an image. Therefore, no obvious modification traces remain on the edited image, and no mosaic is needed to block information.
Wherein the element of the editing process may be private information. For example, private information such as a telephone number, memo information, and the like.
In step 130, in response to the first input, the content of the first interface information is updated to obtain the second interface information.
In this embodiment, the second interface information is interface information obtained by modifying the first interface information. The second interface information here may include elements and layout information of the elements. It should be noted that the interface here may be an interface displayed on the terminal device, and is not an image saved on the terminal device.
And 140, generating a first target image according to the second interface information. Here, the first target image may be generated by image rendering the elements based on the layout information of the elements.
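As a hedged sketch of step 140, the rendering can be thought of as drawing each element at the position given by its layout information. A real implementation would rasterize to pixels; here a character grid stands in for the image, and all names (`render`, the `(row, col, text)` tuple shape) are illustrative assumptions:

```python
# Sketch of "generating the target image by image-rendering the elements based
# on the layout information of the elements". A character grid stands in for
# the rendered image.

def render(elements, width, height):
    """elements: list of (row, col, text). Returns the 'image' as rows of text."""
    grid = [[" "] * width for _ in range(height)]
    for row, col, text in elements:
        for i, ch in enumerate(text):
            if 0 <= row < height and 0 <= col + i < width:
                grid[row][col + i] = ch
    return ["".join(r) for r in grid]
```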
Here, the elements and layout information in the second interface information are updated by the user editing process.
In the image generation method provided by the embodiments of the present application, the user can edit the target element, and the image is generated on the basis of the edited target element, so that no additional modification of the generated image is needed, which improves the user experience.
In some embodiments, to enhance the user experience, on the basis of the above embodiments, before displaying the first interface information in step 110, as shown in fig. 2, the method may further include:
step 210, receiving a second input of the user to the first interface;
in response to the second input, the elements and layout information for the elements in the first interface are copied, step 220.
In step 210, the second input may be a click input by the user on a control in the first interface, a voice instruction input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture. The click input may be a single-click input, a double-click input, a click input of any number of times, a long-press input, or a short-press input.
The second input may be understood as an input that causes the terminal device to perform the image generation operation, for example, a screen-capture operation. It may be a touch operation applied to the screen by the user; the terminal device may be preconfigured with an operation control for image generation, and the second input may be the user's input on that control.
It should be understood that, in practical applications, to facilitate the input of the image generation operation, the terminal device may provide the user with shortcut keys for it, such as a "screen capture" control in a pull-down menu, a menu key, or a volume key; a shortcut key may be a virtual control or a physical key. The user's activation of such a shortcut key may serve as the second input in the embodiments of the present application.
In the embodiments of the present application, the user can independently select an interface from which to generate an image, and the elements and layout information of the selected interface are copied. Thus, when the image is generated, the elements and their layout information can be edited and updated to obtain the image of the interface, improving the user experience.
In order to further improve user experience, the image generation method provided by the embodiment of the application can also perform various editing processes on the elements.
In some embodiments, the above embodiment step 130, in response to the first input, updating the content of the first interface information may include at least one of:
deleting a first element in the first interface information; adding a second element in the first interface information; replacing a third element in the first interface information; updating the content of the fourth element in the first interface information; the layout information is updated.
The first element, the second element, the third element and the fourth element may be different elements or may be the same element, and are not limited herein.
In one example, the first interface may be the communication session interface described above, where the first, second, third, and fourth elements may all be session information, such as a chat log as described above, which may include text, images, red envelope, transfer records, and the like.
That is, the "adding", "replacing", and "updating" may be editing operations such as adding, replacing, or updating characters, images, red packets, and transfer records in the first interface, or may be updating operations on the layout information.
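A minimal sketch of the update operations on the first interface information follows, modeling it as a plain list of `{id, content}` dicts. The function and field names are assumptions for illustration, not the patent's implementation; the layout-updating case is sketched separately in the discussion of Fig. 3.

```python
# The delete / add / replace / update-content operations of step 130,
# each returning the updated (second) interface information.

def delete_element(info, eid):
    return [e for e in info if e["id"] != eid]

def add_element(info, element):
    return info + [element]

def replace_element(info, eid, new):
    return [new if e["id"] == eid else e for e in info]

def update_content(info, eid, content):
    return [{**e, "content": content} if e["id"] == eid else e for e in info]
```

Each operation leaves the original list untouched and produces a new one, mirroring how the second interface information is obtained from the first.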
In the embodiment of the application, the user can perform various editing processing operations on the elements in the first interface information and the layout information thereof, so that the user can modify the elements in the processing interface conveniently.
The image generation method provided by the embodiment of the application can further adjust the layout information of each element in the second interface when the first interface information is updated.
In some embodiments, updating the layout information in step 130 may include at least one of the following:
in the case where a first target element at a first position in the first interface information is deleted, controlling a second target element at a second position in the first interface information to move to a third position;
in the case where a third target element is added at a fourth position in the first interface information, controlling the fourth target element originally at the fourth position to move to a fifth position;
and exchanging the positions of a fifth target element and a sixth target element in the first interface information.
Here, the updated layout information may be layout information of elements in the adjustment interface, and the adjustment may specifically be a change in a layout position of the elements.
The third position may be the same as or different from the first position.
As shown in fig. 3, diagram (a) of fig. 3 includes a target element, red-packet record 301, and the first input on the red-packet record 301 may be an input to delete it.
A control 302 for deleting the red-packet record may be displayed in the first interface, and the first input may be an input on the control 302. After the red-packet record is deleted, the interface may be as shown in diagram (b) of fig. 3: the chat records below the original red-packet record are automatically adjusted and moved upward, and the intervals between the chat records remain equal after the move, meeting the layout requirement of the communication interface.
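The reflow after a deletion can be sketched as follows; this is an illustrative assumption about one way to keep the gaps uniform, with `reflow`, the `(name, height)` record shape, and `gap` all hypothetical names:

```python
# Sketch of the Fig. 3 layout adjustment: after a record is deleted, the
# records below move up so the gaps between records stay uniform.

def reflow(records, gap):
    """records: list of (name, height). Returns list of (name, y) positions."""
    y, placed = 0, []
    for name, height in records:
        placed.append((name, y))
        y += height + gap
    return placed
```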
To further improve the user's element-editing efficiency and user experience, in some embodiments, on the basis of the above embodiments, the method may further include:
displaying a first replacement control, wherein the first replacement control is used for replacing at least two third elements in the first interface information;
the first input comprises a first sub-input and a second sub-input;
receiving a first input of first interface information by a user may include:
receiving a first sub-input of a user to a first replacement control;
updating the first interface information in response to the first input may include:
displaying at least one replacement element in response to the first sub-input;
receiving a second sub-input of the user to a target replacement element of the at least one replacement element;
and in response to the second sub-input, controlling at least two third elements in the first interface information to be replaced by the target replacement element.
As one example, the third element may be a conversation object avatar.
With the method in the embodiments of the present application, the user can replace all associated avatars at once instead of replacing them one by one, which improves editing efficiency and user experience.
As shown in fig. 4, the first replacement control may be an avatar replacement control, which may include a "replace left avatar" control 401 and a "replace right avatar" control 402 in fig. 4. In the interface of diagram (a) of fig. 4, the user clicks the replace-left-avatar control, and several preset avatars 403 pop up at the bottom for selection, as in diagram (b) of fig. 4; the user may also use a custom avatar, a mosaic function, and the like. When the user clicks a preset avatar, the left avatar is replaced by it, as shown in diagram (c) of fig. 4.
According to the embodiments of the present application, when replacing avatars in a communication interface, there is no need to replace them one by one; alternative preset avatars can be provided, which facilitates the user's operation, improves operating efficiency, and improves the user experience.
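The one-tap replacement can be sketched as replacing every element that shares the selected attribute. This is a hedged illustration; the `replace_all` name and the `attr`/`content` fields are assumptions, not the patent's data model:

```python
# Sketch of the Fig. 4 behavior: all elements sharing the selected attribute
# (e.g. every "left avatar") are swapped for the target replacement element
# in a single operation.

def replace_all(elements, attribute, target):
    return [dict(e, content=target) if e["attr"] == attribute else e
            for e in elements]
```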
In some embodiments, in order to meet the image generation requirements of the user for multiple interfaces and improve the use experience of the user, on the basis of the above embodiments, after the step 110 of displaying the first interface information, as shown in fig. 5, the method may further include:
step 510, receiving a third input of the user;
step 520, responding to a third input, and under the condition that the first interface information is displayed, simultaneously displaying third interface information;
step 530, generating a second target image according to the first interface information and the third interface information.
Here, the implementation process of the third input is similar to that of the first input in the foregoing, and is not described in detail here.
In addition, the foregoing update process for the first interface information is also applicable to update the third interface information, and specific reference is made to the foregoing, which is not described herein again.
It should be noted that when the first interface information and the third interface information are displayed simultaneously, the interface sizes corresponding to each may be adaptively adjusted according to the size of the display interface of the terminal device and a preset layout rule, so that both can be displayed on the same display interface.
The layout rule may be preset, or the user may adjust an element or an interface therein based on the preset layout rule. That is, the elements in the first interface information and the third interface information in the above steps are adjustable.
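As one hedged illustration of such a preset layout rule, each interface could be scaled to fit half of the display width; the rule, the `fit_two` name, and the `(width, height)` tuples are assumptions for the sketch only:

```python
# Sketch: scale two interfaces so both fit the device display side by side,
# under an assumed half-width layout rule.

def fit_two(display_w, display_h, size_a, size_b):
    """Return scale factors so each (w, h) interface fits half the display."""
    half = display_w / 2
    def scale(w, h):
        return min(half / w, display_h / h)
    return scale(*size_a), scale(*size_b)
```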
In the embodiments of the present application, the user can generate a target image based on several different interfaces, which offers more choices for image generation. The user no longer needs to generate two images separately and then combine them, improving processing efficiency and user experience.
To further improve the user experience and to help the user recognize that each element in the interface information can be edited, on the basis of the above embodiment, a corresponding operation control may also be displayed while the first interface information is displayed.
In some embodiments, the method may further comprise:
displaying a first editing control in the area where the first interface information is located, and displaying a second editing control in the area where the third interface information is located, wherein the first editing control is used for updating the content of the first interface information, and the second editing control is used for updating the content of the third interface information;
or displaying a third editing control, wherein the third editing control is used for updating the content of the target interface information, and the target interface information comprises at least one of the following items: first interface information and third interface information.
Here, the user may edit and update the elements in each interface individually, or may edit and update the elements in the first interface and the third interface collectively.
In some embodiments, an "edit control" may include multiple edit controls, each for editing a different element, e.g., a target operation control corresponding to a target element property may be displayed.
Correspondingly, receiving the first input of the user specifically includes: receiving an input of the user on the target operation control corresponding to the target element attribute.
The target operation control may be displayed as an operation identifier, an icon, text, and the like. The functions corresponding to the target operation control may include editing, deleting, replacing, resizing, and the like.
Target elements with different attributes may correspond to target operation controls with different functions.
As one example, the attribute of the target element may be text or graphic. In the case that the attribute of the target element is text, the functions of the target operation control may be editing, deleting, font adjustment, font size adjustment, and the like. In the case that the attribute of the target element is a graphic, the functions of the target operation control may be deletion, replacement, graphic color adjustment, graphic size adjustment, and the like.
As another example, the attribute of the target element may be editable or non-editable. In the case that the attribute of the target element is editable, the functions of the target operation control may be editing, deleting, resizing, and the like. In the case that the attribute of the target element is non-editable, the functions of the target operation control may be deletion, replacement, resizing, and the like.
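The attribute-to-function correspondence described in the two examples above can be sketched as a lookup table. This is a minimal illustrative sketch; the attribute names and function identifiers are assumptions for illustration, not part of the original disclosure.

```python
# Hypothetical mapping from a target element's attribute to the functions its
# target operation control may offer, following the two examples above.

OPS_BY_ATTRIBUTE = {
    "text":         ["edit", "delete", "adjust_font", "adjust_font_size"],
    "graphic":      ["delete", "replace", "adjust_color", "adjust_size"],
    "editable":     ["edit", "delete", "resize"],
    "non_editable": ["delete", "replace", "resize"],
}

def controls_for(attribute):
    """Return the operation-control functions for a given element attribute."""
    return OPS_BY_ATTRIBUTE.get(attribute, [])
```

A real implementation would presumably attach these functions to the displayed control (identifier, icon, or text) for the element under the user's input.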
As shown in figs. 6a-6c, the target element may be the session object name, i.e., "contact 1" in fig. 6a. The target operation control corresponding to the session object name may be the "modify name remark" control 601 in fig. 6a. After the user clicks the "modify name remark" control 601, an input box may pop up as shown in fig. 6b, where the new name may be entered, for example, "alias contact 1". After confirmation, "contact 1" is modified to "alias contact 1" as shown in fig. 6c.
As shown in figs. 7a-7c, the target element may be red envelope information, i.e., red envelope information 701 in fig. 7a. The target operation control corresponding to the red envelope information 701 may be the "edit red envelope title" control 702 in fig. 7a. After the user clicks the "edit red envelope title" control 702, an input box may pop up as shown in fig. 7b, where the new title may be entered, for example, "Baby, I was wrong." After confirmation, the red envelope title "May you be happy and prosperous!" is modified to "Baby, I was wrong."
As shown in figs. 8a-8b, the target element may be red envelope information, i.e., red envelope information 801 in fig. 8a. The target operation control corresponding to the red envelope information 801 may be the "delete red envelope record" control 802 in fig. 8a. After the user clicks the "delete red envelope record" control 802, the red envelope information 801 is deleted as shown in fig. 8b, and by updating the layout information and adjusting the interface layout as described in step 130 above, the session information below the red envelope information 801 is moved up.
In the case where the target element is transfer information, the function of the target operation control may be a transfer deletion function: the user clicks the transfer deletion control to delete the transfer message. For transfer information, the function of the target operation control may also be hiding the transfer amount. As shown in figs. 9a-9b, the target element may be a transfer record, namely transfer record 901 in fig. 9a. The target operation control corresponding to the transfer record 901 may be the "hide transfer amount" control 902 in fig. 9a. When the user clicks the "hide transfer amount" control 902, the transfer amount in the transfer record 901 is hidden; as shown in fig. 9b, the transfer amount is blocked by a graphic 903. In this way, the user can protect privacy and prevent information that should not be disclosed from being leaked.
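The "hide transfer amount" behavior can be sketched as follows, assuming a transfer record is modeled as a simple dictionary whose amount field is replaced by a masking placeholder (standing in for the blocking graphic 903); all names are hypothetical.

```python
# Hypothetical sketch: hide the amount in a transfer record before the image
# is generated, leaving the original record untouched.

def hide_transfer_amount(record, mask="\u2588\u2588\u2588"):
    """Return a copy of a transfer record with its amount masked."""
    masked = dict(record)  # shallow copy so the source record is unchanged
    masked["amount"] = mask
    return masked

record = {"type": "transfer", "amount": "$200.00", "to": "contact 1"}
masked = hide_transfer_amount(record)
```

The masking value could equally be an image overlay; a placeholder string is used here only to keep the sketch self-contained.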
As shown in figs. 10a-10b, the target element may be session information, that is, session information 1001 in fig. 10a. The target operation control corresponding to the session information 1001 may be the "delete this piece of information" control 1002 in fig. 10a. When the user clicks the "delete this piece of information" control 1002, the session information 1001 is deleted as shown in fig. 10b, and by updating the layout information and adjusting the interface layout as described in step 130 above, the session information below the session information 1001 is moved up.
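The delete-and-move-up behavior invoked in the examples above (updating the layout information as in step 130) can be sketched under an assumed data model in which each element carries a vertical position and a height; all names are hypothetical.

```python
# Hypothetical sketch of step 130's layout update: delete one element and
# shift every element below it upward by the deleted element's height.

def delete_and_reflow(elements, target_id):
    """Remove the element with target_id; move later elements up by its height."""
    kept, removed_h, removed_y = [], 0, None
    for el in elements:
        if el["id"] == target_id:
            removed_h, removed_y = el["height"], el["y"]
            continue
        kept.append(dict(el))
    if removed_y is not None:
        for el in kept:
            if el["y"] > removed_y:
                el["y"] -= removed_h
    return kept

session = [
    {"id": "msg1", "y": 0,   "height": 80},
    {"id": "red1", "y": 80,  "height": 120},  # e.g. a red envelope record
    {"id": "msg2", "y": 200, "height": 80},
]
updated = delete_and_reflow(session, "red1")
```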
Referring to figs. 11a-11c, the second input described above may be an input on the "add screenshot scene one" control 1101 in fig. 11a. In step 510, a third input from the user is received; the third input may be an input on the "add screenshot scene two" control 1103 in fig. 11b. After the third input, the third interface information may be displayed simultaneously with the first interface information, as shown in fig. 11c. In addition, after the third input, a fourth input may be received, which may be an input on the "add complete" control 1102 in fig. 11b; in response to the fourth input, the third interface information is displayed while the first interface information is displayed.
In the case where the interface includes the first interface information and the third interface information, the processing procedure is similar to that in the above-described embodiment.
As shown in fig. 12, on the basis of fig. 11c in the above embodiment, the target element may be a session object name, that is, "contact 1" or "contact 2" in fig. 12. The target operation control corresponding to the session object name may be the "modify name remark" control 1201 or the "modify name remark" control 1202 in fig. 12. After the user clicks the "modify name remark" control 1201, an input box may pop up as shown in fig. 13a, where the new name may be entered, for example, "alias contact 1". After confirmation, "contact 1" is modified to "alias contact 1" 1301 as shown in fig. 13b.
As shown in figs. 14a-14b, the target element may be red envelope information, that is, red envelope information 1401 in fig. 14a. The target operation control corresponding to the red envelope information 1401 may be the "delete red envelope record" control 1402 in fig. 14a. After the user clicks the "delete red envelope record" control 1402, the red envelope information 1401 is deleted as shown in fig. 14b, and by updating the layout information and adjusting the interface layout as described in step 130 above, the session information below the red envelope information 1401 is moved up.
In the embodiment of the application, editing controls can be displayed while the interface information is displayed, and the user can edit and update the elements in each interface individually, or edit and update the elements in the first interface and the third interface collectively. This makes it easy for the user to recognize that each element can be edited, further improving the user experience.
In addition, in the embodiment of the application, elements can be classified according to their attributes in the interface, and different elements can correspond to different editing operations, so that the user can conveniently process different elements accordingly.
In the image generation method provided in the embodiment of the present application, the execution subject may be an image generation apparatus, or a control module in the image generation apparatus for executing the image generation method. The image generation apparatus provided in the embodiment of the present application is described by taking an image generation apparatus that executes the image generation method as an example.
Fig. 15 shows a schematic structural diagram of an image generation apparatus provided in an embodiment of the present application.
As shown in fig. 15, the image generation apparatus 1500 may include:
a first display module 1510 configured to display first interface information, where the first interface information includes elements and layout information of the elements in the first interface;
a first receiving module 1520, configured to receive a first input of the first interface information by the user;
an updating module 1530, configured to update the content of the first interface information in response to the first input, to obtain second interface information;
a first generating module 1540, configured to generate a first target image according to the second interface information.
In the embodiment of the application, a first input of a user to first interface information is received, the content of the first interface information is updated in response to the first input, second interface information is obtained, and a first target image is generated according to the second interface information. The first interface information comprises elements and layout information of the elements in the first interface, and a user can edit and update the elements and the layout information of the elements in the first interface, namely the user can modify the content in the first interface information through first input to generate a corresponding image, so that professional image processing software is not needed, and the operation is simple.
In some embodiments, the image generation apparatus 1500 may further include:
the second receiving module may be configured to receive a second input of the first interface by the user before displaying the first interface information;
a copy module may be to copy the elements and layout information of the elements in the first interface in response to the second input.
In the embodiment of the application, a user can independently select one interface to generate the image and copy the elements and the layout information of the elements in the selected interface, so that the elements and the layout information of the elements in the interface can be edited and updated when the image is generated, a corresponding image is obtained, and the use experience of the user is improved.
In some embodiments, updating the content of the first interface information may include at least one of:
deleting a first element in the first interface information; adding a second element to the first interface information; replacing a third element in the first interface information; updating the content of a fourth element in the first interface information; and updating the layout information.
In the embodiment of the application, the user can perform various editing operations on the elements in the first interface information and on its layout information, which makes it convenient for the user to modify the elements in the interface being processed.
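The editing operations listed above can be sketched as a single dispatcher over an assumed interface-information model (a dictionary of element identifiers to contents, plus separate layout information); all names are hypothetical and not part of the original disclosure.

```python
# Hypothetical dispatcher for the content-update operations: delete, add,
# replace, and update an element. Layout updates are handled separately.

def update_interface_info(info, op, **kw):
    """Apply one content-update operation to a copy of the interface info."""
    elements = dict(info["elements"])  # copy so the input stays intact
    if op == "delete":
        elements.pop(kw["element_id"], None)
    elif op == "add":
        elements[kw["element_id"]] = kw["content"]
    elif op in ("replace", "update_content"):
        elements[kw["element_id"]] = kw["content"]
    else:
        raise ValueError(f"unknown operation: {op}")
    return {"elements": elements, "layout": info["layout"]}

info = {"elements": {"title": "chat"}, "layout": {}}
info2 = update_interface_info(info, "add", element_id="msg", content="hi")
```

The second interface information of the method would then be the result of applying one or more such operations to the first interface information.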
In some embodiments, updating the layout information may include:
under the condition that the first target element at the first position in the first interface information is deleted, controlling the second target element at the second position in the first interface information to move to the third position;
under the condition that a third target element is added to a fourth position in the first interface information, controlling an original fourth target element of the fourth position in the first interface information to move to a fifth position;
and exchanging the positions of the fifth target element and the sixth target element in the first interface information.
In the embodiment of the application, the interface after the layout information is updated better meets the layout requirement of the communication interface.
In some embodiments, the image generation apparatus may further include:
the second display module may be configured to display a first replacement control, where the first replacement control is used to replace at least two third elements in the first interface information;
the first input may include a first sub-input and a second sub-input;
a first receiving module 1520, which may be specifically configured to receive a first sub-input of the first replacement control by the user;
the update module 1530 may include:
a display unit, which may be configured to display at least one replacement element in response to the first sub-input;
a receiving unit, which may be configured to receive a second sub-input of the user on a target replacement element of the at least one replacement element;
a replacing unit, which may be configured to, in response to the second sub-input, control at least two third elements in the first interface information to be replaced with the target replacement element.
In the embodiment of the application, when the avatars in a communication interface are replaced, they do not need to be replaced one by one; alternative preset avatars can be provided, which facilitates the user's operation, improves operation efficiency, and improves the user experience.
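The batch replacement performed by the replacing unit can be sketched as follows: every element of the kind being replaced (e.g., an avatar) is swapped for the single target replacement element in one pass, rather than one by one. The data model and names here are assumptions for illustration.

```python
# Hypothetical sketch: replace all elements of one kind (e.g. every avatar)
# with the target replacement element chosen by the user's second sub-input.

def replace_all(elements, kind, target):
    """Return a copy of elements with every element of `kind` set to target."""
    out = []
    for el in elements:
        el = dict(el)
        if el["kind"] == kind:
            el["content"] = target
        out.append(el)
    return out

ui = [
    {"kind": "avatar", "content": "photo_a.png"},
    {"kind": "text",   "content": "hello"},
    {"kind": "avatar", "content": "photo_b.png"},
]
new_ui = replace_all(ui, "avatar", "preset_avatar_3.png")
```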
In some embodiments, the image generation apparatus may further include:
the third receiving module may be configured to receive a third input of the user after the first interface information is displayed;
the third display module can be used for responding to a third input and simultaneously displaying third interface information under the condition of displaying the first interface information;
the second generating module may be configured to generate a second target image according to the first interface information and the third interface information.
In the embodiment of the application, the user can generate a target image based on a plurality of different interfaces, which provides the user with multiple choices for image generation. The user no longer needs to generate two images separately and then combine them, which improves the user's processing efficiency and the user experience.
In some embodiments, the image generation apparatus may further include:
the fourth display module may be configured to display a first editing control in a region where the first interface information is located, and display a second editing control in a region where the third interface information is located, where the first editing control is used to update content of the first interface information, and the second editing control is used to update content of the third interface information;
or displaying a third editing control, wherein the third editing control is used for updating the content of the target interface information, and the target interface information comprises at least one of the following items: first interface information and third interface information.
In the embodiment of the application, editing controls can be displayed while the interface information is displayed, and the user can edit and update the elements in each interface individually, or edit and update the elements in the first interface and the third interface collectively. This makes it easy for the user to recognize that each element can be edited, further improving the user experience.
In addition, in the embodiment of the application, elements can be classified according to their attributes in the interface, and different elements can correspond to different editing operations, so that the user can conveniently process different elements accordingly.
The image generation device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, and the like; the embodiments of the present application are not particularly limited in this respect.
The image generation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image generation apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to 14, and is not described here again to avoid repetition.
Optionally, as shown in fig. 16, an electronic device 1600 further provided in an embodiment of the present application includes a processor 1601, a memory 1602, and a program or an instruction stored in the memory 1602 and executable on the processor 1601, where the program or the instruction is executed by the processor 1601 to implement each process of the above-mentioned embodiment of the image generation method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 17 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1700 includes, but is not limited to: radio frequency unit 1701, network module 1702, audio output unit 1703, input unit 1704, sensor 1705, display unit 1706, user input unit 1707, interface unit 1708, memory 1709, and processor 1710.
Those skilled in the art will appreciate that the electronic device 1700 may also include a power supply (e.g., a battery) for powering the various components; the power supply may be logically coupled to the processor 1710 via a power management system, so that charging, discharging, and power consumption management functions are implemented through the power management system. The electronic device structure shown in fig. 17 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described in detail here.
The display unit 1706 is configured to display first interface information, where the first interface information includes elements and layout information of the elements in the first interface;
a user input unit 1707 configured to receive a first input of first interface information by a user;
a processor 1710, configured to update content of the first interface information in response to the first input, to obtain second interface information;
the processor 1710 is further configured to generate a first target image according to the second interface information.
In the embodiment of the application, a first input of a user to first interface information is received, the content of the first interface information is updated in response to the first input, second interface information is obtained, and a first target image is generated according to the second interface information. The first interface information comprises elements and layout information of the elements in the first interface, and a user can edit and update the elements and the layout information of the elements in the first interface, namely the user can modify the content in the first interface information through first input to generate a corresponding image, so that professional image processing software is not needed, and the operation is simple.
Optionally, the user input unit 1707 may be further configured to receive a second input of the first interface from the user before displaying the first interface information;
the processor 1710, may also be configured to copy elements and layout information for the elements in the first interface in response to the second input.
In the embodiment of the application, a user can independently select one interface to generate the image and copy the elements and the layout information of the elements in the selected interface, so that the elements and the layout information of the elements in the interface can be edited and updated when the image is generated, a satisfactory image is obtained, and the use experience of the user is improved.
Optionally, the processor 1710 may be further specifically configured to perform the following operations:
under the condition that the first target element at the first position in the first interface information is deleted, controlling the second target element at the second position in the first interface information to move to the third position;
under the condition that a third target element is added to a fourth position in the first interface information, controlling an original fourth target element of the fourth position in the first interface information to move to a fifth position;
and exchanging the positions of the fifth target element and the sixth target element in the first interface information.
In the embodiment of the application, the interface after the layout information is updated better meets the layout requirement of the communication interface.
Optionally, the display unit 1706 may be further configured to display a first replacement control, where the first replacement control is used to replace at least two third elements in the first interface information;
the first input comprises a first sub-input and a second sub-input;
a user input unit 1707, which may be specifically configured to receive a first sub-input of the first replacement control by the user;
processor 1710, which may be specifically configured to perform the following operations:
displaying at least one replacement element in response to the first sub-input;
a user input unit 1707, specifically configured to receive a second sub-input of a target replacement element in the at least one replacement element by a user;
the processor 1710, which may be specifically configured to control, in response to the second sub-input, at least two third elements in the first interface information to be replaced with the target replacement element.
In the embodiment of the application, when the avatars in a communication interface are replaced, they do not need to be replaced one by one; alternative preset avatars can be provided, which facilitates the user's operation, improves operation efficiency, and improves the user experience.
Optionally, the user input unit 1707 may be further configured to receive a third input from the user after the first interface information is displayed;
the display unit 1706 may be further configured to, in response to a third input, simultaneously display third interface information in a case where the first interface information is displayed;
the processor 1710 may be further configured to generate a second target image according to the first interface information and the third interface information.
In the embodiment of the application, the user can generate a target image based on a plurality of different interfaces, which provides the user with multiple choices for image generation. The user no longer needs to generate two images separately and then combine them, which improves the user's processing efficiency and the user experience.
Optionally, the display unit 1706 may be further configured to display a first editing control in a region where the first interface information is located, and display a second editing control in a region where the third interface information is located, where the first editing control is used to update content of the first interface information, and the second editing control is used to update content of the third interface information;
or displaying a third editing control, wherein the third editing control is used for updating the content of the target interface information, and the target interface information comprises at least one of the following items: first interface information and third interface information.
In the embodiment of the application, editing controls can be displayed while the interface information is displayed, and the user can edit and update the elements in each interface individually, or edit and update the elements in the first interface and the third interface collectively. This makes it easy for the user to recognize that each element can be edited, further improving the user experience.
In addition, in the embodiment of the application, elements can be classified according to their attributes in the interface, and different elements can correspond to different editing operations, so that the user can conveniently process different elements accordingly.
It should be understood that in the embodiment of the present application, the input Unit 1704 may include a Graphics Processing Unit (GPU) 17041 and a microphone 17042, and the Graphics Processing Unit 17041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1706 may include a display panel 17061, and the display panel 17061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. User input unit 1707 includes a touch panel 17071 and other input devices 17072. A touch panel 17071, also referred to as a touch screen. The touch panel 17071 may include two parts, a touch detection device and a touch controller. Other input devices 17072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1709 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1710 can integrate an application processor, which primarily handles operating systems, user interfaces, application programs, and the like, and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the image generation method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image generation method, characterized in that the method comprises:
displaying first interface information, wherein the first interface information comprises elements in a first interface and layout information of the elements;
receiving a first input of the first interface information by a user;
in response to the first input, updating the content of the first interface information to obtain second interface information;
and generating a first target image according to the second interface information.
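The flow of claim 1 can be sketched in code. This is a minimal illustration only, not the patented implementation: the data model (elements keyed by id plus a layout map of id to row position) and the text-line "image" are assumptions made for the sketch; actual rasterization would draw the elements to a bitmap.

```python
from dataclasses import dataclass

@dataclass
class InterfaceInfo:
    # First interface information: elements in the first interface and
    # their layout (element id -> displayed text, element id -> row position).
    elements: dict
    layout: dict

def update_content(info: InterfaceInfo, edit) -> InterfaceInfo:
    # Apply a user's first input (an edit operation) and return the
    # second interface information, leaving the original untouched.
    elements, layout = dict(info.elements), dict(info.layout)
    op = edit[0]
    if op == "delete":
        eid = edit[1]
        elements.pop(eid)
        layout.pop(eid)
    elif op == "update":
        eid, content = edit[1], edit[2]
        elements[eid] = content
    return InterfaceInfo(elements, layout)

def generate_image(info: InterfaceInfo) -> list:
    # Stand-in for rasterization: one rendered line per element, in layout order.
    order = sorted(info.elements, key=lambda eid: info.layout[eid])
    return [info.elements[eid] for eid in order]

first = InterfaceInfo({"a": "Hello", "b": "World"}, {"a": 0, "b": 1})
second = update_content(first, ("update", "b", "Everyone"))
image = generate_image(second)  # ["Hello", "Everyone"]
```

Note that `update_content` copies rather than mutates, so the first interface information survives the edit, mirroring the claim's distinction between first and second interface information.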
2. The method of claim 1, wherein, prior to displaying the first interface information, the method further comprises:
receiving a second input of the first interface from the user;
in response to the second input, copying elements in the first interface and layout information for the elements.
3. The method of claim 1, wherein the updating the content of the first interface information comprises at least one of:
deleting a first element in the first interface information;
adding a second element to the first interface information;
replacing a third element in the first interface information;
updating the content of a fourth element in the first interface information; and
updating the layout information.
4. The method of claim 3, wherein the updating the layout information comprises:
in a case where a first target element at a first position in the first interface information is deleted, controlling a second target element at a second position in the first interface information to move to a third position;
in a case where a third target element is added at a fourth position in the first interface information, controlling a fourth target element originally at the fourth position in the first interface information to move to a fifth position;
and exchanging the positions of the fifth target element and the sixth target element in the first interface information.
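The three layout updates of claim 4 can be sketched as pure functions over a layout map. This is an illustrative sketch, not the patented implementation: modeling positions as integer slots in an `id -> position` dictionary is an assumption made for the example.

```python
def reflow_after_delete(layout: dict, deleted_id: str) -> dict:
    # Case 1: when the target element at a first position is deleted,
    # elements at later positions move up by one slot.
    pos = layout[deleted_id]
    return {eid: p - 1 if p > pos else p
            for eid, p in layout.items() if eid != deleted_id}

def reflow_after_insert(layout: dict, new_id: str, pos: int) -> dict:
    # Case 2: when an element is added at a fourth position, the element
    # originally at that position (and those after it) moves down by one slot.
    shifted = {eid: p + 1 if p >= pos else p for eid, p in layout.items()}
    shifted[new_id] = pos
    return shifted

def swap_positions(layout: dict, a: str, b: str) -> dict:
    # Case 3: exchange the positions of two elements.
    out = dict(layout)
    out[a], out[b] = layout[b], layout[a]
    return out

layout = {"x": 0, "y": 1, "z": 2}
```

For example, `reflow_after_delete(layout, "y")` yields `{"x": 0, "z": 1}`, and `swap_positions(layout, "x", "z")` yields `{"x": 2, "y": 1, "z": 0}`.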
5. The method of claim 3, further comprising:
displaying a first replacement control for replacing at least two third elements in the first interface information;
the first input comprises a first sub-input and a second sub-input;
the receiving a first input of the first interface information by a user includes:
receiving a first sub-input of a user to the first replacement control;
the updating the first interface information in response to the first input includes:
displaying at least one replacement element in response to the first sub-input;
receiving a second sub-input of a user to a target replacement element of the at least one replacement element;
and in response to the second sub-input, controlling at least two of the third elements in the first interface information to be replaced by the target replacement element.
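The batch replacement of claim 5, where at least two third elements are replaced by a single target replacement element chosen via the second sub-input, can be sketched as follows. The `id -> content` representation of elements is an assumption made for this illustration.

```python
def replace_elements(elements: dict, third_ids: set, target: str) -> dict:
    # Replace every selected "third element" with the target replacement
    # element in one step; all other elements are left untouched.
    return {eid: (target if eid in third_ids else content)
            for eid, content in elements.items()}

elements = {"e1": "old", "e2": "old", "e3": "keep"}
updated = replace_elements(elements, {"e1", "e2"}, "new")
```

Here `updated` replaces both selected elements with `"new"` while `"e3"` is unchanged, matching the claim's one-input replacement of at least two elements.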
6. The method of claim 1, wherein, after displaying the first interface information, the method further comprises:
receiving a third input of the user;
in response to the third input, displaying third interface information while the first interface information is displayed;
and generating a second target image according to the first interface information and the third interface information.
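Claim 6 generates one second target image from the first and third interface information displayed together. A minimal sketch, under the assumption that each set of interface information maps element ids to `(position, text)` pairs and that the combined "image" is a vertical stack of rendered lines with a divider:

```python
def render(info: dict) -> list:
    # Stand-in for rasterizing one set of interface information:
    # emit each element's text in position order.
    return [text for _, text in sorted(info.values())]

def generate_combined_image(first_info: dict, third_info: dict) -> list:
    # While both sets of interface information are displayed, generate a
    # single second target image containing both, stacked vertically.
    return render(first_info) + ["-" * 8] + render(third_info)

first_info = {"a": (0, "Chat A, line 1"), "b": (1, "Chat A, line 2")}
third_info = {"c": (0, "Chat B, line 1")}
combined = generate_combined_image(first_info, third_info)
```

Stacking with a divider is one possible composition; the claim itself does not fix the layout of the combined image.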
7. The method of claim 6, further comprising:
displaying a first editing control in a region where the first interface information is located, and displaying a second editing control in a region where the third interface information is located, wherein the first editing control is used for updating the content of the first interface information, and the second editing control is used for updating the content of the third interface information;
or displaying a third editing control, wherein the third editing control is used for updating the content of target interface information, and the target interface information comprises at least one of the following: the first interface information and the third interface information.
8. An image generation apparatus, characterized in that the apparatus comprises:
the first display module is used for displaying first interface information, and the first interface information comprises elements in a first interface and layout information of the elements;
the first receiving module is used for receiving first input of the first interface information by a user;
the updating module is used for responding to the first input and updating the content of the first interface information to obtain second interface information;
and the first generating module is used for generating a first target image according to the second interface information.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the image generation method of any one of claims 1 to 7.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image generation method according to any one of claims 1 to 7.
CN202111249208.3A 2021-10-26 2021-10-26 Image generation method, apparatus, device and medium Pending CN113989421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111249208.3A CN113989421A (en) 2021-10-26 2021-10-26 Image generation method, apparatus, device and medium

Publications (1)

Publication Number Publication Date
CN113989421A true CN113989421A (en) 2022-01-28

Family

ID=79741738

Similar Documents

Publication Publication Date Title
CN113300938B (en) Message sending method and device and electronic equipment
JP2020516994A (en) Text editing method, device and electronic device
CN107102786B (en) Information processing method and client
WO2023131055A1 (en) Message sending method and apparatus, and electronic device
WO2022063045A1 (en) Message display method and apparatus, and electronic device
WO2023040845A1 (en) Message transmission method and apparatus, and electronic device
WO2023040896A1 (en) Content sharing method and apparatus, and electronic device
CN112672061A (en) Video shooting method and device, electronic equipment and medium
CN113179205A (en) Image sharing method and device and electronic equipment
CN115357158A (en) Message processing method and device, electronic equipment and storage medium
CN112306590B (en) Screenshot generating method and related device
CN114116098B (en) Application icon management method and device, electronic equipment and storage medium
WO2024083018A1 (en) Information processing method and apparatus, and electronic device
CN112099714B (en) Screenshot method and device, electronic equipment and readable storage medium
WO2023155874A1 (en) Application icon management method and apparatus, and electronic device
CN111857503A (en) Display method, display device and electronic equipment
WO2022247787A1 (en) Application classification method and apparatus, and electronic device
CN113852540B (en) Information transmission method, information transmission device and electronic equipment
CN113989421A (en) Image generation method, apparatus, device and medium
CN113835578A (en) Display method and device and electronic equipment
CN112162681A (en) Text operation execution method and device and electronic equipment
CN113037618B (en) Image sharing method and device
CN112035032B (en) Expression adding method and device
CN113393373B (en) Icon processing method and device
CN117742538A (en) Message display method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination