CN112165553A - Image generation method and device and electronic equipment
- Publication number: CN112165553A (application CN202011039599.1A)
- Authority: CN (China)
- Prior art keywords: target, input, chat, contact, image
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
Abstract
The application discloses an image generation method and apparatus and an electronic device, and belongs to the field of communication technology. The method can solve the problem that the operation process of obtaining an image to be shared is tedious and time-consuming. The method includes: receiving a first input in a target chat interface, where the first input is used to trigger the electronic device to identify chat records in the target chat interface; in response to the first input, displaying N contact identifiers, where the N contact identifiers correspond to M chat records in the target chat interface, N and M are positive integers, and M is greater than or equal to N; receiving a second input on a target contact identifier; and, in response to the second input, generating an image to be shared, where the image to be shared includes a target chat record corresponding to a target contact, and the target contact is the contact corresponding to the target contact identifier. The method can be applied to scenarios in which an image to be shared is generated from chat records.
Description
Technical Field
The embodiments of the present application relate to the field of communication technology, and in particular to an image generation method and apparatus and an electronic device.
Background
An electronic device can be installed with multiple communication applications to meet a user's different usage needs, and the user can share chat records between these communication applications.
At present, if a user wants to share part of the chat records in application A with a friend in application B, the user has to search for the chat records to be shared in application A and trigger the electronic device to capture and save screenshots of them. The user then performs processing operations such as cropping, stitching, and masking on the captured screenshots to obtain an image to be shared that contains those chat records. As a result, the operation process of obtaining the image to be shared is tedious and time-consuming.
Disclosure of Invention
The embodiments of the present application aim to provide an image generation method, an image generation apparatus, and an electronic device that can solve the problem that the operation process for obtaining an image to be shared is tedious and time-consuming.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image generation method. The method includes: receiving a first input in a target chat interface, where the first input is used to trigger the electronic device to identify chat records in the target chat interface; in response to the first input, displaying N contact identifiers, where the N contact identifiers correspond to M chat records in the target chat interface, N and M are positive integers, and M is greater than or equal to N; receiving a second input on a target contact identifier; and, in response to the second input, generating an image to be shared, where the image to be shared includes a target chat record corresponding to a target contact, and the target contact is the contact corresponding to the target contact identifier.
In a second aspect, an embodiment of the present application provides an image generation apparatus, including a receiving module, a display module, and a processing module. The receiving module is configured to receive a first input in the target chat interface, where the first input is used to trigger the electronic device to identify the chat records in the target chat interface. The display module is configured to display N contact identifiers in response to the first input received by the receiving module, where the N contact identifiers correspond to M chat records in the target chat interface, N and M are positive integers, and M is greater than or equal to N. The receiving module is further configured to receive a second input on a target contact identifier. The processing module is configured to generate, in response to the second input received by the receiving module, an image to be shared, where the image to be shared includes a target chat record corresponding to the target contact, and the target contact is the contact corresponding to the target contact identifier.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, the electronic device receives a first input from a user in a target chat interface, where the first input is used to trigger the electronic device to identify chat records in the target chat interface; in response to the first input, displays N contact identifiers corresponding to M chat records in the target chat interface; receives a second input on a target contact identifier; and, in response to the second input, generates an image to be shared that includes the target chat record corresponding to the target contact, where the target contact is the contact corresponding to the target contact identifier. In this way, the electronic device can directly generate the image to be shared including the target chat record, so that the user does not need to perform secondary editing operations such as cropping, stitching, and masking on the captured chat records after taking a screenshot, which reduces user operations, saves the user's time, and improves the user experience.
Drawings
Fig. 1 is a schematic diagram of an image generation method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an operation of generating an image to be shared according to an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of an image generation method according to an embodiment of the present application;
fig. 4 is a first operation diagram of an electronic device determining a target area according to an embodiment of the present application;
fig. 5 is a third schematic diagram of an image generation method according to an embodiment of the present application;
fig. 6 is a second schematic view illustrating an operation of determining a target area by an electronic device according to an embodiment of the present application;
fig. 7 is a fourth schematic diagram of an image generation method according to an embodiment of the present application;
fig. 8 is a fifth schematic view of an image generation method according to an embodiment of the present application;
fig. 9 is an operation diagram of an electronic device generating an image to be shared by stitching according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application;
fig. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 12 is a second hardware schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like usually belong to one class, and the number of such objects is not limited; for example, the first object may be one object or a plurality of objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The image generation method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
The image generation method provided by the embodiment of the application can be applied to the following scenes:
Scene 1: a user needs to share part of the content of a chat record in one application with a friend in another application.
Scene 2: a user needs to crop, stitch, and mask a screenshot of a chat record before sharing the resulting image.
In the embodiments of the present application, the electronic device can receive a first input from a user in a target chat interface; in response to the first input, display N contact identifiers corresponding to M chat records in the target chat interface; receive a second input by which the user selects a target contact identifier from the N contact identifiers; and, in response to the second input, generate an image to be shared that includes the target chat record corresponding to the target contact. In this way, the electronic device can directly generate the image to be shared including the target chat record, so that the user does not need to perform secondary editing operations such as cropping, stitching, and masking on the captured chat records after taking a screenshot, which reduces user operations, saves the user's time, and improves the user experience.
As shown in fig. 1, an embodiment of the present application provides an image generation method, which may include steps 101 to 104 described below.
Step 101, the electronic device receives a first input in a target chat interface.
The first input is used for triggering the electronic equipment to identify the chat records in the target chat interface.
In this embodiment, the target chat interface refers to a chat interface of any communication application in the electronic device. Specifically, the chat interface may be a one-to-one chat interface between the user and the friend, or a group chat interface.
Optionally, in this embodiment of the application, the first input is used to trigger the electronic device to identify a chat record in the target chat interface, and display the identified N contact identifiers. A contact identification is used to indicate a contact of the target chat interface. According to the embodiment of the application, the first input can be decomposed into a plurality of sub-inputs to be executed according to actual use requirements. Specifically, the first input may include a first sub-input and a second sub-input, the first sub-input may be used to determine a target area in the target chat interface, and the second sub-input may be used to determine and display the N contact identifiers according to the chat records in the target area. Specifically, reference may be made to the following steps 105 and 106, and the following detailed description of steps 105a to 105d, which are not repeated herein.
Optionally, in this embodiment of the application, the first input may be a touch input on the target chat interface of the electronic device, or a touch input on a first control displayed in the target chat interface. The touch input may be any one of the following: a single click, a double click, a long press, a slide along a preset trajectory, etc. The first control may be a control that includes a time option, a contact option, and a keyword option. This may be determined according to actual usage requirements and is not specifically limited in the embodiments of the present application.
And 102, the electronic equipment responds to the first input and displays the N contact person identifications.
The N contact person identifications correspond to M chat records in the target chat interface, N and M are positive integers, and M is larger than or equal to N.
It should be noted that, in this embodiment of the application, each of the N contact identifiers is used to indicate one contact, and the contact indicated by the one contact identifier is a contact that sends at least one of the M chat records. That is, each of the N contacts sends at least one chat record on the target chat interface.
Optionally, in this embodiment of the application, each of the N contact identifiers is used to indicate one contact. Specifically, a contact identifier may be any one of the following: the contact's photo, the contact's name, the contact's nickname in the target chat interface, etc. This may be determined according to actual usage requirements and is not specifically limited in the embodiments of the present application.
In addition, in this embodiment of the application, the N contacts are the contacts corresponding to the M chat records. For example, suppose there are 6 (i.e., M = 6) chat records: contact A sent 3 of them, contact B sent 2, and contact C sent 1. The electronic device may then display the icons (i.e., contact identifiers) of these 3 contacts (i.e., N = 3) to indicate contact A, contact B, and contact C respectively.
It should be noted that, since each of the N contacts has sent at least one of the M chat records in the target chat interface, M is a positive integer greater than or equal to N.
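As an illustration only (not part of the claimed embodiments), the step of deriving the N contact identifiers from the M chat records can be sketched in Python roughly as follows; the record fields and function names are assumptions made for this sketch.

```python
def collect_contact_ids(chat_records):
    """Return the N distinct contact identifiers behind M chat records,
    in order of first appearance (records are dicts with a 'sender' key)."""
    contact_ids = []
    for record in chat_records:
        if record["sender"] not in contact_ids:
            contact_ids.append(record["sender"])
    return contact_ids

# M = 6 chat records sent by three contacts yield N = 3 contact identifiers.
records = [
    {"sender": "A", "text": "hi"}, {"sender": "B", "text": "hello"},
    {"sender": "A", "text": "ok"}, {"sender": "C", "text": "link"},
    {"sender": "A", "text": "done"}, {"sender": "B", "text": "thanks"},
]
assert collect_contact_ids(records) == ["A", "B", "C"]
```

In an actual implementation, the identifiers would be the avatars, names, or nicknames mentioned above rather than plain strings.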
Step 103, the electronic device receives a second input of the target contact identification.
The target contact is the contact corresponding to the target contact identifier.
That is, the electronic device receives a second input by which the user selects a target contact identifier from the N contact identifiers.
Optionally, in this embodiment of the application, the second input is used to select a target contact identifier from the N contact identifiers. That is, the second input is an input by which the user selects a target contact identification from the N contact identifications. Specifically, the second input may be a voice input of the electronic device by the user, and the content of the voice input is used for instructing the electronic device to select the target contact identifier from the N contact identifiers. The second input may also be a touch input to the target contact identifier when the electronic device displays N contact identifiers. The touch input may be any one of the following: clicking, double clicking, long pressing, sliding according to a preset track, dragging to a preset area and the like. The determination may be specifically performed according to actual use requirements, and the embodiment of the present application is not specifically limited.
And step 104, the electronic equipment responds to the second input and generates an image to be shared.
The image to be shared includes the target chat record corresponding to the target contact.
It should be noted that, in the embodiment of the present application, the image to be shared includes a target chat record corresponding to a target contact.
Optionally, in this embodiment of the application, after the electronic device generates the image to be shared, the user may trigger the electronic device to store the image to be shared to the electronic device, or store the image to be shared to another server connected to the electronic device. The detailed implementation can refer to the related art, and is not described herein.
Optionally, in this embodiment of the application, after the electronic device generates the image to be shared (i.e., step 104), the user may share the image with friends in other applications, so that the image is shared across applications. Sharing may be performed in either of the following two ways. In way 1, after the electronic device generates the image to be shared, it directly displays a sharing control on the current interface; through an input on the sharing control, the user can trigger the electronic device to send the image to the chat interface of a friend or chat group selected by the user (for example, by triggering the control to display the identifiers of candidate applications, selecting the target application from these identifiers, and then selecting the desired friend or chat group from the interface of that application), thereby completing the sharing operation. In way 2, after the electronic device generates the image to be shared, the user may trigger the electronic device to store the image in the electronic device (e.g., a local album); when the user needs to share it, the user may select the image from the electronic device (e.g., the local album) and directly perform the sharing operation. It should be noted that the specific operation processes of the saving and sharing operations are not repeated in the embodiments of the present application; reference may be made to the specific descriptions in the related art.
For example, fig. 2 is a schematic diagram of an operation of generating an image to be shared. As shown in fig. 2 (a), when the electronic device 00 displays the chat interface 001 (i.e., the target chat interface) of the "technical department group", the user may long-press the avatar of any contact in the chat interface 001, which, as shown in fig. 2 (b), triggers the electronic device to display the control 002 floating over the chat interface 001. The control 002 includes the nicknames and avatars (i.e., contact identifiers) of the contacts in the interface 001, namely Xiao Wang, Xiao Li, Xiao Liu, and Xiao Mei. The user may then single-click the contacts "Xiao Wang" and "Xiao Liu" in the control 002, and the electronic device, in response to this single-click input (i.e., the second input), takes "Xiao Wang" and "Xiao Liu" as the target contacts and performs screenshot and stitching operations on the chat records of "Xiao Wang" and "Xiao Liu" in the chat interface 001. As shown in fig. 2 (c), the electronic device may display, in the interface 003, the image 004 (i.e., the image to be shared) including the chat records of the target contacts. The user can then share the image 004 as needed.
With the image generation method provided in the embodiments of the present application, the electronic device can receive a first input in a target chat interface, where the first input is used to trigger the electronic device to identify chat records in the target chat interface; in response to the first input, display N contact identifiers corresponding to M chat records in the target chat interface; receive a second input on a target contact identifier; and, in response to the second input, generate an image to be shared that includes the target chat record corresponding to the target contact, where the target contact is the contact corresponding to the target contact identifier. In this way, the electronic device can directly generate the image to be shared including the target chat record, so that the user does not need to perform secondary editing operations such as cropping, stitching, and masking on the captured chat records after taking a screenshot, which reduces user operations, saves the user's time, and improves the user experience.
Optionally, in conjunction with fig. 1, as shown in fig. 3, the N contact identifiers are located in a target area of the target chat interface. Before "displaying N contact identifiers" in step 102, the image generation method provided in the embodiment of the present application further includes the following step 105, where step 105 may specifically be implemented by the following steps 105a and 105b, and step 102 may specifically be implemented by the following step 102a. Fig. 3 illustrates an example in which step 105 is replaced by steps 105a and 105b.
And 105, the electronic equipment determines the target area according to the first input.
It should be noted that the following embodiments take, as an example, determining a first position and a second position through the first input and then determining the target area from those positions; this does not constitute a limitation on how the target area is determined in the present application.
Step 105a, in response to the first input, the electronic device determines a first position and a second position in the target chat interface according to the first input.
Optionally, in this embodiment of the application, the first input may be decomposed into two sub-inputs, which are used to determine the first location and the second location in the target chat interface, respectively. Where one sub-input (e.g., sub-input a described below) is used to determine a first location and another sub-input (e.g., sub-input B described below) is used to determine a second location. The input mode of the sub-input may refer to the specific description of the input mode of the first input in step 101, which is not described herein again.
The specific manner of determining the first position and the second position may be any of the following manners:
Mode A: when the electronic device displays the target chat interface, the user may trigger the electronic device to enter a target-area selection interface through a first target input on a contact avatar, a nickname, or a chat record in the target chat interface. The user may then determine the first position through sub-input A on that interface and the second position through sub-input B on that interface.
Mode B: when the electronic device displays the target chat interface, the user may trigger the electronic device to display two position identifiers on the target chat interface (i.e., a first position identifier and a second position identifier, where a position identifier is used to mark the position of one chat record in the target interface) through a second target input on a contact avatar, a nickname, or a chat record in the target chat interface. The user may then drag the first position identifier to position X of the target chat interface (i.e., sub-input A) and drag the second position identifier to position Y of the target chat interface (i.e., sub-input B). The electronic device then determines the position of the first position identifier (i.e., position X) as the first position and the position of the second position identifier (i.e., position Y) as the second position.
It should be noted that the sub-input a for determining the first position and the sub-input B for determining the second position may be a specific implementation manner of the second sub-input in the following steps 105C and 105D, that is, the following second sub-input may be specifically implemented by the sub-input a and the sub-input B.
Optionally, in this embodiment of the application, after the user determines the first position through the sub-input a, the user may slide the target chat interface, and in a case that the content of the target chat interface displayed by the electronic device includes a chat record required by the user, the user may determine the second position through the sub-input B. That is, the first location and the second location may be selected from different locations on the same display interface, and the user may determine the first location and the second location according to actual usage requirements.
And step 105b, the electronic equipment determines the area between the first position and the second position as the target area.
Optionally, in this embodiment of the application, the electronic device may determine, as the target area, an area between the first location and the second location in the target interface.
For example, assuming that the first position is the position of chat log 1 and the second position is the position of chat log 2, the electronic device may determine a rectangular area between chat logs 1 and 2 in the target chat interface (i.e., the upper and lower boundaries of the rectangle are chat log 1 and chat log 2, respectively; and the left and right boundaries of the rectangle are the left and right boundaries of the display interface of the electronic device) as the target area.
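Purely as a sketch (not the claimed implementation), selecting the chat records that lie inside the rectangular target area between the two positions could look as follows in Python; the on-screen coordinates and data shapes are assumptions made for this illustration.

```python
def records_in_target_area(positioned_records, first_y, second_y):
    """positioned_records: list of (y_coordinate, record) pairs for the chat
    bubbles shown in the target chat interface. Returns the records whose
    vertical position falls between the first and second positions."""
    top, bottom = sorted((first_y, second_y))
    return [record for y, record in positioned_records if top <= y <= bottom]

# Records drawn at y = 120 and y = 300 fall inside a target area spanning y = 100..350.
rows = [(40, "record 0"), (120, "record 1"), (300, "record 2"), (420, "record 3")]
assert records_in_target_area(rows, 100, 350) == ["record 1", "record 2"]
```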
Step 102a, the electronic device responds to the first input, and displays N contact person identifications of the target area.
It should be noted that, in this embodiment of the application, after the electronic device determines the target area, the electronic device may identify the contacts and the chat records in the target area, so that the N identified contact identifiers may be displayed. Each contact in the N contact identifiers is a contact corresponding to at least one chat record in the target area, that is, each contact may correspond to at least one chat record in the target area.
For example, fig. 4 is a first operation diagram of the electronic device determining the target area. As shown in fig. 2 (a), the electronic device 00 displays the chat interface 001 (i.e., the target chat interface) of the "technical department group". As shown in fig. 4 (a), the user may firmly press a blank area of the chat interface 001 to trigger the electronic device to display two boundary lines floating over the chat interface 001, and the user may manually drag the first boundary line 0051 to a first position and the second boundary line 0052 to a second position. The electronic device can determine the area of the chat interface 001 between the first position 0051 and the second position 0052 as the target area. After the user previews the target area and confirms it, the user can trigger the electronic device, through a trigger operation on the target area, to identify the chat records in the target area; as shown in fig. 4 (b), in response to that trigger operation the electronic device displays the control 002 including the contacts "Xiao Wang", "Xiao Li", and "Xiao Liu". The user may then click the contacts "Xiao Wang" and "Xiao Liu" in the control 002, and the electronic device, in response to this click input (i.e., the second input), performs screenshot and stitching operations on the chat records of "Xiao Wang" and "Xiao Liu" in the chat interface 001. As shown in fig. 2 (c), the electronic device may display, in the interface 003, the image 004 (i.e., the image to be shared) including the chat records of the target contacts. The user can then share the image 004 as needed.
It can be understood that the electronic device may determine the first position and the second position in the target chat interface according to the first input, and then determine the area between the first position and the second position as the target area. The electronic device can thus, in response to the first input, identify the chat records in the target area, determine the N contacts, and display their identifiers. This makes it convenient for the user to operate directly on the contact identifiers, for example selecting the target contact, so as to generate an image to be shared that includes the target contact's chat records.
Optionally, in conjunction with fig. 3, as shown in fig. 5, the first input includes a first sub-input and a second sub-input. Step 105 may be specifically realized by the following steps 105A to 105D. In fig. 5, steps 105a and 105b of fig. 3 are replaced by steps 105A to 105D.
Step 105A, receiving a first sub-input.
The first sub-input can be input to a target chat interface of the electronic device.
Optionally, in this embodiment of the application, the first sub-input is used to trigger the electronic device to display at least one of a time option, a contact option, and a keyword option.
In addition, in this embodiment of the application, the first sub-input may be a touch input to the target chat interface. For a specific input manner, reference may be made to the specific description of the first input in step 101, which is not described herein again.
And step 105B, the electronic equipment responds to the first sub-input and displays at least one of a time option, a contact option and a keyword option in the target chat interface.
It should be noted that, in the embodiment of the present application, the display manner of the time option, the contact option, and the keyword option is not particularly limited and may be determined according to actual usage requirements. Specifically, the three options may each be displayed in a floating manner, or a single control may be displayed in a floating manner. That control may present all three options together (i.e., the control may include the time option, the contact option, and the keyword option); alternatively, three controls may each be displayed in a floating manner, with each control corresponding to one of the three options.
And step 105C, the electronic equipment receives a second sub-input of the target option.
The target option is at least one of a time option, a contact option and a keyword option displayed in the target chat interface.
It should be noted that, in this embodiment of the application, the target option is at least one of a time option, a contact option and a keyword option displayed in the target chat interface, that is, the user may operate at least one of the time option, the contact option and the keyword option through the second sub-input to quickly filter the chat records. The following embodiments exemplify operations of the time option, the contact option, and the keyword option, respectively.
Optionally, in this embodiment of the application, the electronic device may filter the chat records according to the user's input on the time option among the target options. Specifically, the electronic device obtains the time interval determined through the time option (i.e., the second sub-input may be used to determine the time interval, for example by entering two times), determines, from the M chat records, the chat records whose sending time falls within that interval, and highlights them. That is, the sending or receiving time of each of the at least one chat record is within the duration indicated by the target option.
Optionally, in this embodiment of the application, the electronic device may filter the chat records according to the user's input on the contact option among the target options. Specifically, the electronic device obtains the contact selected by the user from the contact option, searches the M chat records for the chat records sent by that contact, and highlights them. That is, the electronic device screens out, from the M chat records, the chat records sent by the contact selected by the user in the contact option.
Optionally, in this embodiment of the application, the electronic device may filter the chat records according to the user's input on the keyword option among the target options. Specifically, the electronic device obtains a preset keyword entered in the keyword option and searches the content of the M chat records for that keyword. All chat content containing the preset keyword is then highlighted. That is, the electronic device screens out, from the M chat records, the chat records that contain the keyword.
The highlighting may be done in any of the following ways: displaying the screened chat records in a different color, displaying them in a different font, displaying them with a flashing effect, and so on. In addition, in actual use, at least one of the above three screening methods may be used for screening.
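As an illustration only (not the claimed implementation), the three screening options above can be sketched in Python as follows; the record fields and the function signature are assumptions made for this sketch.

```python
def filter_records(chat_records, time_interval=None, contacts=None, keyword=None):
    """Screen M chat records by any combination of the three target options:
    a (start, end) time interval, a set of contact identifiers, and a keyword.
    Records are dicts with 'sender', 'text', and 'send_time' keys."""
    result = chat_records
    if time_interval is not None:
        start, end = time_interval
        result = [r for r in result if start <= r["send_time"] <= end]
    if contacts is not None:
        result = [r for r in result if r["sender"] in contacts]
    if keyword is not None:
        result = [r for r in result if keyword in r["text"]]
    return result  # in the interface, these records would be highlighted
```

The first and last records of the returned list would then supply the first and second positions used in step 105D to delimit the target area.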
Step 105D, the electronic equipment responds to the second sub-input, and at least one chat record corresponding to the target option is determined in the target chat interface; and determining the target area according to the position of the at least one chat record.
Optionally, in this embodiment of the application, the determining the target area according to the position of the at least one chat record includes: the electronic equipment determines the position of a first chat record in the at least one chat record as a first position; and determining the position of the last chat record in the at least one chat record as a second position. Then, the electronic device determines an area between the first location and the second location as a target area.
It should be noted that, in this embodiment of the application, a position where a first chat record in the at least one chat record is located, that is, a start position of a chat record in the at least one chat record, also refers to a start position of a chat record in the at least one chat record filtered according to the target option. Similarly, the position of the last chat record in the at least one chat record, that is, the end position of the chat record in the at least one chat record, also refers to the end position of the chat record in the at least one chat record screened according to the target option.
Optionally, in this embodiment of the application, the user may further perform fine adjustment on the determined first position and the second position. Specifically, the user may trigger the electronic device to update and display the first location by inputting a third target for the first location (e.g., dragging an indicator of the first location); the user may also trigger the electronic device to update the display of the second location by a fourth target input to the second location (e.g., dragging an indicator of the second location). In this way, the electronic device may determine the updated area between the first location and the second location as the target area.
It should be noted that, in this embodiment of the application, when the first location and the second location determined through the above steps 105A to 105D do not meet the user requirement, the user may trigger the electronic device to repeatedly execute the above steps 105A to 105D to re-determine the first location and the second location.
Fig. 6 is a second operation diagram of the electronic device determining the target area. As shown in fig. 2 (a), when the electronic device 00 displays the chat interface 001 (i.e., the target chat interface) of the "technical department group", the user may double-click a blank area of the chat interface 001 to trigger the electronic device to display the control 005 floating over the chat interface 001, as shown in fig. 6 (a). The user may then edit the time screening option in the control 005 through a first input, for example setting the start time to 10:00 today and the end time to 14:00 today. As shown in fig. 6 (b), the electronic device can determine a first position 0051 and a second position 0052 in response to the first input, where the first position 0051 is the position of the first chat record after the start time 10:00 and the second position 0052 is the position of the last chat record before the end time 14:00. The electronic device determines the area of the chat interface 001 between the first position 0051 and the second position 0052 as the target area. After the user previews the target area and confirms it, the user can trigger the electronic device, through a trigger operation on the target area, to identify the chat records in the target area; as shown in fig. 6 (c), in response to that trigger operation the electronic device displays the control 002 including the contacts "Xiao Wang", "Xiao Li", and "Xiao Liu". The user may then click the contacts "Xiao Wang" and "Xiao Liu" in the control 002, and the electronic device, in response to this click input (i.e., the second input), performs screenshot and stitching operations on the chat records of "Xiao Wang" and "Xiao Liu" in the chat interface 001. As shown in fig. 2 (c), the electronic device may display, in the interface 003, the image 004 (i.e., the image to be shared) including the chat records of the target contacts. The user can then share the image 004 as needed.
It can be understood that the user may trigger the electronic device to display at least one of the time option, the contact option, and the keyword option through the first sub-input, and may, through a second sub-input on at least one of these three options (i.e., the target option), trigger the electronic device to determine the position of the first screened chat record as the first position and the position of the last screened chat record as the second position. The electronic device can thus conveniently determine the target area from the first position and the second position, and display the identifiers of the N contacts in the target area, which facilitates subsequent user operations.
Alternatively, referring to fig. 1, as shown in fig. 7, step 104 may be specifically realized by the following steps 104a and 104b.
And 104a, the electronic equipment responds to the second input, and determines a target chat record corresponding to the target contact from the M chat records.
Optionally, in this embodiment of the application, the manner of determining the target chat record corresponding to the target contact may be any of the following manners: in a first way, the electronic device may obtain the target contact (i.e., the contact indicated by the target contact identifier), and determine, from the M chat records, that the chat record sent by the target contact is the target chat record. And secondly, the electronic equipment respectively compares the target contact person identification selected by the user with the contact person identifications corresponding to the M chat records, and determines the chat records corresponding to all identifications identical to the target contact person identification as the target chat records. The determination may be specifically performed according to actual use requirements, and the embodiment of the present application is not specifically limited.
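By way of illustration only (under the same assumed record shape as the earlier sketches, not the claimed implementation), the first way of determining the target chat records could look roughly like this:

```python
def select_target_records(chat_records, target_contact_ids):
    """Keep only the chat records whose sender is one of the selected target
    contacts (records are dicts with 'sender' and 'text' keys)."""
    return [r for r in chat_records if r["sender"] in target_contact_ids]

records = [{"sender": "Xiao Wang", "text": "hi"},
           {"sender": "Xiao Li", "text": "ok"},
           {"sender": "Xiao Liu", "text": "done"}]
selected = select_target_records(records, {"Xiao Wang", "Xiao Liu"})
assert [r["text"] for r in selected] == ["hi", "done"]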
Optionally, in this embodiment of the application, after the electronic device determines the target chat record in response to the second input, the electronic device may display a target preview interface including the M chat records, so that the user can preview and confirm them. The target chat records may display a first identifier (such as a check mark) indicating that they are target chat records and will be shown when the image to be shared is generated, while the chat records other than the target chat records among the M chat records display a second identifier (such as an unchecked mark) indicating that they are non-target chat records and will be hidden when the image to be shared is generated, so that the target chat records determined by the user can be distinguished.
Optionally, in this embodiment of the application, if there are multiple consecutive chat records displaying the second identifier, the electronic device may display a hidden line to temporarily collapse those chat records. If the user needs to view the hidden content, the user can perform a touch input on the hidden line to trigger the electronic device to display all the chat records with the second identifier at that position.
Optionally, in this embodiment of the application, when the target chat records display the first identifier and the other chat records among the M chat records display the second identifier, the user may add chat records to, or remove chat records from, the determined target chat records. Specifically, when G target chat records have been determined among the M chat records (G being a positive integer less than or equal to M), the user may select J additional chat records from the remaining (M - G) chat records to add to the target chat records (J being a positive integer greater than or equal to 1), and may deselect Q chat records from the G target chat records (Q being a positive integer greater than or equal to 1 and less than or equal to G), so that the number of finally determined target chat records is (G + J - Q); the electronic device then updates the target chat records accordingly. A chat record can be added by switching its second identifier to the first identifier, and removed by switching its first identifier to the second identifier. For the specific switching manner, reference may be made to the description of checking and unchecking in the related art, which is not repeated here.
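A minimal sketch of this selection adjustment (assuming the preview state is just a list of per-record flags, True for the first identifier and False for the second; the names are illustrative only) is given below.

```python
def toggle_record(selected_flags, index):
    """Switch one chat record between the first identifier (True: shown when the
    image to be shared is generated) and the second identifier (False: hidden)."""
    selected_flags[index] = not selected_flags[index]

# G = 3 of M = 5 records are initially selected; the user adds J = 1 and
# removes Q = 1, leaving G + J - Q = 3 target chat records.
flags = [True, True, False, False, True]
toggle_record(flags, 2)   # add a record (second identifier -> first identifier)
toggle_record(flags, 0)   # remove a record (first identifier -> second identifier)
assert sum(flags) == 3 + 1 - 1
```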
And step 104b, the electronic equipment generates an image to be shared according to the target chat record.
Optionally, in this embodiment of the application, the electronic device may generate the image to be shared from the target chat records in either of the following ways. In the first way, the electronic device displays all chat records carrying the first identifier in the target preview interface (for a detailed description of the target preview interface, see the related description in step 104a above) and hides all chat records carrying the second identifier, thereby generating the image to be shared. In the second way, the electronic device may crop an image of each target chat record and stitch the images according to the sending time of each chat record, thereby generating the image to be shared. This may be determined according to actual usage requirements and is not specifically limited in the embodiments of the present application. It should be noted that the following embodiments (i.e., step 104b1 and step 104b2) take the second way as an example.
It should be noted that, in the process in which the electronic device operates on the target chat records to obtain the image to be shared, the electronic device mainly generates an image whose content the user can understand directly and visually. For example, a voice message in the chat records may be automatically recognized and converted into text, and for a video in the chat records the first frame image of the video may be stored.
It can be appreciated that the electronic device can determine, in response to the second input, the target chat record corresponding to the target contact from the M chat records, and generate the image to be shared according to the target chat record. In this way, the electronic device can directly generate the image to be shared including the target chat record, which avoids secondary editing by the user, reduces user operations, makes the electronic device more convenient to use, and improves the user experience.
Optionally, with reference to fig. 7, as shown in fig. 8, the target chat record includes S chat records, where S is an integer greater than or equal to 2. Step 104b may be realized by the following steps 104b1 and 104b2.
Step 104b1: the electronic device performs a cropping operation on the target chat interface to obtain S images to be stitched.
Each of the S images to be stitched includes one chat record among the S chat records.
In the embodiment of the present application, S is a positive integer less than or equal to M.
Optionally, in this embodiment of the application, when the target chat records include S chat records, the electronic device may perform a cropping operation on the S chat records to obtain S images to be stitched. Each image to be stitched in the S images to be stitched indicates a chat record.
Optionally, in this embodiment of the application, the specific process by which the electronic device crops the S chat records may include the following steps. Step A: the electronic device determines a region to be cropped according to the position of one chat record among the target chat records, where the region to be cropped completely envelops all the content of that chat record. Step B: the electronic device crops out the image to be stitched according to the region to be cropped. Step C: the electronic device repeats the above steps (i.e., step A and step B) S times to obtain the S images to be stitched. It should be noted that, in actual operation, the S regions to be cropped may be determined directly when step A is performed, and the S images to be stitched are then obtained by cropping the S regions in step B. This may be determined according to actual usage requirements and is not specifically limited in the embodiments of the present application.
Optionally, in this embodiment of the application, when the electronic device cuts out S chat records to obtain S images to be stitched, sending time of each chat record may be respectively recorded, and the sending time may be used as a basis for a stitching sequence of the images to be stitched corresponding to the chat record.
Step 104b2: the electronic device performs a stitching operation on the S images to be stitched to generate the image to be shared.
Optionally, in this embodiment of the application, the electronic device may crop the S chat records to obtain the S images to be stitched and then stitch them in order of the sending times of the S chat records, thereby generating the image to be shared.
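As a rough sketch of the crop-and-stitch step (assuming the Pillow imaging library and pixel bounding boxes for each chat record as inputs; this is an illustration, not the claimed implementation):

```python
from PIL import Image  # Pillow is assumed available for this sketch

def crop_and_stitch(screenshot_path, record_boxes, output_path):
    """Crop each target chat record out of a full screenshot and stitch the crops
    vertically, ordered by send time. record_boxes is a list of
    (send_time, (left, upper, right, lower)) tuples in pixel coordinates."""
    screenshot = Image.open(screenshot_path)
    crops = [screenshot.crop(box) for _, box in sorted(record_boxes)]
    width = max(crop.width for crop in crops)
    height = sum(crop.height for crop in crops)
    stitched = Image.new("RGB", (width, height), "white")
    y = 0
    for crop in crops:
        stitched.paste(crop, (0, y))
        y += crop.height
    stitched.save(output_path)  # the image to be shared
```

A real device would, of course, crop directly from its rendered chat view rather than from a saved screenshot file.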
Fig. 9 is an operation diagram of an electronic device generating an image to be shared by stitching. As shown in fig. 9 (a), when the user operates the electronic device so that it displays the control 002 including the contacts "Xiao Wang", "Xiao Li", and "Xiao Liu", the user may click the control 002 to select the contact "Xiao Wang". In response to this click input (i.e., the second input), the electronic device takes "Xiao Wang" as the target contact and, as shown in fig. 9 (b), captures screenshots of "Xiao Wang"'s chat records in the chat interface 001 to obtain a screenshot 006 of the first chat record and a screenshot 007 of the second chat record. Then, as shown in fig. 9 (c), the electronic device may stitch the screenshot 006 and the screenshot 007 to obtain an image 008 (i.e., the image to be shared). The electronic device may display the image 008 in the interface 003.
It can be understood that the electronic device can crop the target chat records out of the target interface and stitch the cropped images to generate the image to be shared. In this way, the electronic device can directly crop and stitch the target chat records according to the user's needs and directly generate the image to be shared, which avoids secondary editing by the user, reduces user operations, makes the electronic device more convenient to use, and improves the user experience.
In the image generation method provided in the embodiment of the present application, the execution subject may be an image generation apparatus, or a control module in the image generation apparatus for executing the image generation method. In the embodiment of the present application, an image generation apparatus executing an image generation method is taken as an example, and the apparatus provided in the embodiment of the present application is described.
As shown in fig. 10, an embodiment of the present application provides an image generation apparatus 1000. The image generation apparatus 1000 may include a receiving module 1001, a display module 1002, and a processing module 1003. The receiving module 1001 may be configured to receive a first input in the target chat interface, where the first input is used to trigger the electronic device to identify a chat log in the target chat interface. The display module 1002 may be configured to display, in response to the first input received by the receiving module 1001, N contact identifiers, where the N contact identifiers correspond to M chat records in the target chat interface, N and M are positive integers, and M is greater than or equal to N. The receiving module 1001 may be further configured to receive a second input of the target contact identification. The processing module 1003 may be configured to generate an image to be shared in response to the second input received by the receiving module 1001, where the image to be shared includes a target chat record corresponding to the target contact. And the target contact person identifies a corresponding contact person for the target contact person.
Optionally, in this embodiment of the application, the N contact identifiers are located in a target area in the target chat interface. The image generation apparatus 1000 may also include a determination module 1004. The determining module 1004 may be configured to determine the target area according to the first input before displaying the N contact identifiers. The display module 1002 may be specifically configured to display the N contact identifiers of the target area in response to the first input.
Optionally, in this embodiment of the application, the first input includes a first sub-input and a second sub-input. The receiving module 1001 is further configured to receive a first sub input. The display module 1002 is further configured to display at least one of a time option, a contact option, and a keyword option in the target chat interface in response to the first sub-input received by the receiving module 1001. The receiving module 1001 is further configured to receive a second sub-input of a target option, where the target option is at least one of a time option, a contact option, and a keyword option. The processing module 1003 is further configured to, in response to the second sub-input, determine, in the target chat interface, at least one chat record corresponding to the target option, and determine the target area according to a position of the at least one chat record.
Optionally, in this embodiment of the application, the processing module 1003 may be specifically configured to determine, in response to the second input, a target chat record corresponding to the target contact from the M chat records; and generating an image to be shared according to the target chat record.
Optionally, in this embodiment of the present application, the target chat record includes S chat records, where S is an integer greater than or equal to 2. The processing module 1003 may be specifically configured to perform a cropping operation on the target chat interface to obtain S images to be stitched, where each image to be stitched includes one of the S chat records, and to perform a stitching operation on the S images to be stitched to generate the image to be shared.
The image generating apparatus in the embodiment of the present application may be a functional entity and/or a functional module in an electronic device, which executes the image generating method, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image generation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image generation apparatus provided in the embodiment of the present application can implement each process implemented by the image generation apparatus in the method embodiments of fig. 1 to 9, and is not described herein again to avoid repetition.
The image generation apparatus provided by the embodiment of the application can receive a first input in a target chat interface, where the first input is used to trigger the electronic device to identify the chat records in the target chat interface; display, in response to the first input, N contact identifiers corresponding to M chat records in the target chat interface; receive a second input of a target contact identifier; and generate, in response to the second input, an image to be shared including the target chat record corresponding to the target contact, where the target contact is the contact corresponding to the target contact identifier. In this way, the image generation apparatus can directly generate the image to be shared including the target chat record, so the user no longer needs to capture a screenshot and then perform secondary editing operations such as cropping, stitching, and masking on the chat records, which reduces user operations, saves user time, and improves the user experience.
Optionally, as shown in fig. 11, an electronic device 1100 is further provided in an embodiment of the present application, and includes a processor 1101, a memory 1102, and a program or an instruction stored in the memory 1102 and executable on the processor 1101, where the program or the instruction is executed by the processor 1101 to implement each process of the above-mentioned embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 2000 includes, but is not limited to: a radio frequency unit 2001, a network module 2002, an audio output unit 2003, an input unit 2004, a sensor 2005, a display unit 2006, a user input unit 2007, an interface unit 2008, a memory 2009, and a processor 2010.
Among other things, the input unit 2004 may include a graphics processor 20041 and a microphone 20042, the display unit 2006 may include a display panel 20061, the user input unit 2007 may include a touch panel 20071 and other input devices 20072, and the memory 2009 may be used to store software programs (e.g., an operating system and an application program required for at least one function) and various data.
Those skilled in the art will appreciate that the electronic device 2000 may further include a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 2010 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described in detail here.
The user input unit 2007 may be configured to receive a first input in the target chat interface, where the first input is used to trigger the electronic device to identify the chat records in the target chat interface. The display unit 2006 may be configured to display N contact identifiers in response to the first input received by the user input unit 2007, where the N contact identifiers correspond to M chat records in the target chat interface, N and M are positive integers, and M is greater than or equal to N. The user input unit 2007 may be further configured to receive a second input of a target contact identifier. The processor 2010 may be configured to generate, in response to the second input received by the user input unit 2007, an image to be shared including a target chat record corresponding to the target contact, where the target contact is the contact corresponding to the target contact identifier.
The electronic device provided by the embodiment of the application can receive a first input in the target chat interface, where the first input is used to trigger the electronic device to identify the chat records in the target chat interface; display, in response to the first input, N contact identifiers corresponding to M chat records in the target chat interface; receive a second input of a target contact identifier; and generate, in response to the second input, an image to be shared including the target chat record corresponding to the target contact, where the target contact is the contact corresponding to the target contact identifier. In this way, the electronic device can directly generate the image to be shared including the target chat record, so the user no longer needs to capture a screenshot and then perform secondary editing operations such as cropping, stitching, and masking on the chat records, which reduces user operations, saves user time, and improves the user experience.
Optionally, in this embodiment of the application, the N contact identifiers are located in a target area in the target chat interface. The processor 2010 may be further configured to determine the target area according to the first input before the N contact identifiers are displayed. The display unit 2006 is specifically configured to display the N contact identifiers in the target area in response to the first input.
It can be understood that the electronic device may determine a first position and a second position in the target chat interface according to the first input, and determine the area between the first position and the second position as the target area. In response to the first input, the electronic device can then recognize the chat records in the target area, determine the N contacts, and display their identifiers. This allows the user to operate directly on a contact identifier, for example to select the target contact, so as to generate the image to be shared that includes the chat records of the target contact.
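Purely as an illustration of how two positions taken from the first input could bound the target area, the hypothetical Kotlin sketch below treats the two positions as vertical offsets within the chat interface and keeps only the chat records that fall between them; the types and names are assumptions, not part of the embodiments.

```kotlin
// Hypothetical sketch: the first input supplies two positions (treated as
// vertical offsets); the target area is the span between them, and only
// chat records lying inside that span are recognized.
data class ChatRecord(val contactId: String, val text: String, val top: Int, val bottom: Int)

fun targetArea(firstPosition: Int, secondPosition: Int): IntRange =
    minOf(firstPosition, secondPosition)..maxOf(firstPosition, secondPosition)

fun recordsInTargetArea(records: List<ChatRecord>, area: IntRange): List<ChatRecord> =
    records.filter { it.top >= area.first && it.bottom <= area.last }
```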
Optionally, in this embodiment of the application, the first input includes a first sub-input and a second sub-input. The user input unit 2007 is also used to receive the first sub-input. The display unit 2006 is further configured to display at least one of a time option, a contact option and a keyword option in the target chat interface in response to the first sub-input received by the user input unit 2007. The user input unit 2007 is further configured to receive a second sub-input of a target option, wherein the target option is at least one of a time option, a contact option and a keyword option. The processor 2010 is further configured to determine, in response to the second sub-input, at least one chat record corresponding to the target option in the target chat interface, and determine the target area according to a position of the at least one chat record.
It can be understood that the first sub-input triggers the electronic device to display at least one of the time option, the contact option, and the keyword option, and that the second sub-input on at least one of these options (i.e., the target option) triggers the electronic device to take the position of the first record among the screened chat records as the first position and the position of the last record as the second position. The electronic device can thus conveniently determine the target area according to the first position and the second position and display the identifiers of the N contacts in the target area, which facilitates subsequent user operations.
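A possible shape for this option-based screening is sketched below in Kotlin. The TargetOption type, its three variants, the determineTargetArea function, and the integer record bounds are illustrative assumptions; a real implementation would work with the on-screen views of the chat records rather than plain integers.

```kotlin
import java.time.LocalDateTime

// Hypothetical sketch: screen the chat records with a target option
// (time range, contact, or keyword) and derive the target area from the
// positions of the first and last matching records.
data class ChatRecord(
    val contactId: String,
    val text: String,
    val time: LocalDateTime,
    val top: Int,
    val bottom: Int
)

sealed interface TargetOption {
    data class TimeRange(val from: LocalDateTime, val to: LocalDateTime) : TargetOption
    data class Contact(val contactId: String) : TargetOption
    data class Keyword(val keyword: String) : TargetOption
}

data class TargetArea(val top: Int, val bottom: Int)

fun determineTargetArea(records: List<ChatRecord>, option: TargetOption): TargetArea? {
    val matched = records.filter { record ->
        when (option) {
            is TargetOption.TimeRange -> record.time in option.from..option.to
            is TargetOption.Contact -> record.contactId == option.contactId
            is TargetOption.Keyword -> record.text.contains(option.keyword)
        }
    }
    if (matched.isEmpty()) return null
    // The first matched record gives the first position, the last one the second.
    return TargetArea(matched.first().top, matched.last().bottom)
}
```

Returning null when nothing matches leaves it to the caller to decide whether to fall back to the whole interface or prompt the user again; the embodiments do not prescribe this behavior.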
Optionally, in this embodiment of the application, the processor 2010 may be specifically configured to determine, in response to the second input, a target chat record corresponding to the target contact from the M chat records; and generating an image to be shared according to the target chat record.
It can be appreciated that the electronic device can determine, in response to the second input, the target chat record corresponding to the target contact from the M chat records, and generate the image to be shared according to the target chat record. Therefore, the electronic device can directly generate the image to be shared including the target chat record, which avoids secondary editing by the user, reduces user operations, makes the electronic device more convenient to use, and improves the user experience.
Optionally, in this embodiment of the present application, the target chat records include S chat records, where S is an integer greater than or equal to 2. The processor 2010 is specifically configured to perform a cropping operation on the target chat interface to obtain S images to be stitched, where each image to be stitched includes one of the S chat records, and to perform a stitching operation on the S images to be stitched to generate the image to be shared.
It can be understood that the electronic device can crop the target chat records from the target chat interface and stitch the cropped pieces together to generate the image to be shared. Therefore, the electronic device can crop and stitch the target chat records directly according to the user's needs and generate the image to be shared, which avoids secondary editing by the user, reduces user operations, makes the electronic device more convenient to use, and improves the user experience.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image generation method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip (SoC), etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. An image generation method, characterized in that the method comprises:
receiving a first input in a target chat interface, wherein the first input is used for triggering an electronic device to identify a chat record in the target chat interface;
responding to the first input, displaying N contact person identifications, wherein the N contact person identifications correspond to M chat records in the target chat interface, N and M are positive integers, and M is larger than or equal to N;
receiving a second input of a target contact person identification;
responding to the second input, and generating an image to be shared, wherein the image to be shared comprises a target chat record corresponding to the target contact;
and the target contact is a contact corresponding to the target contact person identification.
2. The method of claim 1, wherein the N contact person identifications are located in a target area in the target chat interface;
before displaying the N contact person identifications, the method further comprises:
determining the target area according to the first input;
the displaying of the N contact person identifications comprises:
displaying the N contact person identifications in the target area.
3. The method of claim 2, wherein the first input comprises a first sub-input and a second sub-input;
said determining said target region according to said first input comprises:
receiving the first sub-input;
displaying at least one of a time option, a contact option, and a keyword option in the target chat interface in response to the first sub-input;
receiving the second sub-input of a target option, wherein the target option is at least one of the time option, the contact option and the keyword option;
and responding to the second sub-input, determining at least one chat record corresponding to the target option in the target chat interface, and determining the target area according to the position of the at least one chat record.
4. The method of claim 1, wherein generating the image to be shared in response to the second input comprises:
in response to the second input, determining a target chat record corresponding to the target contact from the M chat records;
and generating the image to be shared according to the target chat record.
5. The method of claim 4, wherein the target chat records include S chat records, S being an integer greater than or equal to 2;
the generating the image to be shared according to the target chat record comprises:
performing a cropping operation on the target chat interface to obtain S images to be stitched, wherein each image to be stitched comprises one chat record of the S chat records;
and performing a stitching operation on the S images to be stitched to generate the image to be shared.
6. An image generation apparatus, characterized in that the apparatus comprises: the device comprises a receiving module, a display module and a processing module;
the receiving module is used for receiving a first input in the target chat interface, wherein the first input is used for triggering the electronic equipment to identify the chat records in the target chat interface;
the display module is configured to display N contact identifiers in response to the first input received by the receiving module, where the N contact identifiers correspond to M chat records in the target chat interface, N and M are positive integers, and M is greater than or equal to N;
the receiving module is further used for receiving a second input of a target contact identifier;
the processing module is used for responding to the second input received by the receiving module, and generating an image to be shared, wherein the image to be shared comprises a target chat record corresponding to the target contact;
and the target contact is a contact corresponding to the target contact identifier.
7. The apparatus of claim 6, wherein the N contact identifiers are located in a target area in the target chat interface; the apparatus further comprises a determination module;
the determination module is used for determining the target area according to the first input before the N contact identifiers are displayed.
The display module is specifically configured to display the N contact identifiers in the target area in response to the first input.
8. The apparatus of claim 7, wherein the first input comprises a first sub-input and a second sub-input;
the receiving module is further used for receiving the first sub-input;
the display module is further configured to display at least one of a time option, a contact option and a keyword option in the target chat interface in response to the first sub-input received by the receiving module;
the receiving module is further configured to receive a second sub-input of a target option, where the target option is at least one of the time option, the contact option, and the keyword option;
and the processing module is further used for responding to the second sub-input, determining at least one chat record corresponding to the target option in the target chat interface, and determining the target area according to the position of the at least one chat record.
9. The apparatus according to claim 6, wherein the processing module is specifically configured to determine, in response to the second input, a target chat record corresponding to the target contact from the M chat records; and generating the image to be shared according to the target chat record.
10. The apparatus of claim 9, wherein the target chat log comprises S chat logs, S being an integer greater than or equal to 2;
the processing module is specifically used for performing a cropping operation on the target chat interface to obtain S images to be stitched, wherein each image to be stitched comprises one chat record of the S chat records; and performing a stitching operation on the S images to be stitched to generate the image to be shared.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the image generation method of any of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image generation method according to any one of claims 1 to 5.