CN110618851A - Information operation method and mobile terminal - Google Patents

Information operation method and mobile terminal

Info

Publication number
CN110618851A
Authority
CN
China
Prior art keywords
image
target
input
contact
display area
Prior art date
Legal status
Pending
Application number
CN201910818400.6A
Other languages
Chinese (zh)
Inventor
周芳香
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910818400.6A
Publication of CN110618851A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Abstract

The invention discloses an information operation method and a mobile terminal. The method includes: receiving a first input for a first display area; displaying an operation interface in response to the first input, where a target image displayed in the first display area is in an editing state in the operation interface; receiving a second input for the operation interface; and establishing an association between a target contact and a target avatar in response to the second input, where the target contact is one of at least one contact displayed in a second display area and the target avatar is part or all of the target image. The invention makes it convenient for the user to set a contact avatar and avoids switching among multiple operation interfaces, thereby simplifying the operation process.

Description

Information operation method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information operation method and a mobile terminal.
Background
To facilitate communication, users typically store contact information in advance. When storing contact information, an avatar can be set for the corresponding contact, providing a personalized and more engaging experience. Avatars are generally set in one of two ways. First, when creating or editing a contact, the user taps the avatar to choose "select photo from album" or "take photo"; after obtaining the original image for the avatar, the user moves, zooms in, or zooms out in an image-cropping interface to obtain the cropped avatar. Second, in an image-viewing interface, the user taps a "set contact photo" button to enter a contact list interface, selects a contact, and then moves, zooms in, or zooms out in an image-cropping interface to obtain the cropped photo. Both approaches require switching among multiple operation interfaces; the process is tedious and inconvenient for viewing images or contacts.
Disclosure of Invention
The invention provides an information operation method and a mobile terminal, aiming to solve the problem that existing contact-photo setting methods require switching among multiple operation interfaces, which makes the operation process tedious.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information operation method, applied to a mobile terminal, including:
receiving a first input for a first display area;
displaying an operation interface in response to the first input, where a target image displayed in the first display area is in an editing state in the operation interface;
receiving a second input for the operation interface;
establishing an association between a target contact and a target avatar in response to the second input, where the target contact is one of at least one contact displayed in a second display area, and the target avatar is part or all of the target image.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
the first receiving module is used for receiving a first input for a first display area;
the first response module is used for displaying an operation interface in response to the first input; the target image displayed in the first display area is in an editing state in the operation interface;
the second receiving module is used for receiving a second input for the operation interface;
the second response module is used for establishing an association between a target contact and a target avatar in response to the second input; the target contact is one of at least one contact displayed in a second display area, and the target avatar is part or all of the target image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the information operation method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the information operation method described above.
In the embodiments of the invention, an operation interface is displayed in response to a first input for a first display area, with the target image displayed in the first display area shown in an editing state in the operation interface; an association between a target contact and a target avatar is then established in response to a second input for the operation interface, where the target contact is one of at least one contact displayed in a second display area and the target avatar is part or all of the target image. Because the operation interface is displayed in the first display area while the contacts are displayed in the second display area, the user can conveniently set a contact avatar without switching among multiple operation interfaces, which simplifies the operation process.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a first flowchart of an information operation method according to an embodiment of the present invention;
FIG. 2 is a schematic view of an operation interface according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an operation interface and a contact list according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating displaying an operation interface according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a selection box according to an embodiment of the invention;
FIG. 6 is a second flowchart of the information operation method according to an embodiment of the invention;
FIG. 7 is a third flowchart of the information operation method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a fourth input and a fifth input of an embodiment of the present invention;
FIG. 9 shows a schematic diagram of sixth and seventh inputs of an embodiment of the present invention;
FIG. 10 is a schematic diagram of an eighth input of an embodiment of the present invention;
FIG. 11 shows a block diagram of a mobile terminal of an embodiment of the invention;
fig. 12 is a schematic diagram of a hardware configuration of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an information operation method, applied to a mobile terminal, including:
step 11: a first input is received for a first display region.
Optionally, the first input may be a sliding input, a tap input, a long-press input whose press duration exceeds a preset threshold, a pressure-press input whose pressure value exceeds a preset threshold, or the like, performed by the user in the first display area. For example, the first input may be a three-finger downward swipe performed by the user in the first display area.
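As a concrete, non-authoritative sketch of how the three-finger downward swipe mentioned above might be recognized on an Android-style mobile terminal (the threshold value and callback name are assumptions, not part of the claimed method):

```kotlin
import android.view.MotionEvent
import android.view.View

// Minimal sketch: detect a three-finger downward swipe on the first display area.
// swipeThresholdPx and onThreeFingerSwipeDown are illustrative assumptions.
class ThreeFingerSwipeListener(
    private val swipeThresholdPx: Float = 150f,
    private val onThreeFingerSwipeDown: () -> Unit
) : View.OnTouchListener {

    private var startY = 0f
    private var tracking = false

    override fun onTouch(v: View, event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_POINTER_DOWN ->
                // Start tracking once the third finger lands on the screen.
                if (event.pointerCount == 3) {
                    startY = averageY(event)
                    tracking = true
                }
            MotionEvent.ACTION_MOVE ->
                if (tracking && event.pointerCount == 3 &&
                    averageY(event) - startY > swipeThresholdPx
                ) {
                    tracking = false
                    onThreeFingerSwipeDown()   // e.g. show the avatar-editing operation interface
                }
            MotionEvent.ACTION_UP, MotionEvent.ACTION_CANCEL -> tracking = false
        }
        // Consumes all touches for simplicity; a real implementation would be more selective.
        return true
    }

    private fun averageY(event: MotionEvent): Float =
        (0 until event.pointerCount).sumOf { event.getY(it).toDouble() }.toFloat() / event.pointerCount
}
```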
Step 12: displaying an operation interface in response to the first input, where the target image displayed in the first display area is in an editing state in the operation interface.
Specifically, the operation interface for editing a contact avatar can be entered from any interface in the first display area through a three-finger downward swipe, as shown in fig. 2.
For example, the target image may be displayed in a "cropped-image area", and a square "crop box" of default size is displayed at the center of the "cropped-image area". The crop box can be stretched to enlarge it, pinched to shrink it, or dragged to a different position. A circular "avatar display area" is shown above the "cropped-image area" and may be an avatar preview control. The avatar in the preview control comes from the image region selected by the crop box: taking the center point of the selected region as the center, the avatar is cropped according to the radius of the preview control, and may additionally be scaled up proportionally to fill the control according to the size of the cropped image.
Further, tapping the circular avatar preview control displays a full-screen preview of the avatar. The avatar displayed full screen is the picture selected by the "crop box", cropped and then scaled proportionally to the screen size so that it can be shown full screen. Pressing the return key on the full-screen preview returns to the operation interface for editing the contact avatar.
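A minimal sketch of the crop-and-preview behaviour described above, assuming the crop box is available as a Rect in the source bitmap's coordinates and the preview control is a circle of a known pixel size (names and sizes are assumptions):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode
import android.graphics.Rect

// Sketch only: cut out the crop-box region, scale it proportionally, and mask it to a circle.
fun cropCircularAvatar(source: Bitmap, cropBox: Rect, previewSizePx: Int): Bitmap {
    // 1. Cut out the rectangular region selected by the crop box.
    val cropped = Bitmap.createBitmap(
        source, cropBox.left, cropBox.top, cropBox.width(), cropBox.height()
    )

    // 2. Scale proportionally so the shorter side fills the circular preview.
    val scale = previewSizePx.toFloat() / minOf(cropped.width, cropped.height)
    val scaled = Bitmap.createScaledBitmap(
        cropped, (cropped.width * scale).toInt(), (cropped.height * scale).toInt(), true
    )

    // 3. Mask the scaled image with a circle the size of the preview control.
    val output = Bitmap.createBitmap(previewSizePx, previewSizePx, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(output)
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    val radius = previewSizePx / 2f
    canvas.drawCircle(radius, radius, radius, paint)
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC_IN)
    // Draw the centre of the scaled selection into the circle.
    val left = (previewSizePx - scaled.width) / 2f
    val top = (previewSizePx - scaled.height) / 2f
    canvas.drawBitmap(scaled, left, top, paint)
    return output
}
```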
Specifically, in the case where no contact is selected, the contact name and the "ok" button are not displayed in the operation interface.
Step 13: receiving a second input for the operation interface.
The second input may be a predetermined gesture input for the operation interface; alternatively, as shown in fig. 3, when a contact has been selected and the contact name and an "ok" button are displayed in the operation interface, the second input may be a tap on the "ok" button.
Step 14: establishing an association between a target contact and a target avatar in response to the second input; the target contact is one of at least one contact displayed in the second display area, and the target avatar is a part or all of the target image.
The first display area and the second display area may be different display areas on the same display screen of the mobile terminal; alternatively, for a mobile terminal having at least two display screens, the first display area may be a display area on a first display screen of the mobile terminal and the second display area may be a display area on a second display screen of the mobile terminal.
For example, the contact list interface may be displayed in the second display area through a predetermined gesture input on the second display area, or through an input to a button that activates the contact list interface, as shown in fig. 3.
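The patent does not prescribe how the association of step 14 is persisted. On Android, one plausible sketch is to write the cropped avatar into the contacts provider; the rawContactId lookup and the helper name are assumptions, and error handling is omitted:

```kotlin
import android.content.ContentResolver
import android.content.ContentValues
import android.graphics.Bitmap
import android.provider.ContactsContract
import java.io.ByteArrayOutputStream

// Sketch only: store the cropped target avatar as the photo of the given raw contact.
// The contacts provider may downscale the stored bytes to its thumbnail size.
fun setContactAvatar(resolver: ContentResolver, rawContactId: Long, avatar: Bitmap) {
    val photoBytes = ByteArrayOutputStream().use { stream ->
        avatar.compress(Bitmap.CompressFormat.PNG, 100, stream)
        stream.toByteArray()
    }

    // Remove any existing photo row for this raw contact ...
    resolver.delete(
        ContactsContract.Data.CONTENT_URI,
        "${ContactsContract.Data.RAW_CONTACT_ID}=? AND ${ContactsContract.Data.MIMETYPE}=?",
        arrayOf(rawContactId.toString(), ContactsContract.CommonDataKinds.Photo.CONTENT_ITEM_TYPE)
    )

    // ... then insert the new avatar as a photo data row.
    val values = ContentValues().apply {
        put(ContactsContract.Data.RAW_CONTACT_ID, rawContactId)
        put(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Photo.CONTENT_ITEM_TYPE)
        put(ContactsContract.CommonDataKinds.Photo.PHOTO, photoBytes)
    }
    resolver.insert(ContactsContract.Data.CONTENT_URI, values)
}
```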
In this embodiment, an operation interface is displayed in response to a first input for the first display area, and the target image displayed in the first display area is in an editing state in the operation interface; an association between a target contact and a target avatar is established in response to a second input for the operation interface, where the target contact is one of at least one contact displayed in the second display area and the target avatar is part or all of the target image. Because the operation interface is displayed in the first display area while the contacts are displayed in the second display area, the user can conveniently set a contact avatar without switching among multiple operation interfaces, which simplifies the operation process.
As one implementation, when the first display area displays an image-viewing interface (e.g., the image-viewing interface of an album), step 12 may specifically include: displaying the operation interface in the first display area and taking the image displayed in the image-viewing interface as the target image.
As another implementation, when the first display area displays an arbitrary interface, such as a web page, a social application, an instant-messaging application, or another interface, step 12 may, as shown in fig. 4, further include:
step 121: in response to the first input, detecting whether an image is included in a current interface within the first display area.
Step 122: if images are contained, determining one of them as the target image and displaying the target image in an editing state in the operation interface.
Specifically, the step 122 can also be implemented as follows:
the first method is as follows: and if the number of the images is 1, determining that the images are the target images, and displaying the target images in an editing state in the operation interface.
In this embodiment, when only one image exists in the current interface in the first display area, the image is directly displayed in the operation interface as the target image, and the target image is in the editing state, so that the user can directly perform editing operation on the target image, thereby facilitating simplification of the user operation process.
The second method is as follows: if images are contained, displaying thumbnails corresponding to the images in a selection box; determining the image corresponding to a target thumbnail in the selection box as the target image, where the target thumbnail is one of the thumbnails in the selection box; and displaying the target image in an editing state in the operation interface.
Optionally, if the number of images is at least two, thumbnails corresponding to the at least two images are displayed in the selection box; one thumbnail in the selection box is determined as the target thumbnail, and the image corresponding to the target thumbnail is in turn determined as the target image, so that the target image is displayed in an editing state in the operation interface.
Optionally, if at least two images are detected in the current interface of the first display area, the images contained in the current interface are downloaded locally in the background (these images are not limited to the ones currently visible in the first display area; they may also include images in the interface that are not yet displayed).
When the number of images is at least two, a bar-shaped selection box may be displayed on the operation interface; for example, the selection box may be displayed in a floating manner. Optionally, thumbnails of the downloaded images may be displayed from left to right in the order in which the images were downloaded, as shown in fig. 5.
Further, determining one thumbnail in the selection box as the target thumbnail may be: determining the thumbnail at a target position in the selection box (e.g., the first one) as the target thumbnail.
Alternatively, determining one thumbnail in the selection box as the target thumbnail may be: receiving a sliding input at any position of the selection box and, in response to the sliding input, scrolling the thumbnails in the selection box. For example, on receiving a leftward sliding input at any position of the selection box, the thumbnails in the selection box are scrolled to the left; on receiving a rightward sliding input, they are scrolled to the right. Optionally, by scrolling the thumbnails in the selection box, the thumbnail slid to the target position can be determined as the target thumbnail. Further, the thumbnail display may be exited by operating the return key.
Furthermore, if the number of images is at least two, thumbnails corresponding to the at least two images are displayed in the selection box; a seventh input for a target thumbnail in the selection box is received; and, in response to the seventh input, the target image corresponding to the target thumbnail is displayed in an editing state in the operation interface.
In this embodiment, when the current interface of the first display area contains multiple images, their thumbnails are displayed in the selection box, so the user can conveniently preview the content of each image through its thumbnail, which also facilitates the user's selection operation.
Further, after the target image corresponding to the target thumbnail is displayed in an editing state in the operation interface in response to the seventh input, a leftward sliding input in the operation interface switches the operation interface to the downloaded image corresponding to the thumbnail preceding the target thumbnail, and a rightward sliding input switches it to the downloaded image corresponding to the following thumbnail. Notably, the position and size of the "crop box" in the operation interface do not change after switching; the "crop box" automatically selects a region of the switched image for cropping, and the circular avatar preview control is updated to show the region selected by the "crop box" after the switch. The avatar shown in the full-screen preview opened by tapping the avatar preview control changes accordingly. Optionally, operating the return key returns to the foreground interface that was displayed before the first input.
Step 123: and if the image is not included, acquiring a screen capture image of the current interface as the target image, and displaying the target image in an editing state in the operation interface.
Specifically, when the current interface displayed in the first display area contains no image, the interface currently displayed in the first display area can be captured to generate a screenshot image, which is taken as the target image; after editing, the screenshot image can then be used as the contact photo.
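As an illustration of this screenshot step (assuming the current interface is backed by a View hierarchy; other capture paths such as PixelCopy exist), the interface can be rendered into a bitmap roughly as follows:

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.view.View

// Minimal sketch: render the currently displayed root view of the first
// display area into a bitmap to use as the target image.
fun captureCurrentInterface(root: View): Bitmap {
    val screenshot = Bitmap.createBitmap(root.width, root.height, Bitmap.Config.ARGB_8888)
    root.draw(Canvas(screenshot))   // draws the view hierarchy into the bitmap
    return screenshot
}
```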
In this embodiment, a contact avatar can be set from any interface displayed in the first display area, which avoids the cumbersome process of first taking a screenshot, then performing an editing operation on the screenshot, and only then using it as the contact avatar, further helping to simplify the user's operations.
As shown in fig. 6, an embodiment of the present invention further provides an information operation method, which is applied to a mobile terminal, and includes:
step 61: a first input is received for a first display region.
Step 62: displaying an operation interface in response to the first input, where the target image displayed in the first display area is in an editing state in the operation interface.
Step 63: detecting whether the target image contains a face image.
Specifically, whether a face image exists in the target image can be determined through image recognition (e.g., face recognition) technology; if so, the following step 64 is performed; if not, the following step 66 is performed directly.
Step 64: if a face image is contained, matching the face image with the avatars respectively associated with the at least one contact displayed in the second display area.
Specifically, when the target image contains a face, the face image in the target image is compared one by one with the faces in the avatars of those contacts displayed in the second display area that already have avatars. If the face image in the target image matches the face in a contact's avatar (i.e., they are judged to belong to the same person), the following step 65 is performed; if not, the following step 66 is performed directly.
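The embodiment does not name a specific recognition library. As a rough, assumption-laden sketch, the platform FaceDetector class can locate a face in the target image, while the one-by-one comparison with stored avatars is delegated to whatever similarity function an implementation chooses (faceMatches below is hypothetical):

```kotlin
import android.graphics.Bitmap
import android.media.FaceDetector

// Rough sketch only: FaceDetector locates faces but does not identify people,
// so faceMatches() stands in for whichever recognition model an implementation uses.
fun findMatchingContact(
    targetImage: Bitmap,
    contactAvatars: Map<Long, Bitmap>,          // rawContactId -> stored avatar
    faceMatches: (Bitmap, Bitmap) -> Boolean    // assumed similarity function
): Long? {
    // FaceDetector requires an RGB_565 bitmap (and an even bitmap width).
    val rgb565 = targetImage.copy(Bitmap.Config.RGB_565, false)
    val faces = arrayOfNulls<FaceDetector.Face>(1)
    val detector = FaceDetector(rgb565.width, rgb565.height, faces.size)
    if (detector.findFaces(rgb565, faces) == 0) return null   // no face in the target image

    // Compare the target image against each contact avatar one by one.
    return contactAvatars.entries
        .firstOrNull { (_, avatar) -> faceMatches(targetImage, avatar) }
        ?.key
}
```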
Step 65: determining the contact associated with the avatar that matches the face image as the target contact.
Step 66: receiving a second input for the operation interface that selects the target contact.
Step 67: establishing an association between the target contact and the target avatar, where the target avatar is part or all of the target image.
In this embodiment, the operation interface is displayed in the first display area while the contacts are displayed in the second display area, so the user can conveniently set a contact avatar without switching among multiple operation interfaces, which simplifies the operation process. In addition, because the face image in the target image is matched against the faces in the avatars of the contacts displayed in the second display area, the user can be directly and purposefully prompted to set the contact avatar for the person whose face appears in the target image, i.e., the avatar is set for the contact whose existing avatar matches the face image in the target image, which helps improve the user experience.
As shown in fig. 7, an embodiment of the present invention further provides an information operation method, which is applied to a mobile terminal, and includes:
step 71: a first input is received for a first display region.
Step 72: displaying an operation interface in response to the first input, where the target image displayed in the first display area is in an editing state in the operation interface.
Step 73: detecting whether the target image contains a face image.
Step 74: if a face image is contained, matching the face image with the avatars respectively associated with the at least one contact displayed in the second display area.
Step 75: detecting whether the contact associated with the avatar that matches the face image is stored in the local storage space of the mobile terminal.
Optionally, when the mobile terminal stores contact information, it may be stored in the local storage space of the mobile terminal or on the SIM card. The mobile terminal can set a contact photo for contacts stored in the local storage space, but not for contacts stored on the SIM card. If the contact associated with the avatar that matches the face image is stored in the local storage space of the mobile terminal, the following step 76 is performed; if that contact is detected to be stored on the SIM card, the following step 77 is performed.
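Whether a contact lives in local storage or on the SIM card is typically inferred from the raw contact's account type; the following sketch assumes this, and the SIM account-type strings are illustrative placeholders that vary by vendor:

```kotlin
import android.content.ContentResolver
import android.provider.ContactsContract

// Sketch only: SIM account types are not standardized, so the set below is an
// illustrative assumption and would need to match the device vendor's values.
private val ASSUMED_SIM_ACCOUNT_TYPES = setOf("com.android.sim", "vnd.sec.contact.sim")

fun isStoredInLocalSpace(resolver: ContentResolver, rawContactId: Long): Boolean {
    resolver.query(
        ContactsContract.RawContacts.CONTENT_URI,
        arrayOf(ContactsContract.RawContacts.ACCOUNT_TYPE),
        "${ContactsContract.RawContacts._ID}=?",
        arrayOf(rawContactId.toString()),
        null
    )?.use { cursor ->
        if (cursor.moveToFirst()) {
            // Locally created contacts may have a null account type; treat them as local.
            val accountType = cursor.getString(0) ?: return true
            return accountType !in ASSUMED_SIM_ACCOUNT_TYPES
        }
    }
    return false
}
```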
Step 76: if the contact is stored in the local storage space, determining the contact associated with the avatar that matches the face image as the target contact.
Step 77: if the contact is not stored in the local storage space, displaying prompt information indicating that an avatar cannot be set for the contact associated with the avatar that matches the face image.
In this embodiment, an avatar cannot be set for a contact stored on the SIM card, and the user is prompted by displaying prompt information, for example a message such as "An avatar cannot be set for the selected contact."
Step 78: determining the contact associated with the avatar that matches the face image as the target contact.
Step 79: receiving a second input for the operation interface that selects the target contact.
Step 710: establishing an association between the target contact and the target avatar, where the target avatar is part or all of the target image.
In this embodiment, the operation interface is displayed in the first display area while the contacts are displayed in the second display area, so the user can conveniently set a contact avatar without switching among multiple operation interfaces, which simplifies the operation process. Because the face image in the target image is matched against the faces in the avatars of the contacts displayed in the second display area, the user can be directly and purposefully prompted to set the contact avatar for the person whose face appears in the target image, which helps improve the user experience. In addition, for contacts stored on the SIM card, the user is prompted that a contact avatar cannot be set, further improving the user experience.
Further, the operation interface includes: an image cropping area and a non-cropping area other than the image cropping area;
in the above embodiments, after the step of displaying an operation interface in response to the first input, the method may further include:
receiving a third input for the non-cropped area and a fourth input for a first contact in the second display area; deleting the avatar associated with the first contact in response to the third input and the fourth input.
As shown in fig. 8, the image cropping area may be the "crop box" on the operation interface in the first display area, and the non-cropping area may be the part of the operation interface outside the "crop box", for example the part of the "cropped-image area" that lies outside the "crop box".
Specifically, the user may long-press with a single finger an area of the first display area outside the "crop box" (the press duration exceeding a preset threshold) and, at the same time, tap the avatar of the first contact (for example, contact B in fig. 8) in the second display area; the avatar associated with contact B is then deleted.
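Deleting the first contact's avatar, as described above, could map onto the contacts provider roughly as follows (a sketch under the assumption that the rawContactId lookup happens elsewhere):

```kotlin
import android.content.ContentResolver
import android.provider.ContactsContract

// Minimal sketch: deleting contact B's avatar amounts to removing the photo
// data row attached to that raw contact. Returns the number of rows deleted.
fun deleteContactAvatar(resolver: ContentResolver, rawContactId: Long): Int =
    resolver.delete(
        ContactsContract.Data.CONTENT_URI,
        "${ContactsContract.Data.RAW_CONTACT_ID}=? AND ${ContactsContract.Data.MIMETYPE}=?",
        arrayOf(rawContactId.toString(), ContactsContract.CommonDataKinds.Photo.CONTENT_ITEM_TYPE)
    )
```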
Further, the operation interface includes: an image cropping area and a non-cropping area other than the image cropping area;
in the above embodiments, after the step of displaying an operation interface in response to the first input, the method may further include:
receiving a fifth input for the image cropping area and a sixth input for a second contact in the second display area; establishing an association between the target avatar and the second contact in response to the fifth input and the sixth input; and the target head portrait is an image in the image cutting area.
As shown in fig. 9, the image cropping area may be the "crop box" on the operation interface in the first display area, and the non-cropping area may be the part of the operation interface outside the "crop box", for example the part of the "cropped-image area" that lies outside the "crop box".
Specifically, the user may long-press the "crop box" of the first display area with a single finger (the press duration exceeding a preset threshold) and, at the same time, tap the avatar of the second contact (for example, contact B in fig. 9) in the second display area; an association is then established between the target avatar (the image selected by the "crop box") and contact B, i.e., the image in the "crop box" is used as contact B's avatar.
Further, after the step of determining the contact associated with the avatar matching the face image as the target contact, if an eighth input for one of the contacts in the second display area is received, the process returns to the step 75.
As shown in fig. 10, after the step of determining the contact associated with the avatar matching the face image as the target contact, if a double-click input is received for contact B in the second display area, the process returns to step 75.
As shown in fig. 11, an embodiment of the present invention further provides a mobile terminal 1100, including:
a first receiving module 1110 for receiving a first input for a first display region;
a first response module 1120, configured to display an operation interface in response to the first input; the target image displayed in the first display area is in an editing state in the operation interface;
a second receiving module 1130, configured to receive a second input for the operation interface;
a second response module 1140 for establishing an association between the target contact and the target avatar in response to the second input; the target contact is one of at least one contact displayed in the second display area, and the target avatar is a part or all of the target image.
Optionally, the first response module 1120 includes:
the first response submodule is used for responding to the first input and detecting whether the current interface in the first display area contains an image or not;
the first processing submodule is used for determining one of the images as the target image if the image is contained, and displaying the target image in an editing state in the operation interface;
and the second processing submodule is used for acquiring a screen capture image of the current interface as the target image if the image is not included, and displaying the target image in an editing state in the operation interface.
Optionally, the first processing sub-module includes:
the processing unit is used for displaying thumbnails corresponding to the images in a selection box if images are contained;
the determining unit is used for determining the image corresponding to a target thumbnail in the selection box as the target image; the target thumbnail is one of the thumbnails in the selection box;
and the display unit is used for displaying the target image in an editing state in the operation interface.
Optionally, the mobile terminal 1100 further includes:
the first detection module is used for detecting whether the target image contains a face image;
the matching module is used for matching, if a face image is contained, the face image with the avatars respectively associated with the at least one contact displayed in the second display area;
and the determining module is used for determining the contact associated with the head portrait matched with the face image as the target contact.
Optionally, the mobile terminal 1100 further includes:
the second detection module is used for detecting whether a contact associated with the head portrait matched with the face image is stored in a local storage space of the mobile terminal;
a first processing module, configured to, if the contact is stored in the local storage space, perform the step of determining the contact associated with the avatar that matches the face image as the target contact;
and a second processing module, configured to, if the contact is not stored in the local storage space, display prompt information indicating that an avatar cannot be set for the contact associated with the avatar that matches the face image.
Optionally, the operation interface includes: an image cropping area and a non-cropping area other than the image cropping area;
the mobile terminal 1100 further includes:
a third receiving module for receiving a third input for the non-cropped area and a fourth input for the first contact in the second display area;
a third response module to delete the avatar associated with the first contact in response to the third input and the fourth input.
Optionally, the operation interface includes: an image cropping area and a non-cropping area other than the image cropping area;
the mobile terminal 1100 further includes:
a fourth receiving module, configured to receive a fifth input for the image cropping area and a sixth input for a second contact in the second display area;
a fourth response module for establishing an association between the target avatar and the second contact in response to the fifth input and the sixth input; and the target head portrait is an image in the image cutting area.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 10, and is not described herein again to avoid repetition.
With the mobile terminal 1100 of this embodiment of the invention, an operation interface is displayed in response to a first input for a first display area, and the target image displayed in the first display area is in an editing state in the operation interface; an association between a target contact and a target avatar is established in response to a second input for the operation interface, where the target contact is one of at least one contact displayed in a second display area and the target avatar is part or all of the target image. Because the operation interface is displayed in the first display area while the contacts are displayed in the second display area, the user can conveniently set a contact avatar without switching among multiple operation interfaces, which simplifies the operation process.
Fig. 12 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensor 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, processor 1210, and power source 1211. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 12 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 1207 is configured to receive a first input for the first display area;
a processor 1210 for displaying an operation interface in response to the first input; the target image displayed in the first display area is in an editing state in the operation interface;
the user input unit 1207 is further configured to receive a second input for the operation interface;
a processor 1210 further configured to establish an association between a target contact and a target avatar in response to the second input; the target contact is one of at least one contact displayed in the second display area, and the target avatar is a part or all of the target image.
With the mobile terminal 1200 of this embodiment of the invention, an operation interface is displayed in response to a first input for a first display area, and the target image displayed in the first display area is in an editing state in the operation interface; an association between a target contact and a target avatar is established in response to a second input for the operation interface, where the target contact is one of at least one contact displayed in a second display area and the target avatar is part or all of the target image. Because the operation interface is displayed in the first display area while the contacts are displayed in the second display area, the user can conveniently set a contact avatar without switching among multiple operation interfaces, which simplifies the operation process.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1201 may be used for receiving and sending signals during information transmission and reception or during a call, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 1210; in addition, the uplink data is transmitted to the base station. Typically, the radio frequency unit 1201 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1201 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user through the network module 1202, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1203 may convert audio data received by the radio frequency unit 1201 or the network module 1202 or stored in the memory 1209 into an audio signal and output as sound. Also, the audio output unit 1203 may also provide audio output related to a specific function performed by the mobile terminal 1200 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1203 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1204 is used to receive audio or video signals. The input Unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042, and the Graphics processor 12041 processes image data of a still image or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1206. The image frames processed by the graphics processor 12041 may be stored in the memory 1209 (or other storage medium) or transmitted via the radio frequency unit 1201 or the network module 1202. The microphone 12042 can receive sound, and can process such sound into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 1201 in case of the phone call mode.
The mobile terminal 1200 also includes at least one sensor 1205, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 12061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 12061 and/or backlight when the mobile terminal 1200 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1205 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., and will not be described further herein.
The display unit 1206 is used to display information input by the user or information provided to the user. The Display unit 1206 may include a Display panel 12061, and the Display panel 12061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1207 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 1207 includes a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 12071 (e.g., operations by a user on or near the touch panel 12071 using a finger, a stylus, or any suitable object or attachment). The touch panel 12071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1210, receives a command from the processor 1210, and executes the command. In addition, the touch panel 12071 may be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 1207 may include other input devices 12072 in addition to the touch panel 12071. In particular, the other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 12071 may be overlaid on the display panel 12061, and when the touch panel 12071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1210 to determine the type of the touch event, and then the processor 1210 provides a corresponding visual output on the display panel 12061 according to the type of the touch event. Although the touch panel 12071 and the display panel 12061 are shown as two separate components in fig. 12 to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 12071 and the display panel 12061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 1208 is an interface for connecting an external device to the mobile terminal 1200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. Interface unit 1208 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within mobile terminal 1200 or may be used to transmit data between mobile terminal 1200 and external devices.
The memory 1209 may be used to store software programs as well as various data. The memory 1209 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1209 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 1210 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 1209 and calling data stored in the memory 1209, thereby integrally monitoring the mobile terminal. Processor 1210 may include one or more processing units; preferably, the processor 1210 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The mobile terminal 1200 may also include a power source 1211 (e.g., a battery) for powering the various components, and the power source 1211 may be logically connected to the processor 1210 through a power management system that may be configured to manage charging, discharging, and power consumption.
In addition, the mobile terminal 1200 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 1210, a memory 1209, and a computer program stored in the memory 1209 and capable of running on the processor 1210, where the computer program, when executed by the processor 1210, implements each process of the above-mentioned information operation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the information operation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. An information operation method applied to a mobile terminal is characterized by comprising the following steps:
receiving a first input for a first display area;
displaying an operation interface in response to the first input, wherein a target image displayed in the first display area is in an editing state in the operation interface;
receiving a second input for the operation interface;
establishing an association between a target contact and a target avatar in response to the second input, wherein the target contact is one of at least one contact displayed in a second display area, and the target avatar is part or all of the target image.
2. The method for manipulating information according to claim 1, wherein said displaying an operation interface in response to the first input comprises:
detecting whether a current interface in the first display area contains an image or not in response to the first input;
if images are contained, determining one of the images as the target image and displaying the target image in an editing state in the operation interface;
and if the image is not included, acquiring a screen capture image of the current interface as the target image, and displaying the target image in an editing state in the operation interface.
3. The method for manipulating information according to claim 2, wherein the determining one of the images as the target image if the image is included and displaying the target image in an edited state in the manipulation interface includes:
if images are contained, displaying thumbnails corresponding to the images in a selection box;
determining the image corresponding to a target thumbnail in the selection box as the target image, wherein the target thumbnail is one of the thumbnails in the selection box;
and displaying the target image in an editing state in the operation interface.
4. The method for manipulating information according to claim 1, wherein after displaying a manipulation interface in response to the first input and before receiving a second input to the manipulation interface, the method further comprises:
detecting whether the target image contains a face image or not;
if a face image is contained, matching the face image with avatars respectively associated with the at least one contact displayed in the second display area;
and determining a contact associated with the head portrait matched with the face image as the target contact.
5. The method for manipulating information according to claim 1, wherein the manipulation interface comprises: an image cropping area and a non-cropping area other than the image cropping area;
after the responding to the first input and displaying an operation interface, the method further comprises the following steps:
receiving a third input for the non-cropped area and a fourth input for a first contact in the second display area;
deleting the avatar associated with the first contact in response to the third input and the fourth input.
6. The method for manipulating information according to claim 1, wherein the manipulation interface comprises: an image cropping area and a non-cropping area other than the image cropping area;
after the responding to the first input and displaying an operation interface, the method further comprises the following steps:
receiving a fifth input for the image cropping area and a sixth input for a second contact in the second display area;
establishing an association between the target avatar and the second contact in response to the fifth input and the sixth input; and the target head portrait is an image in the image cutting area.
7. A mobile terminal, comprising:
the first receiving module is used for receiving a first input for a first display area;
the first response module is used for displaying an operation interface in response to the first input; the target image displayed in the first display area is in an editing state in the operation interface;
the second receiving module is used for receiving a second input for the operation interface;
the second response module is used for establishing an association between a target contact and a target avatar in response to the second input; the target contact is one of at least one contact displayed in a second display area, and the target avatar is part or all of the target image.
8. The mobile terminal of claim 7, wherein the first response module comprises:
a first response submodule, configured to detect, in response to the first input, whether a current interface in the first display area contains an image;
a first processing submodule, configured to determine, if an image is contained, one of the images as the target image, and display the target image in an editing state in the operation interface;
and a second processing submodule, configured to acquire, if no image is contained, a screen capture image of the current interface as the target image, and display the target image in an editing state in the operation interface.
9. The mobile terminal of claim 8, wherein the first processing sub-module comprises:
a processing unit, configured to display, if an image is contained, thumbnails corresponding to the images in a selection frame;
a determining unit, configured to determine an image corresponding to a target thumbnail in the selection frame as the target image, wherein the target thumbnail is one of the thumbnails in the selection frame;
and a display unit, configured to display the target image in an editing state in the operation interface.
10. The mobile terminal of claim 7, wherein the mobile terminal further comprises:
a first detection module, configured to detect whether the target image contains a face image;
a matching module, configured to match, if a face image is contained, the face image with avatars respectively associated with at least one contact displayed in the second display area;
and a determining module, configured to determine a contact associated with an avatar matched with the face image as the target contact.
11. The mobile terminal according to claim 7, wherein the operation interface comprises: an image cropping area and a non-cropping area other than the image cropping area;
the mobile terminal further includes:
a third receiving module, configured to receive a third input for the non-cropping area and a fourth input for a first contact in the second display area;
and a third response module, configured to delete an avatar associated with the first contact in response to the third input and the fourth input.
12. The mobile terminal according to claim 7, wherein the operation interface comprises: an image cropping area and a non-cropping area other than the image cropping area;
the mobile terminal further includes:
a fourth receiving module, configured to receive a fifth input for the image cropping area and a sixth input for a second contact in the second display area;
and a fourth response module, configured to establish an association between a target avatar and the second contact in response to the fifth input and the sixth input, wherein the target avatar is an image in the image cropping area.
13. A mobile terminal, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the information operation method according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the information operation method according to any one of claims 1 to 6.
CN201910818400.6A 2019-08-30 2019-08-30 Information operation method and mobile terminal Pending CN110618851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910818400.6A CN110618851A (en) 2019-08-30 2019-08-30 Information operation method and mobile terminal


Publications (1)

Publication Number Publication Date
CN110618851A true CN110618851A (en) 2019-12-27

Family

ID=68922874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910818400.6A Pending CN110618851A (en) 2019-08-30 2019-08-30 Information operation method and mobile terminal

Country Status (1)

Country Link
CN (1) CN110618851A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252296A (en) * 2014-08-28 2014-12-31 广州三星通信技术研究有限公司 Method and device for applying pictures to electronic terminal
CN106101357A (en) * 2016-06-28 2016-11-09 上海青橙实业有限公司 Information processing method and mobile terminal


Similar Documents

Publication Publication Date Title
CN108536365B (en) Image sharing method and terminal
CN108762954B (en) Object sharing method and mobile terminal
WO2019137429A1 (en) Picture processing method and mobile terminal
CN108958867B (en) Task operation method and device for application
CN109343755B (en) File processing method and terminal equipment
CN109213416B (en) Display information processing method and mobile terminal
CN110602565A (en) Image processing method and electronic equipment
CN108108079B (en) Icon display processing method and mobile terminal
CN108646960B (en) File processing method and flexible screen terminal
CN107728923B (en) Operation processing method and mobile terminal
CN108121486B (en) Picture display method and mobile terminal
CN110442279B (en) Message sending method and mobile terminal
CN110865745A (en) Screen capturing method and terminal equipment
CN109656461B (en) Screen capturing method and terminal
CN109271262B (en) Display method and terminal
CN108132749B (en) Image editing method and mobile terminal
CN108228902B (en) File display method and mobile terminal
CN108170329B (en) Display control method and terminal equipment
CN110609648A (en) Application program control method and terminal
CN110413363B (en) Screenshot method and terminal equipment
CN109542307B (en) Image processing method, device and computer readable storage medium
CN111459603A (en) Icon display method and electronic equipment
CN110955793A (en) Display control method and electronic equipment
CN109491964B (en) File sharing method and terminal
CN111142721A (en) Application icon processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191227