CN111954079A - Image processing method, image processing apparatus, electronic device, and medium - Google Patents

Image processing method, image processing apparatus, electronic device, and medium

Info

Publication number
CN111954079A
CN111954079A (application CN202010464424.9A)
Authority
CN
China
Prior art keywords
image
target
area
user
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010464424.9A
Other languages
Chinese (zh)
Other versions
CN111954079B (en)
Inventor
魏星 (Wei Xing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010464424.9A priority Critical patent/CN111954079B/en
Publication of CN111954079A publication Critical patent/CN111954079A/en
Application granted granted Critical
Publication of CN111954079B publication Critical patent/CN111954079B/en
Legal status: Active (granted)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a medium, belonging to the field of communications technologies. The method includes: displaying a target interface of an application program in a first area; and displaying a target image and information of the target image in a second area based on the content of the target interface, where the target image is an image operated by the user within a preset time length and the information of the target image includes a description label. With the disclosed image processing method, image-sharing operations are convenient and fast: because a description label is displayed for each target image, the user can easily find the image to be shared, which saves the time spent searching for the image to be shared and reduces the difficulty of the search.

Description

Image processing method, image processing apparatus, electronic device, and medium
Technical Field
Embodiments of the present invention relate to the field of communications technologies, and in particular, to an image processing method and apparatus, an electronic device, and a medium.
Background
At present, various communication applications are installed in electronic devices, and users often need to share screenshots while using them. When a screenshot is shared in a communication application, the typical operation flow is as follows:
the user exits the application, takes a screenshot at a target position, and the screenshot is saved to the album; after the screenshot is taken, the user opens the application again, taps a preset picture-sharing control in the application, a picture-selection interface pops up, the user selects the target screenshot from the picture-selection interface and then taps a picture-sending control, and finally the screenshot is shared. The operation is cumbersome and inconvenient for the user.
When multiple screenshots need to be shared with the same contact, each target screenshot has to be searched for in turn among the historical screenshots in the picture-selection interface, and whether a screenshot is the target screenshot has to be recognized from its preview thumbnail; this takes a long time and the search is difficult.
Disclosure of Invention
The embodiments of the application aim to provide an image processing method, an image processing apparatus, an electronic device, and a medium, which can solve the prior-art problems that searching for a target image is time-consuming and difficult when the target image is to be shared.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes: displaying a target interface of an application program in a first area; displaying a target image and information of the target image in a second area based on the content of the target interface; the target image is an image operated by a user within a preset time length, and the information of the target image comprises a description label.
In a second aspect, an embodiment of the present application provides an image processing apparatus, where the apparatus includes: the first display module is used for displaying a target interface of the application program in a first area; the second display module is used for displaying a target image and information of the target image in a second area based on the content of the target interface; the target image is an image operated by a user within a preset time length, and the information of the target image comprises a description label.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, a target interface of an application program is displayed in a first area, and a target image and information of the target image are displayed in a second area based on the content of the target interface. On the one hand, because the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images to share, which is convenient and fast. On the other hand, because a description label is displayed together with each target image in the second area, the user can easily find the image to be shared through the description label, which saves the time spent searching for the image to be shared and reduces the difficulty of the search.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive labor.
FIG. 1 is a flow chart illustrating the steps of an image processing method according to an embodiment of the present application;
FIG. 2 is one of schematic diagrams of a first screen and a second screen;
FIG. 3 is a second schematic view of the first screen and the second screen;
FIG. 4 is a third schematic view of the first screen and the second screen;
FIG. 5 is a fourth schematic view of the first screen and the second screen;
FIG. 6 is a fifth schematic view of the first screen and the second screen;
fig. 7 is a block diagram showing a configuration of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram showing a hardware configuration of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an image processing method according to an embodiment of the present application is shown.
The image processing method of the embodiment of the application comprises the following steps:
step 101: and displaying a target interface of the application program in the first area.
The target interface may be any interface of the application program: it may be the main interface of the application program or a functional interface of the application program. In particular, a functional interface of the application program may be a chat interface between the user and a certain contact.
The first area and the second area may be two different areas on the same screen, or may be areas on two different screens of the same electronic device.
In the implementation process, the image processing flow shown in the embodiment of the application may be executed when it is preliminarily determined that the user has an image-sharing requirement. For example, when it is detected that the application program is opened after the user has operated an image, it may be preliminarily determined that the user has an image-sharing requirement.
Step 102: and displaying the target image and the information of the target image in the second area based on the content of the target interface.
The target image is an image operated by the user within a preset time length, and there may be one or more target images. The information of the target image includes a description label. The user can select one or more of the target images displayed in the second area to share.
The preset time length may be set by a person skilled in the art according to actual needs and is not specifically limited in the embodiments of the present application. For example, the preset time length may be set to 1 day, 10 days, 1 month, or the like. The user's operation on an image includes at least one of: a preview operation on the image, a shooting operation that produces the image, an editing operation on the image, an image-generation operation by screen capture, and the like.
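The selection of target images operated within the preset time length can be sketched as a simple timestamp filter. This is a minimal illustration only; the record type and field names below are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class ImageRecord:
    path: str
    last_operated: datetime  # time of the last preview/shoot/edit/screenshot

def recent_target_images(images: List[ImageRecord],
                         preset_duration: timedelta,
                         now: datetime) -> List[ImageRecord]:
    """Keep only images the user operated within the preset duration,
    most recently operated first."""
    cutoff = now - preset_duration
    recent = [img for img in images if img.last_operated >= cutoff]
    return sorted(recent, key=lambda img: img.last_operated, reverse=True)
```

With a 10-day preset duration, an image last operated a month ago would be excluded, while yesterday's screenshot would appear first in the second area.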
The description label includes at least one of: an image source label and an image descriptor label. The image source label indicates the source of the image, and the image descriptor label summarizes the content contained in the image.
The image descriptor label may be determined from the content of the image alone: for example, the main content of the image is recognized by a preset image-recognition algorithm, a short image descriptor is extracted, and the image descriptor label is generated from the descriptor. Alternatively, the label may be determined from the location of the image, the content it contains, and the associated context information. For example, when the image is a cosmetics image in the album, the semantics obtained after content recognition are simply "cosmetics"; but when the image is a cosmetics image on a web page of an online shopping mall, the image carries a purchasing connotation, so the semantics obtained by combining the context information should also include implicit meanings such as purchase, link, and promotion. An image descriptor label generated from both the image content and these implicit semantics therefore better matches the user's intention in operating the image. Generating the label from the image content alone reduces the processing load, whereas determining it from the location of the image, the content it contains, and the associated context information greatly improves recognition accuracy.
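The two label-generation strategies above can be sketched as follows. This is a minimal illustration under stated assumptions: the image-recognition step is stubbed out with precomputed content labels, and all names are hypothetical.

```python
from typing import Dict, List, Optional

def build_description_label(content_labels: List[str],
                            source: str,
                            context_keywords: Optional[List[str]] = None) -> Dict[str, str]:
    """Build a description label for an image.

    content_labels: summary words produced by an image-recognition step.
    source: where the image lives (e.g. "album", "shopping web page").
    context_keywords: optional implicit semantics derived from the image's
    location and context (e.g. "purchase", "link"); when omitted, only the
    image content is used, which reduces processing load.
    """
    words = list(content_labels)
    if context_keywords:
        # Merge implicit semantics that content recognition alone would miss.
        words += [w for w in context_keywords if w not in words]
    return {"descriptor": ", ".join(words), "source": source}
```

For the cosmetics example above, `build_description_label(["cosmetics"], "shopping web page", ["purchase", "link"])` yields a descriptor that reflects the purchasing intent, whereas the album variant keeps only the recognized content.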
In an optional implementation, before the step of displaying the target image and the information of the target image in the second area based on the content of the target interface, a description label may be generated for the target image according to the location of the target image, the content it contains, or the context information of the content in the target interface. For example, when the image is an animal image in the album, content recognition may determine that the image contains two animals, a cat and a dog; if the content in the target interface is a chat in which the user and a certain contact discuss dogs, an image descriptor label indicating that the image relates to a dog, and an image source label indicating that the image comes from the album, may be generated for the image.
Optionally, the context information of the content in the target interface may be obtained by performing semantic analysis according to the content of the target interface.
When the image descriptor label is determined from the location of the image, the content it contains, and the context information of the content in the target interface, the recommended images fit the chat scene better.

Referring to fig. 2, the display of target images is described by taking the case where the first area and the second area are respectively a first screen and a second screen of the electronic device.
Fig. 2 is the first schematic diagram of the first screen and the second screen. As shown in fig. 2, a target interface of an application program is displayed in the first screen 201, a plurality of target images 2021 are displayed in the second screen 202, and each target image is displayed with a corresponding description label 20211. Through the description label, the user learns the content summary and the source of each target image, which makes it convenient to determine the image to be shared.
According to the image processing method provided by the embodiment of the application, a target interface of an application program is displayed in the first area, and the target image and the information of the target image are displayed in the second area based on the content of the target interface. On the one hand, because the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images to share, which is convenient and fast. On the other hand, because a description label is displayed together with each target image in the second area, the user can easily find the image to be shared through the description label, which saves the time spent searching for the image to be shared and reduces the difficulty of the search.
In an optional embodiment, after a description label is generated for each target image according to its location, the content it contains, and the context information of the content in the target interface, the contact corresponding to at least one target image is determined according to the user's historical chat records with each contact in the application program. The target image, its description label, and the corresponding contact information are then displayed together in the second area.
The system analyzes the contacts with whom the user has recently chatted within a certain period and records the context of each chat. When chat content semantically related to any target image is recognized, that contact is taken as a possible sharing contact for the target image, and corresponding contact prompt information is generated for the image. This helps the user quickly find the contact with whom the target image is most likely to be shared.
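The contact-matching step can be sketched as a word-overlap score between a target image's description-label words and each contact's recent messages. This is a minimal sketch, assuming a keyword-overlap heuristic in place of the semantic analysis; the function and contact names are hypothetical.

```python
from typing import Dict, List, Optional

def suggest_sharing_contact(label_words: List[str],
                            chat_histories: Dict[str, List[str]]) -> Optional[str]:
    """Pick the contact whose recent messages overlap most with the image's
    description-label words; return None when nothing is related."""
    labels = {w.lower() for w in label_words}
    best_contact, best_score = None, 0
    for contact, messages in chat_histories.items():
        words = set(" ".join(messages).lower().split())
        score = len(labels & words)
        if score > best_score:
            best_contact, best_score = contact, score
    return best_contact
```

A real implementation would use semantic analysis rather than exact word overlap, but the shape of the decision is the same: each label is compared against each recent chat, and the best-matching contact becomes the prompt shown next to the image.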
In the embodiment of the present application, the following implementable modes are still described by taking the first area and the second area as the first screen and the second screen of the electronic device, respectively.
Fig. 3 is the second schematic diagram of the first screen and the second screen. As shown in fig. 3, a target interface of an application program is displayed in the first screen 301, and two target images, labeled 3021 and 3022 respectively, are displayed in the second screen 302. By analyzing the historical chat records, the target image 3021 best matches the chat content between the user and Xiaohong, and the target image 3022 best matches the chat content between the user and Xiaohuang. In the second screen, the target image 3021 is therefore displayed with a description label 30211 and the sharing contact Xiaohong 30212, and the target image 3022 is displayed with a description label 30221 and the sharing contact Xiaohuang 30222. Through the description label, the user learns the content summary and the source of the target image; through the sharing contact, the user is prompted with the contact that best matches the target image, so the user can quickly determine the target image to be shared and the contact to share it with.

In an optional embodiment, for each target image, after the target image, its description label, and its corresponding contact are displayed in the second area, an image recommendation process may further be performed, including the following steps:
receiving a first input of the user on a target contact; and in response to the first input, jumping from the target interface of the application program to a chat interface between the user and the target contact.
The first input may be a single-click, double-click, or long-press operation on a target contact among the contacts displayed in the second area.
For example, performing a single-click operation on the contact "Xiaohuang" shown in fig. 3 triggers the image processing apparatus to open a chat interface between the user and Xiaohuang.
After jumping from the target interface of the application program to the chat interface between the user and the target contact, the chat interface is displayed in the first area. The chat interface includes the recent chat content between the user and Xiaohuang.
In an optional embodiment, in a case that the target interface displayed in the first area is a chat interface between the user and the target contact, after the step of displaying the target image and the information of the target image in the second area based on the content of the target interface, the method may further include the following steps:
and sequencing the target images according to the chat content of the user and the target contact and the description labels corresponding to the target images, and displaying the sequenced target images in the second area.
The image processing apparatus performs contextual analysis on the content of the chat interface displayed in the first area, matches the analysis result against the description label of each target image displayed in the second area, and sorts the target images by matching similarity. Sorting the target images makes it easy for the user to find, among a large number of target images, the image most likely to be shared.
If the target contact changes, or the chat content between the user and the target contact changes, the chat content in the chat interface displayed in the first area is analyzed again, and the target images displayed in the second area are re-sorted and re-displayed.
Fig. 4 is the third schematic diagram of the first screen and the second screen: if the content of the chat interface in the first screen shown in fig. 4 relates to "zoo", the target images whose description labels relate to zoos and animals are displayed first in the second screen. Fig. 5 is the fourth schematic diagram of the first screen and the second screen: if the content of the chat interface in the first screen shown in fig. 5 relates to "document content", the target images whose description labels relate to document content are displayed first in the second screen. Comparing fig. 4 and fig. 5 shows that, because the user switched the chat content, the ordering of the target images displayed in the second screen was readjusted accordingly.
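The re-sorting step can be sketched as a similarity-keyed sort over (image, label-words) pairs. As with the contact matcher, this is a sketch that substitutes keyword overlap for the contextual analysis; all names are hypothetical.

```python
from typing import List, Tuple

def rank_target_images(images: List[Tuple[str, List[str]]],
                       context_words: List[str]) -> List[Tuple[str, List[str]]]:
    """Sort (image, label_words) pairs so that images whose description
    labels share the most words with the current chat context come first."""
    ctx = {w.lower() for w in context_words}

    def score(item: Tuple[str, List[str]]) -> int:
        _, label_words = item
        return len(ctx & {w.lower() for w in label_words})

    # sorted() is stable, so equally scored images keep their prior order.
    return sorted(images, key=score, reverse=True)
```

Switching the context words from zoo-related to document-related terms flips the ordering, mirroring the readjustment shown between fig. 4 and fig. 5.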
In an optional embodiment, after the sorted target images are displayed in the second area, the following target image sharing process may be further included:
receiving a first input of the user on a first image among the target images; in response to the first input, sending the first image to the target contact; and moving the first image from a first sub-area of the second area to a second sub-area for display, where the first sub-area displays target images that have not been shared and the second sub-area displays target images that have been shared.
The first input may be an operation of sliding the first image into the first area, or a selection operation on the first image; the image processing apparatus sends the selected first image to the target contact corresponding to the chat interface currently displayed in the first area. Two sub-areas may be set in the second area, namely the first sub-area and the second sub-area; after being sent to the target contact, the first image can be moved into the second sub-area for display.
In this optional manner, the user can share the first image by performing a simple first input, which is convenient and fast. In addition, moving the first image from the first sub-area to the second sub-area for display makes it convenient to manage shared images later.
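The two-sub-area bookkeeping can be sketched as a small state holder that moves images between the unshared and shared lists. This is a minimal illustration of the display logic only, with hypothetical names; it is not the disclosed implementation.

```python
from typing import List

class SecondAreaState:
    """Tracks which target images sit in the first (unshared) and
    second (shared) sub-areas of the second area."""

    def __init__(self, target_images: List[str]):
        self.unshared: List[str] = list(target_images)  # first sub-area
        self.shared: List[str] = []                     # second sub-area

    def share(self, image: str) -> None:
        # First input: after sending to the target contact, move the image
        # from the first sub-area to the second sub-area.
        self.unshared.remove(image)
        self.shared.append(image)

    def withdraw(self, image: str) -> None:
        # Second input: revoke the share and move the image back.
        self.shared.remove(image)
        self.unshared.append(image)

    def delete(self, image: str) -> None:
        # Third input: remove the image from the display entirely.
        if image in self.shared:
            self.shared.remove(image)
        elif image in self.unshared:
            self.unshared.remove(image)
```

Keeping the shared images in their own sub-area is what makes the later withdraw and delete gestures unambiguous: each gesture maps to one transition in this state holder.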
For a shared first image, management operations such as withdrawing the share and deleting can be performed, and the chat content corresponding to the first image can be quickly located based on the image.
Optionally, sharing of the first image can be withdrawn as follows. After the first image has been sent to the target contact and moved from the first sub-area to the second sub-area for display in response to the first input, a second input is received in which the user moves the first image from the chat interface of the target contact into the second area; or a second input is received in which the user moves the first image from the second sub-area into the first sub-area. In response to the second input, the first image shared to the target contact is withdrawn.
Fig. 6 is the fifth schematic diagram of the first screen and the second screen. A chat interface with the target contact is displayed in the first screen, and a sent image, i.e., the first image, is displayed in that chat interface; in the second screen, the shared first image is displayed in the second sub-area. The user can slide the shared first image from the first screen into the second screen with a rightward-sliding operation, triggering the image processing apparatus to withdraw the first image shared to the target contact. The user can also slide the shared first image out of the second sub-area with an upward-sliding operation in the second screen, so that the first image moves into the first sub-area of the second screen, triggering the system to withdraw the first image shared to the target contact.
In this optional manner, the user can withdraw the shared first image by performing a simple second input, which is convenient and fast.
Optionally, a shared first image can be deleted as follows. After the first image has been sent to the target contact and moved from the first sub-area to the second sub-area of the second area for display in response to the first input, a third input of the user on the first image is received; in response to the third input, the first image is deleted from the first area and the second area.
The third input may act on the first image in the chat interface of the target contact, or on the first image in the second sub-area of the second area. Optionally, receiving the third input of the user on the first image includes: receiving an operation in which the user slides the first image out of the display area from the chat interface of the target contact.
As shown in fig. 6, the user can slide the shared first image out of the display area from the first screen with a leftward-sliding operation, triggering the image processing apparatus to delete the shared first image from the first screen and the second screen. The method is of course not limited to this: sliding the first image in the second sub-area of the second screen rightward out of the display area of the second screen can likewise trigger the image processing apparatus to delete the first image from both screens.
In this optional manner, the user can delete the shared first image by executing a simple third input, so that the user can conveniently delete and manage the shared first image.
Optionally, the chat content corresponding to the first image can be quickly located based on the first image as follows. After the first image has been sent to the target contact and moved from the first sub-area to the second sub-area of the second area for display in response to the first input, a fourth input of the user on the first image in the second sub-area is received; in response to the fourth input, the context segment corresponding to the first image is located in the chat record of the target contact; and the context segment is displayed in the chat interface of the target contact.
The fourth input may be an operation of sliding the first image in the second sub-region into the chat interface of the target contact in the first screen.
Assuming that the first image is a target image whose label is "zoo", the user slides the first image in the second sub-area of the second screen leftward into the first screen. The first image is then matched against the chat record of the target contact to locate the corresponding zoo-related context segment, and the chat interface displays that segment, for example "Shall we go to the zoo tomorrow afternoon?" and the like.
Searching the chat record by performing a fourth input on the first image in the second sub-area makes it convenient for the user to find the target chat record quickly and accurately.
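The context-segment lookup can be sketched as scanning the chat record for the first message that mentions a description-label word and returning the messages around it. This is a sketch under the assumption of simple substring matching; names and the window parameter are hypothetical.

```python
from typing import List

def locate_context_segment(chat_record: List[str],
                           label_word: str,
                           window: int = 1) -> List[str]:
    """Return the messages surrounding the first message that mentions the
    description-label word, or an empty list when no message matches."""
    needle = label_word.lower()
    for i, message in enumerate(chat_record):
        if needle in message.lower():
            start = max(0, i - window)
            # Include `window` messages before and after the hit for context.
            return chat_record[start:i + window + 1]
    return []
```

For the "zoo" example, the segment returned would include the message "Shall we go to the zoo tomorrow afternoon?" together with its neighbors, which is what the chat interface then displays.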
According to the image processing method provided by the embodiments of the present application, the target interface of the application program is displayed in the first area, and the target image and the information of the target image are displayed in the second area based on the content of the target interface. On one hand, since the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images to share, which is convenient and fast. On the other hand, since a description label is displayed with each target image in the second area, the user can easily find the image to be shared through the description label, saving the time spent searching for it and reducing the difficulty of the search. In addition, the embodiments of the present application also provide management operations such as withdrawing and deleting the shared first image, making it convenient for the user to manage shared images.
It should be noted that the execution subject of the image processing method provided in the embodiments of the present application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing method provided by the embodiments of the present application.
Fig. 7 is a block diagram of an image processing apparatus implementing an embodiment of the present application.
The image processing apparatus 700 of the embodiment of the present application includes:
a first display module 701, configured to display a target interface of an application in a first area;
a second display module 702, configured to display a target image and information of the target image in a second area based on content of the target interface;
the target image is an image operated by a user within a preset time length, and the information of the target image comprises a description label.
Optionally, the apparatus further comprises: a generating module, configured to generate a description label for each target image according to the position of each target image in the target interface, the content it contains, and the content context information, before the second display module displays the target image and the information of the target image in the second area based on the content of the target interface.
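A label-generation step of the kind the generating module performs could be sketched as follows. The heuristic and all names are illustrative assumptions; the patent only names the three input signals (the image's position, its content, and the content context):

```python
def generate_description_label(position, content_keywords, context_text):
    """Derive one short description label from the three signals named in
    the patent: the image's position in the target interface, the content
    it contains, and the surrounding context text."""
    # Prefer a content keyword that the chat context corroborates.
    for kw in content_keywords:
        if kw in context_text:
            return kw
    # Otherwise fall back to the first content keyword, then to position.
    if content_keywords:
        return content_keywords[0]
    return "image@{}".format(position)
```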
Optionally, the apparatus further comprises:
a contact determining module, configured to determine at least one contact corresponding to the target image according to the historical chat records between the user and each contact in the application program, after the generating module generates the description label for each target image according to the position of each target image in the target interface, the content it contains, and the content context information.
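The contact-determination step could be sketched as below, assuming (purely for illustration) that a chat history is a mapping from contact names to lists of past messages:

```python
def contacts_for_image(label, chat_history):
    """chat_history maps contact name -> list of past messages; return the
    contacts whose history mentions the image's description label."""
    return [contact
            for contact, messages in chat_history.items()
            if any(label in msg for msg in messages)]
```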
Optionally, the apparatus further comprises: a sorting module, configured to, after the second display module displays the target image and the information of the target image in the second area based on the content of the target interface, and in the case that the target interface is a chat interface between the user and a target contact, sort each target image according to the chat content between the user and the target contact and the description label corresponding to each target image; and a third display module, configured to display each sorted target image in the second area.
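The sorting step admits a simple sketch, assuming (illustratively) that relevance is measured by how often an image's description label occurs in the current chat content:

```python
def sort_target_images(images, chat_content):
    """images: list of (image_id, label) pairs. Rank the images whose
    labels occur most often in the current chat content first."""
    return sorted(images,
                  key=lambda item: chat_content.count(item[1]),
                  reverse=True)
```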
Optionally, the apparatus further comprises: a first input receiving module, configured to receive a first input of the user to a first image of the target images after the third display module displays the sorted target images in a second area; the sending module is used for responding to the first input, sending the first image to the target contact person, and moving the first image from a first sub-area to a second sub-area in the second area for displaying, wherein the first sub-area is used for displaying the target image which is not shared, and the second sub-area is used for displaying the shared target image.
Optionally, the apparatus further comprises:
a second input receiving module, configured to receive a second input that the user moves the first image from the chat interface of the target contact into a second region after the sending module sends the first image to the target contact in response to the first input and moves the first image from a first sub-region to a second sub-region in the second region for display; or, receiving a second input that the user moves the first image from the second sub-region into the first sub-region;
a recall module to recall the first image shared with the target contact in response to the second input.
Optionally, the apparatus further comprises: a third input receiving module, configured to receive a third input of the first image by the user after the sending module sends the first image to the target contact in response to the first input and moves the first image from a first sub-area to a second sub-area in the second area for display;
a deletion module to delete the first image from the first area and the second area in response to the third input.
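Taken together, the share, recall, and delete operations on the two sub-areas amount to a small state machine, which can be sketched as follows (the class and method names are illustrative assumptions, not from the patent):

```python
class SharedImagePanel:
    """Two sub-areas of the second area: unshared images sit in the first
    sub-area, shared ones in the second. Names are illustrative only."""

    def __init__(self, images):
        self.unshared = list(images)  # first sub-area
        self.shared = []              # second sub-area

    def share(self, image):
        """First input: send the image and move it from sub-area 1 to 2."""
        self.unshared.remove(image)
        self.shared.append(image)

    def recall(self, image):
        """Second input: withdraw the shared image, moving it back to 1."""
        self.shared.remove(image)
        self.unshared.append(image)

    def delete(self, image):
        """Third input: remove the image from both sub-areas."""
        if image in self.unshared:
            self.unshared.remove(image)
        if image in self.shared:
            self.shared.remove(image)
```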
Optionally, the apparatus further comprises: a fourth input receiving module, configured to receive a fourth input of the user on the first image in the second sub-area after the sending module sends the first image to the target contact in response to the first input and moves the first image from the first sub-area to the second sub-area for display; a positioning module, configured to, in response to the fourth input, locate a context segment corresponding to the first image in the chat records of the target contact; and a segment display module, configured to display the context segment in the chat interface of the target contact.
The image processing apparatus in the embodiments of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA); the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not limited thereto.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which the embodiments of the present application do not specifically limit.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiments of fig. 1 to fig. 6, and for avoiding repetition, details are not repeated here.
The image processing apparatus provided by the embodiments of the present application displays the target interface of the application program in the first area, and displays the target image and the information of the target image in the second area based on the content of the target interface. On one hand, since the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images to share, which is convenient and fast. On the other hand, since a description label is displayed with each target image in the second area, the user can easily find the image to be shared through the description label, saving the time spent searching for it and reducing the difficulty of the search.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 810, a memory 809, and a program or an instruction stored in the memory 809 and executable on the processor 810, where the program or the instruction is executed by the processor 810 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The display unit 806 is configured to display a target interface of the application program in the first area; displaying a target image and information of the target image in a second area based on the content of the target interface; the target image is an image operated by a user within a preset time length, and the information of the target image comprises a description label.
In the embodiments of the present application, the electronic device displays the target interface of the application program in the first area, and displays the target image and the information of the target image in the second area based on the content of the target interface. On one hand, since the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images to share, which is convenient and fast. On the other hand, since a description label is displayed with each target image in the second area, the user can easily find the image to be shared through the description label, saving the time spent searching for it and reducing the difficulty of the search.
Optionally, the processor 810 is configured to generate a description label for each target image according to the position of each target image in the target interface, the content it contains, and the content context information, before the display unit 806 displays the target image and the information of the target image in the second area based on the content of the target interface.
Optionally, the processor 810 is further configured to, after generating the description label for each target image according to the position of each target image in the target interface, the content it contains, and the content context information, determine at least one contact corresponding to the target image according to the historical chat records between the user and each contact in the application program.
Optionally, the processor 810 is further configured to, after the display unit 806 displays the target image and the information of the target image in the second area based on the content of the target interface, in a case that the target interface is a chat interface between a user and a target contact, sort each target image according to the chat content between the user and the target contact and a description tag corresponding to each target image;
a display unit 806, configured to display each sorted target image in the second area.
Optionally, the user input unit 807 is configured to receive a first input of the user on a first image of the target images after the display unit 806 displays the sorted target images in the second area; the network module 802 is configured to send the first image to the target contact in response to the first input; and the processor 810 is further configured to move the first image from a first sub-area in the second area to a second sub-area for display, wherein the first sub-area is used to display target images that have not been shared, and the second sub-area is used to display target images that have been shared.
Optionally, the user input unit 807 is further configured to receive a second input that the user moves the first image from the chat interface of the target contact to the second area after the processor 810 moves the first image from the first sub-area to the second sub-area in the second area for display; or, receiving a second input that the user moves the first image from the second sub-region into the first sub-region;
the processor 810 is further configured to, in response to the second input, recall the first image shared with the target contact.
Optionally, a user input unit 807, further configured to receive a third input of the first image by the user after the processor 810 moves the first image from the first sub-area to the second sub-area in the second area for display;
a processor 810 further configured to delete the first image from the first area and the second area in response to the third input.
Optionally, a user input unit 807 for receiving a fourth input of the first image in a second sub-region by the user after the processor 810 moves the first image from the first sub-region to the second sub-region for display;
a processor 810, further configured to locate a context segment corresponding to the first image in a chat log of the target contact in response to the fourth input;
The display unit 806 is further configured to display the context segment in the chat interface of the target contact.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
displaying a target interface of an application program in a first area;
displaying a target image and information of the target image in a second area based on the content of the target interface;
the target image is an image operated by a user within a preset time length, and the information of the target image comprises a description label.
2. The method of claim 1, wherein prior to the step of displaying a target image and information for the target image in a second area based on the content of the target interface, the method further comprises:
and generating a description label for the target image according to the position of the target image, the contained content or the content context information in the target interface.
3. The method of claim 2, wherein after the step of generating a description label for the target image according to the position of the target image, the contained content, or the content context information in the target interface, the method further comprises:
and determining at least one contact corresponding to the target image according to the historical chat records of the user and each contact in the application program.
4. The method of claim 1, wherein after the step of displaying a target image and information of the target image in a second area based on the content of the target interface, the method further comprises:
when the target interface is a chat interface between a user and a target contact person, sequencing each target image according to the chat content between the user and the target contact person and the description label corresponding to each target image;
and displaying each ordered target image in the second area.
5. The method of claim 4, wherein after the step of displaying the sorted target images in the second region, the method further comprises:
receiving a first input of a first image in the target images by the user;
responding to the first input, sending the first image to the target contact, and moving the first image from a first sub-area in the second area to a second sub-area for display, wherein the first sub-area is used for displaying target images that have not been shared, and the second sub-area is used for displaying target images that have been shared.
6. The method of claim 5, wherein after the step of sending the first image to the target contact and moving the first image from a first sub-area to display in a second sub-area in the second area in response to the first input, the method further comprises:
receiving a second input that the user moves the first image from the chat interface of the target contact to the second area; or,
receiving a second input by the user to move the first image from the second sub-region into the first sub-region;
in response to the second input, revoking the first image shared to the target contact.
7. The method of claim 5, wherein after the step of sending the first image to the target contact and moving the first image from a first sub-area to display in a second sub-area in the second area in response to the first input, the method further comprises:
receiving a third input of the first image by the user;
deleting the first image from the first area and the second area in response to the third input.
8. The method of claim 5, wherein after the step of sending the first image to the target contact and moving the first image from a first sub-area to display in a second sub-area in the second area in response to the first input, the method further comprises:
receiving a fourth input by the user to the first image in the second sub-region;
in response to the fourth input, locating a context segment corresponding to the first image in a chat record of the target contact;
and displaying the context segment in the chat interface of the target contact.
9. An image processing apparatus, characterized in that the apparatus comprises:
the first display module is used for displaying a target interface of the application program in a first area;
the second display module is used for displaying a target image and information of the target image in a second area based on the content of the target interface;
the target image is an image operated by a user within a preset time length, and the information of the target image comprises a description label.
10. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the image processing method of any one of claims 1 to 8.
CN202010464424.9A 2020-05-27 2020-05-27 Image processing method, device, electronic equipment and medium Active CN111954079B (en)




