CN111954079B - Image processing method, device, electronic equipment and medium - Google Patents

Image processing method, device, electronic equipment and medium

Info

Publication number
CN111954079B
Authority
CN
China
Prior art keywords
image
target
user
input
area
Prior art date
Legal status
Active
Application number
CN202010464424.9A
Other languages
Chinese (zh)
Other versions
CN111954079A (en)
Inventor
魏星
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010464424.9A
Publication of CN111954079A
Application granted
Publication of CN111954079B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device and a medium, belonging to the technical field of communication. The method comprises the following steps: displaying a target interface of an application program in a first area; and displaying a target image and information of the target image in a second area based on the content of the target interface. The target image is an image operated by the user within a preset time period, and the information of the target image includes a description tag. Because a description tag is correspondingly displayed for each target image, the user can easily find the image to be shared, which saves the time for finding the image to be shared, reduces the difficulty of the search, and makes the image sharing operation convenient.

Description

Image processing method, device, electronic equipment and medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method, an image processing device, electronic equipment and a medium.
Background
Currently, various communication applications are installed on electronic devices, and users often need to share screenshots through these communication applications. At present, when a screenshot is shared in a communication application, the general operation flow is as follows:
The user exits the application, takes a screenshot at a target position, and the screenshot is stored in the album. After the screenshot is finished, the user restarts the application, clicks a preset picture sharing control in the application, a picture selection interface pops up, the user selects the target screenshot from the picture selection interface and clicks a picture sending control, and the screenshot sharing is finally finished. This operation is complex and inconvenient for the user.
When multiple screenshots need to be shared with the same contact, each target screenshot needs to be searched for in sequence among the historical screenshots in the picture selection interface, and whether a screenshot is the target screenshot has to be identified from its preview image, which is time-consuming and makes the search difficult.
Disclosure of Invention
The embodiments of the application aim to provide an image processing method, an image processing apparatus, an electronic device and a medium, which can solve the problem that existing methods for finding a target image when sharing an image are time-consuming and make the search difficult.
In order to solve the technical problems, the invention is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes: displaying a target interface of the application program in a first area; displaying a target image and information of the target image in a second area based on the content of the target interface; the target image is an image operated by a user within a preset time period, and the information of the target image comprises a description tag.
In a second aspect, an embodiment of the present application provides an image processing apparatus, where the apparatus includes: the first display module is used for displaying a target interface of the application program in the first area; the second display module is used for displaying a target image and information of the target image in a second area based on the content of the target interface; the target image is an image operated by a user within a preset time period, and the information of the target image comprises a description tag.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiment of the application, a target interface of an application program is displayed in a first area, and the target image and the information of the target image are displayed in a second area based on the content of the target interface. On the one hand, because the target images recently operated by the user are displayed in the second area, the user can directly select one or more images from the displayed target images for sharing, which is convenient. On the other hand, because a description tag is correspondingly displayed for each target image displayed in the second area, the user can easily find the image to be shared by means of the description tag, so that the time for finding the image to be shared can be saved and the difficulty of the search is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart showing the steps of an image processing method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of the first screen and the second screen;
FIG. 3 is a second schematic diagram of the first screen and the second screen;
FIG. 4 is a third schematic diagram of the first screen and the second screen;
FIG. 5 is a fourth schematic diagram of the first screen and the second screen;
FIG. 6 is a fifth schematic diagram of the first screen and the second screen;
FIG. 7 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of an image processing method according to an embodiment of the present application is shown.
The image processing method of the embodiment of the application comprises the following steps:
step 101: and displaying a target interface of the application program in the first area.
The target interface can be any interface of the application program: it can be the main interface of the application program, a functional interface of the application program, or a chat interface between the user and a certain contact.
The first area and the second area may be two different areas on the same screen, or may be areas on two different screens of the same electronic device.
In the implementation process, when it is preliminarily determined that the user has an image-sharing requirement, the image processing flow shown in the embodiment of the application is executed. For example, when it is monitored that an application program is started after the user has operated an image, it can be preliminarily determined that the user has an image-sharing requirement.
Step 102: and displaying the target image and the information of the target image in the second area based on the content of the target interface.
The target image is an image operated by the user within a preset time period, and there may be one or more target images. The information of the target image includes a description tag. The user may select one or more images from the target images displayed in the second area for sharing.
The preset duration may be set by those skilled in the art according to actual needs and is not specifically limited in the embodiments of the present application. For example, the preset duration may be set to 1 day, 10 days, 1 month, or the like. The user's operation on an image includes at least one of: a preview operation on an image, a photographing operation on an image, an editing operation on an image, an operation of generating an image by screenshot, and the like.
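The selection of target images described above (images the user operated within a preset duration, most recent first) can be sketched in Python. The record fields and function names below are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of a user-operated image; the patent does not name
# these fields.
@dataclass
class ImageRecord:
    path: str
    last_operated: datetime
    operation: str = "preview"  # e.g. "preview", "shoot", "edit", "screenshot"

def select_target_images(records, preset_duration=timedelta(days=1)):
    """Return images operated within the preset duration, newest first."""
    now = datetime.now()
    recent = [r for r in records if now - r.last_operated <= preset_duration]
    return sorted(recent, key=lambda r: r.last_operated, reverse=True)
```

The preset duration is an ordinary parameter here, matching the patent's examples of 1 day, 10 days, or 1 month.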
The description tag includes at least one of: an image source tag and an image description tag. The image source tag is used for representing the source of the image, and the image description tag is used for representing a summary of the content contained in the image.
The image description tag may be determined only from the content contained in the image. For example, the main content in the image is identified by a preset image recognition algorithm, a short image description is extracted, and the image description tag is generated from it. Alternatively, the tag may be determined based on the location of the image, the content it contains, and the associated context information. For example, when the image is a cosmetics image in the album, the semantics obtained after image content recognition are simply "cosmetics"; but when the image is a cosmetics image on an online-mall web page, the image has purchasing properties, so the semantics obtained by combining the context information include other implicit meanings such as "purchase", "link" and "promotion". Therefore, an image description tag generated based on both the content in the image and these implicit semantics better matches the user's intention in operating the image. When the image description tag is generated only from the content contained in the image, the processing load can be reduced. When the image description tag is determined by combining the location of the image, the content it contains and the associated context information, the image recognition accuracy can be greatly improved.
In an alternative implementation, before the step of displaying the target image and the information of the target image in the second area based on the content of the target interface, a description tag may be generated for the target image according to the location of the target image, the content it contains, and the content context information in the target interface. For example, when the image is an animal image in the album, it can be determined after image content recognition that the image contains two animals, a cat and a dog. If the user and a certain contact are discussing dog-related chat content in the target interface, a description tag indicating that the image is about a dog and an image source tag indicating that the image comes from the album can be generated for the image.
Optionally, the content context information in the target interface may refer to the result of semantic analysis performed on the content of the target interface.
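As a rough illustration of combining recognized content with context semantics when generating a description tag, the following sketch assumes the recognition step has already produced a list of content labels; the patent leaves the recognition algorithm unspecified, and all names here are hypothetical:

```python
def generate_description_tag(content_labels, source, context_keywords):
    """Prefer recognized labels that also occur in the chat/page context,
    so the tag reflects the user's likely intent; fall back to all
    recognized labels when nothing in the context matches."""
    matched = [c for c in content_labels if c in context_keywords]
    description = ", ".join(matched if matched else content_labels)
    return {"description": description, "source": source}
```

In the cat-and-dog example above, a chat about dogs would narrow the tag to "dog" even though both animals were recognized in the image.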
When the image description tag is determined by combining the location of the image, the content it contains, and the content context information in the target interface, the recommended images can better match the chat scene.
Referring to FIG. 2, the target image display mode will be described by taking the case where the first area and the second area are respectively a first screen and a second screen of the electronic device as an example.
FIG. 2 is a first schematic diagram of the first screen and the second screen. As shown in FIG. 2, a target interface of an application program is displayed in the first screen 201, and a plurality of target images 2021 are displayed in the second screen 202; each target image is displayed with a corresponding description tag 20211. Through the description tag, the user can learn the content summary and the source of the target image, which makes it convenient for the user to determine the image to be shared according to the content summary and the image source.
According to the image processing method, a target interface of an application program is displayed in a first area, and the target image and the information of the target image are displayed in a second area based on the content of the target interface. On the one hand, because the target images recently operated by the user are displayed in the second area, the user can directly select one or more images from the displayed target images for sharing, which is convenient. On the other hand, because a description tag is correspondingly displayed for each target image displayed in the second area, the user can easily find the image to be shared by means of the description tag, so that the time for finding the image to be shared can be saved and the difficulty of the search is reduced.
In an alternative embodiment, after a description tag is generated for each target image according to the location of the target image, the content it contains and the content context information in the target interface, at least one contact corresponding to the target image is determined according to the historical chat records of the user with each contact in the application program. The target image, the description tag of the target image and the corresponding contact information are then correspondingly displayed in the second area.
The system analyzes the contacts the user has chatted with within a recent period and the chat-record context of each contact. When chat content semantically related to any target image is identified, that contact is taken as a possible sharing target of the target image, and corresponding contact prompt information is generated for the target image, which helps the user quickly find the contact with whom the target image is most likely to be shared.
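A naive version of this contact analysis could be keyword overlap between a target image's tag and the recent messages of each contact. The patent does not prescribe a matching algorithm, so the sketch below is only illustrative:

```python
def match_contacts(tag_keywords, recent_chats):
    """recent_chats maps a contact name to a list of recent message strings.
    Return the contacts whose recent messages mention any tag keyword."""
    matches = []
    for contact, messages in recent_chats.items():
        text = " ".join(messages).lower()
        if any(k.lower() in text for k in tag_keywords):
            matches.append(contact)
    return matches
```

Each matched contact would then be shown as contact prompt information next to the corresponding target image.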
In this embodiment, the case where the first area and the second area are respectively the first screen and the second screen of the electronic device is still taken as an example to describe the following embodiments.
FIG. 3 is a second schematic diagram of the first screen and the second screen. As shown in FIG. 3, a target interface of an application program is displayed in the first screen 301, and two target images 3021 and 3022 are displayed in the second screen 302. By analyzing the historical chat records, it is found that the target image 3021 best matches the chat content between the user and the contact Xiaohong, and the target image 3022 best matches the chat content between the user and the contact Xiaohuang. Then, in the second screen, the target image 3021 is displayed with a description tag 30211 and the sharing contact Xiaohong 30212, and the target image 3022 is displayed with a description tag 30221 and the sharing contact Xiaohuang 30222. Through the description tag, the user can learn the content summary and the source of the target image, and the sharing contact prompts the user with the contact that best matches the target image, so that the user can quickly determine the target image to be shared and the contact to share it with.
In an alternative embodiment, after displaying the target image, the description tag of the target image, and the contact corresponding to the target image in the second area, an image recommendation procedure as follows may further be performed for each target image, including the following steps:
Receiving a first input of a user to a target contact; in response to the first input, a jump is made from the target interface of the application to a chat interface of the user with the target contact.
The first input may be a single click operation, a double click operation, a long press operation, or the like on a target contact among the contacts displayed in the second area.
For example, a single-click operation on the contact "Xiaohuang" shown in FIG. 3 may trigger the image processing apparatus to open the chat interface between the user and Xiaohuang.
After jumping from the target interface of the application program to the chat interface between the user and the target contact, the chat interface between the user and the target contact is displayed in the first area. The chat interface includes the recent chat content between the user and Xiaohuang.
In an alternative embodiment, in the case that the target interface displayed in the first area is a chat interface between the user and the target contact, after the step of displaying the target image and the information of the target image in the second area based on the content of the target interface, the method may further include the following steps:
The target images are sorted according to the chat content between the user and the target contact and the description tag corresponding to each target image, and the sorted target images are displayed in the second area.
The image processing apparatus performs context analysis on the content in the chat interface displayed in the first area, matches the analysis result with the description tags of the target images displayed in the second area, and sorts the target images according to the matching similarity. Sorting the target images makes it easy for the user to find the most likely image to be shared from a large number of target images.
If the target contact changes or the chat content between the user and the target contact changes, the chat content in the chat interface displayed in the first area is re-analyzed, and the target images displayed in the second area are re-sorted and displayed.
FIG. 4 is a third schematic diagram of the first screen and the second screen. The content in the chat interface in the first screen shown in FIG. 4 is related to "zoo", so among the target images displayed in the second screen, the target images whose description tags are related to the zoo and animals are displayed at the front. FIG. 5 is a fourth schematic diagram of the first screen and the second screen. If the content in the chat interface in the first screen shown in FIG. 5 is related to "document content", then among the target images displayed in the second screen, the target images whose description tags are related to the document content are displayed at the front. Comparing FIG. 4 and FIG. 5 shows that the ordering of the target images displayed in the second screen is readjusted as the user switches the chat content.
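The re-sorting behaviour shown in FIG. 4 and FIG. 5 can be modelled as ranking images by the overlap between their tag keywords and the current chat keywords. Jaccard similarity is used here purely as a stand-in for whatever matching the patent's implementation would use:

```python
def rank_images_by_chat(images, chat_keywords):
    """images: list of (image_id, tag_keywords) pairs.
    Sort by Jaccard similarity with the chat context, highest first."""
    chat = {k.lower() for k in chat_keywords}
    def score(item):
        tags = {k.lower() for k in item[1]}
        union = chat | tags
        return len(chat & tags) / len(union) if union else 0.0
    return sorted(images, key=score, reverse=True)
```

Calling this again with new chat keywords re-ranks the list, mirroring the re-sorting when the user switches chat content.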
In an alternative embodiment, after displaying the sorted target images in the second area, the method may further include the following target image sharing procedure:
Receiving a first input of the user on a first image among the target images; in response to the first input, sending the first image to the target contact; and moving the first image from a first sub-area of the second area, which is used for displaying target images that have not been shared, to a second sub-area, which is used for displaying target images that have been shared.
The first input may be an operation of sliding the first image into the first area, or a selection operation on the first image, after which the image processing apparatus sends the selected first image to the target contact corresponding to the chat interface currently displayed in the first area. Two sub-areas, namely a first sub-area and a second sub-area, can be arranged in the second area, and after the first image is sent to the target contact it can be moved to the second sub-area for display.
In this alternative mode, the user can complete the sharing of the first image by performing a simple first input, which is convenient. In addition, moving the first image from the first sub-area to the second sub-area for display makes it convenient to manage the shared first image later.
For the shared first image, management operations such as repeated sharing and deletion can be performed, and the chat content corresponding to the first image can be quickly located based on the first image.
In an optional way of withdrawing the sharing of the first image, after the first image is sent to the target contact in response to the first input and moved from the first sub-area of the second area to the second sub-area for display, a second input of the user moving the first image from the chat interface of the target contact to the second area is received; or a second input of the user moving the first image from the second sub-area to the first sub-area is received. In response to the second input, the first image shared with the target contact is withdrawn.
FIG. 6 is a fifth schematic diagram of the first screen and the second screen. A chat interface with the target contact is displayed in the first screen, and the sent image, i.e., the first image, is displayed in the second screen; the first image that has been shared is displayed in the second sub-area. The user can slide the shared first image from the first screen into the second screen through a right-sliding operation, which triggers the image processing apparatus to withdraw the first image shared with the target contact. The user can also slide the shared first image out of the second sub-area through an upward-sliding operation in the second screen, so that the first image is in the first sub-area of the second screen, which triggers the system to withdraw the first image shared with the target contact.
In this alternative mode, the user can withdraw the shared first image by performing a simple second input, which is convenient.
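The movement of an image between the two sub-areas during sharing and withdrawal can be modelled as a small state holder. This is only a sketch of the behaviour described above; `send` is an injected callback standing in for the unspecified message-sending step:

```python
class ShareTray:
    """Second area with two sub-areas: unshared images in the first,
    shared images in the second."""
    def __init__(self, images):
        self.unshared = list(images)  # first sub-area
        self.shared = []              # second sub-area

    def share(self, image, send):
        send(image)                   # deliver to the target contact
        self.unshared.remove(image)
        self.shared.append(image)     # move into the "shared" sub-area

    def withdraw(self, image):
        # Models the second input: recall the image and move it back.
        self.shared.remove(image)
        self.unshared.append(image)
```

The same object could also back the deletion and context-locating operations described below, but those are omitted here for brevity.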
In an optional way of deleting the shared first image, after the first image is sent to the target contact in response to the first input and moved from the first sub-area of the second area to the second sub-area for display, a third input of the user on the first image is received; in response to the third input, the first image is deleted from the first area and the second area.
The third input may act on the first image in the chat interface of the target contact, or on the first image in the second sub-area of the second area. Optionally, receiving the third input of the user on the first image includes: receiving an operation of the user sliding the first image out of the display area in the chat interface of the target contact.
As shown in FIG. 6, the user can slide the shared first image out of the display area through a left-sliding operation, triggering the image processing apparatus to delete the shared first image from the first screen and the second screen. Of course, this is not limited thereto; the image processing apparatus may also be triggered to delete the first image from the first screen and the second screen by sliding the first image in the second sub-area of the second screen rightward out of the display area of the second screen.
In this alternative mode, the user can delete the shared first image by performing a simple third input, which makes it convenient for the user to delete and manage the shared first image.
In an optional way of quickly locating the chat content corresponding to the first image based on the first image, after the first image is sent to the target contact in response to the first input and moved from the first sub-area of the second area to the second sub-area for display, a fourth input of the user on the first image in the second area is received; in response to the fourth input, a context fragment corresponding to the first image is located in the chat record of the target contact; and the context fragment is displayed in the chat interface of the target contact.
The fourth input may be an operation of sliding the first image in the second sub-area into the chat interface of the target contact in the first screen.
Assuming that the first image is a target image whose description tag is "zoo", the user slides the first image in the second sub-area of the second screen leftward into the first screen. The context fragment corresponding to the first image in the chat record of the target contact is a zoo-related fragment, so at this time a zoo-related fragment such as "Shall we go to the zoo tomorrow afternoon?" is located and displayed in the chat interface.
This way of finding the chat record by performing the fourth input on the first image in the second sub-area makes it convenient for the user to find the target chat record quickly and accurately.
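Locating the context fragment for a shared image can be sketched as a keyword scan over the chat record, returning a window of messages around the first hit. The matching strategy is an assumption for illustration, not the patent's stated method:

```python
def locate_context_fragment(chat_log, tag_keywords, window=1):
    """chat_log: list of message strings in chronological order.
    Return the messages around the first one mentioning a tag keyword."""
    keys = [k.lower() for k in tag_keywords]
    for i, msg in enumerate(chat_log):
        if any(k in msg.lower() for k in keys):
            return chat_log[max(0, i - window): i + window + 1]
    return []  # no related fragment found
```

The returned fragment would then be scrolled to and displayed in the chat interface of the target contact.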
According to the image processing method, a target interface of an application program is displayed in a first area, and the target image and the information of the target image are displayed in a second area based on the content of the target interface. On the one hand, because the target images recently operated by the user are displayed in the second area, the user can directly select one or more images from the displayed target images for sharing, which is convenient. On the other hand, because a description tag is correspondingly displayed for each target image displayed in the second area, the user can easily find the image to be shared by means of the description tag, so that the time for finding the image to be shared can be saved and the difficulty of the search is reduced. In addition, in the embodiment of the present application, management operations such as withdrawal and deletion can be performed on the shared first image, which makes it convenient for the user to manage the shared first image.
It should be noted that the execution subject of the image processing method provided in the embodiments of the present application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, the image processing method provided in the embodiments of the present application is described by taking an image processing apparatus executing the image processing method as an example.
Fig. 7 is a block diagram of an image processing apparatus implementing an embodiment of the present application.
The image processing apparatus 700 of the embodiment of the present application includes:
a first display module 701, configured to display a target interface of an application program in a first area;
a second display module 702, configured to display a target image and information of the target image in a second area based on the content of the target interface;
the target image is an image operated by a user within a preset time period, and the information of the target image comprises a description tag.
Optionally, the apparatus further comprises: the generating module is configured to generate, before the second display module displays the target image and the information of the target image in the second area based on the content of the target interface, a description tag for each target image according to the position where each target image is located, the content included in the target image, and the content context information in the target interface.
Optionally, the apparatus further comprises:
and the contact person determining module is used for determining at least one contact person corresponding to the target image according to the historical chat record of the user and each contact person in the application program after the generating module generates the description label for each target image according to the position of each target image, the contained content and the content context information in the target interface.
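Under the stated assumptions, the contact determination could amount to scoring each contact's chat history against the image's description tag, as in this illustrative sketch (the history layout and all names are hypothetical, not the claimed module):

```python
def related_contacts(history, tag, top_n=1):
    """history: {contact: [message, ...]}. Score each contact by how often
    the tag appears in their chat history and return the best matches."""
    scores = {c: sum(tag in m for m in msgs) for c, msgs in history.items()}
    ranked = sorted((c for c, s in scores.items() if s > 0),
                    key=lambda c: scores[c], reverse=True)
    return ranked[:top_n]
```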
Optionally, the apparatus further comprises: the sorting module is used for sorting the target images according to the chat content of the user and the target contact person and the description labels corresponding to the target images under the condition that the target interface is a chat interface of the user and the target contact person after the second display module displays the target images and the information of the target images in a second area based on the content of the target interface; and the third display module is used for displaying the sorted target images in the second area.
Optionally, the apparatus further comprises: a first input receiving module, configured to receive a first input of the user to a first image in the target images after the third display module displays the sorted target images in a second area; and the sending module is used for responding to the first input, sending the first image to the target contact person, and moving the first image from a first subarea to a second subarea in the second area for displaying the target image which is not shared, wherein the second subarea is used for displaying the target image which is shared.
Optionally, the apparatus further comprises:
a second input receiving module, configured to receive a second input that the user moves the first image from the chat interface of the target contact to the second area after the sending module responds to the first input, sends the first image to the target contact, and moves the first image from the first sub-area to the second sub-area in the second area for display; or, receiving a second input by the user to move the first image from the second sub-region to the first sub-region;
and a revocation module, configured to withdraw, in response to the second input, the first image shared with the target contact.
Optionally, the apparatus further comprises: a third input receiving module, configured to receive a third input of the user to the first image after the sending module responds to the first input, sends the first image to the target contact, and moves the first image from a first sub-region to a second sub-region in the second region for display;
and a deletion module configured to delete the first image from the first area and the second area in response to the third input.
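The send, withdraw, and delete transitions between the two sub-areas can be modelled as a small state tracker. This is a hypothetical Python sketch of the bookkeeping, not the claimed modules; the region names mirror the first/second sub-areas of the description.

```python
class SharedImageTracker:
    """Tracks which sub-area of the second area each image occupies:
    'first' = not yet shared, 'second' = already shared."""

    def __init__(self, image_ids):
        self.region = {i: "first" for i in image_ids}

    def send(self, image_id):
        # First input: share the image and move it to the second sub-area.
        self.region[image_id] = "second"

    def withdraw(self, image_id):
        # Second input: withdraw the share and move it back.
        self.region[image_id] = "first"

    def delete(self, image_id):
        # Third input: remove the image from both areas.
        self.region.pop(image_id, None)
```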
Optionally, the apparatus further comprises: a fourth input receiving module for receiving a fourth input of the first image from the user in the second sub-area after the transmitting module responds to the first input, transmits the first image to the target contact and moves the first image from the first sub-area to the second sub-area in the second area for display; a positioning module, configured to respond to the fourth input, and position, in the chat record of the target contact, an up-down Wen Pianduan corresponding to the first image; and the segment display module is used for displaying the segment in the chat interface of the target contact person.
The image processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing device provided in this embodiment of the present application can implement each process implemented by the image processing device in the method embodiments of fig. 1 to 6, and in order to avoid repetition, a description is omitted here.
The image processing apparatus provided in the embodiment of the present application displays a target interface of an application program in a first area, and displays a target image and information of the target image in a second area based on the content of the target interface. On the one hand, since the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images for sharing, which is convenient. On the other hand, since a description tag is displayed with each target image in the second area, the description tags help the user to find the image to be shared easily, saving search time and reducing search difficulty.
Optionally, the embodiment of the present application further provides an electronic device, including a processor 810, a memory 809, and a program or an instruction stored in the memory 809 and executable on the processor 810, where the program or the instruction, when executed by the processor 810, implements each process of the above image processing method embodiment with the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further include a power source (e.g., a battery) for supplying power to the various components, which may be logically connected to the processor 810 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which will not be described in detail here.
Wherein, the display unit 806 is configured to display a target interface of the application program in the first area; displaying a target image and information of the target image in a second area based on the content of the target interface; the target image is an image operated by a user within a preset time period, and the information of the target image comprises a description tag.
In the embodiment of the present application, the electronic device displays a target interface of an application program in a first area, and displays a target image and information of the target image in a second area based on the content of the target interface. On the one hand, since the target images recently operated by the user are displayed in the second area, the user can directly select one or more of the displayed target images for sharing, which is convenient. On the other hand, since a description tag is displayed with each target image in the second area, the description tags help the user to find the image to be shared easily, saving search time and reducing search difficulty.
Optionally, the processor 810 is configured to generate, before the display unit 806 displays the target images and the information of the target images in the second area based on the content of the target interface, a description tag for each of the target images according to the location where each of the target images is located, the content contained, and the content context information in the target interface, respectively.
Optionally, the processor 810 is further configured to determine, after generating a description tag for each target image according to the location where each target image is located, the content included in the target image, and content context information in the target interface, at least one contact corresponding to the target image according to a historical chat record of the user with each contact in the application program.
Optionally, the processor 810 is further configured to, after the display unit 806 displays, in the second area, the target image and the information of the target image based on the content of the target interface, and if the target interface is a chat interface between the user and the target contact, sort each target image according to the chat content between the user and the target contact and the description label corresponding to each target image;
a display unit 806, configured to display each of the sorted target images in the second area.
Optionally, a user input unit 807 is configured to receive a first input by the user of a first image of the target images after the display unit 806 displays the sorted target images in the second region; a network module 802 for sending the first image to the target contact in response to the first input; the processor 810 is further configured to move the first image from a first sub-area in the second area to be displayed in the second sub-area, where the first sub-area is used for displaying the target image that is not shared, and the second sub-area is used for displaying the target image that is shared.
Optionally, the user input unit 807 is further configured to, after the processor 810 moves the first image from the first sub-region to the second sub-region in the second region for display, receive a second input by the user to move the first image from the chat interface of the target contact to the second region; or, receiving a second input by the user to move the first image from the second sub-region to the first sub-region;
the processor 810 is further configured to withdraw the first image shared with the target contact in response to the second input.
Optionally, the user input unit 807 is further configured to receive a third input of said first image by said user after said first image is moved by the processor 810 from a first sub-region of said second region to be displayed in the second sub-region;
the processor 810 is further configured to delete the first image from the first area and the second area in response to the third input.
Optionally, the user input unit 807 is further configured to receive a fourth input of the user on the first image in the second sub-region after the processor 810 moves the first image from the first sub-region to the second sub-region in the second region for display;
the processor 810 is further configured to locate, in response to the fourth input, a context segment corresponding to the first image in the chat record of the target contact;
the display unit 806 is further configured to display the context segment in the chat interface of the target contact.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, each process of the above image processing method embodiment is implemented with the same technical effects; to avoid repetition, details are not repeated here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above image processing method embodiment with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, a system-on-chip, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art may make many other forms in light of the present application without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (7)

1. An image processing method, comprising:
displaying a target interface of the application program in a first area;
displaying a target image and information of the target image in a second area based on the content of the target interface, wherein the target image is an image operated by a user within a preset duration, and the information of the target image comprises a description tag;
under the condition that the target interface is a chat interface of the user and a target contact, sorting each target image according to chat content of the user and the target contact and a description label corresponding to each target image;
displaying the sorted target images in the second area;
receiving a first input of the user to a first image of the target images;
in response to the first input, sending the first image to the target contact and moving the first image from a first sub-region in the second region to be displayed in a second sub-region, the first sub-region being for displaying the target image that is not shared, the second sub-region being for displaying the target image that is already shared;
receiving a second input that the user moves the first image from the chat interface of the target contact to the second area; or,
receiving a second input that the user moves the first image from the second sub-region to the first sub-region;
in response to the second input, withdrawing the first image shared with the target contact.
2. The method of claim 1, wherein prior to the step of displaying a target image and information of the target image in a second area based on the content of the target interface, the method further comprises:
and generating a description tag for the target image according to the position of the target image, contained content or content context information in the target interface.
3. The method of claim 2, wherein after the step of generating a descriptive label for the target image based on the location of the target image, the contained content, or content context information in the target interface, the method further comprises:
and determining at least one contact corresponding to the target image according to the historical chat record of the user and each contact in the application program.
4. The method of claim 1, wherein after the steps of sending the first image to the target contact and moving the first image from a first sub-region to a second sub-region in the second region for display in response to the first input, the method further comprises:
Receiving a third input of the user to the first image;
the first image is deleted from the first region and the second region in response to the third input.
5. The method of claim 1, wherein after the steps of sending the first image to the target contact and moving the first image from a first sub-region to a second sub-region in the second region for display in response to the first input, the method further comprises:
receiving a fourth input by the user of the first image in the second sub-region;
in response to the fourth input, locating a context segment corresponding to the first image in a chat record of the target contact;
and displaying the context segment in the chat interface of the target contact.
6. An image processing apparatus, characterized in that the apparatus comprises:
the first display module is used for displaying a target interface of the application program in the first area;
the second display module is used for displaying a target image and information of the target image in a second area based on the content of the target interface;
the target image is an image operated by a user within a preset time period, and the information of the target image comprises a description tag;
The sorting module is used for sorting the target images according to the chat content of the user and the target contact person and the description labels corresponding to the target images under the condition that the target interface is a chat interface of the user and the target contact person after the second display module displays the target images and the information of the target images in a second area based on the content of the target interface;
a third display module, configured to display each of the sorted target images in the second area;
a first input receiving module, configured to receive a first input of the user to a first image in the target images after the third display module displays the sorted target images in a second area;
a sending module, configured to, in response to the first input, send the first image to the target contact and move the first image from a first sub-area to a second sub-area in the second area for display, where the first sub-area is used for displaying the target images that are not shared and the second sub-area is used for displaying the target images that are already shared;
a second input receiving module, configured to receive a second input that the user moves the first image from the chat interface of the target contact to the second area after the sending module responds to the first input, sends the first image to the target contact, and moves the first image from the first sub-area to the second sub-area in the second area for display; or, receiving a second input by the user to move the first image from the second sub-region to the first sub-region;
and a revocation module, configured to withdraw, in response to the second input, the first image shared with the target contact.
7. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the image processing method of any one of claims 1 to 5.
CN202010464424.9A 2020-05-27 2020-05-27 Image processing method, device, electronic equipment and medium Active CN111954079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010464424.9A CN111954079B (en) 2020-05-27 2020-05-27 Image processing method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010464424.9A CN111954079B (en) 2020-05-27 2020-05-27 Image processing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111954079A CN111954079A (en) 2020-11-17
CN111954079B true CN111954079B (en) 2023-05-26

Family

ID=73337705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010464424.9A Active CN111954079B (en) 2020-05-27 2020-05-27 Image processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111954079B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114968144A * 2021-02-27 2022-08-30 Huawei Technologies Co., Ltd. Splicing display method, electronic equipment and system
CN113691443B * 2021-08-30 2022-11-11 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image sharing method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108432260A * 2015-12-24 2018-08-21 Samsung Electronics Co., Ltd. Electronic equipment and its display control method
WO2019228294A1 * 2018-05-29 2019-12-05 Vivo Mobile Communication Co., Ltd. Object sharing method and mobile terminal
CN110602565A * 2019-08-30 2019-12-20 Vivo Mobile Communication Co., Ltd. Image processing method and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1997980B * 2003-05-16 2011-07-06 Google Inc. Networked chat and media sharing systems and methods
US8341544B2 * 2007-12-14 2012-12-25 Apple Inc. Scroll bar with video region in a media system
WO2017087561A1 * 2015-11-17 2017-05-26 Advisual, Inc. Methods and systems for dynamic chat background
CN106681623B * 2016-10-26 2019-01-11 Vivo Mobile Communication Co., Ltd. A kind of sharing method and mobile terminal of screenshot image
CN107896279A * 2017-11-16 2018-04-10 Vivo Mobile Communication Co., Ltd. Screenshot processing method, device and mobile terminal
CN108228715A * 2017-12-05 2018-06-29 Shenzhen Gionee Communication Equipment Co., Ltd. A kind of method, terminal and computer readable storage medium for showing image
CN109782976B * 2019-01-15 2020-12-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. File processing method, device, terminal and storage medium
CN110134306B * 2019-04-08 2021-12-14 Nubia Technology Co., Ltd. Data sharing method and device and computer readable storage medium
CN110865745A * 2019-10-28 2020-03-06 Vivo Mobile Communication Co., Ltd. Screen capturing method and terminal equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108432260A * 2015-12-24 2018-08-21 Samsung Electronics Co., Ltd. Electronic equipment and its display control method
WO2019228294A1 * 2018-05-29 2019-12-05 Vivo Mobile Communication Co., Ltd. Object sharing method and mobile terminal
CN110602565A * 2019-08-30 2019-12-20 Vivo Mobile Communication Co., Ltd. Image processing method and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine; Liang Qiao et al.; Computer Methods and Programs in Biomedicine; pp. 77-91 *
Analysis of WeChat interface interaction design based on user experience; 李炳琰; Journal of Tongling University (01); pp. 1-5 *
Image retrieval system based on object semantic features; 高永英, 章毓晋, 罗云; Journal of Electronics &amp; Information Technology (10); pp. 1341-1348 *

Also Published As

Publication number Publication date
CN111954079A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN105528388B (en) Search recommendation method and device
CN110826302A (en) Questionnaire creating method, device, medium and electronic equipment
CN111813284B (en) Application program interaction method and device
WO2017181528A1 (en) Search display method and device
CN111954079B (en) Image processing method, device, electronic equipment and medium
CN113194024B (en) Information display method and device and electronic equipment
CN112882623B (en) Text processing method and device, electronic equipment and storage medium
CN112333084B (en) File sending method and device and electronic equipment
CN113037925B (en) Information processing method, information processing apparatus, electronic device, and readable storage medium
CN113869063A (en) Data recommendation method and device, electronic equipment and storage medium
CN113220393A (en) Display method and device and electronic equipment
CN112100463A (en) Information processing method and device, electronic equipment and readable storage medium
CN112416212A (en) Program access method, device, electronic equipment and readable storage medium
CN113325986B (en) Program control method, program control device, electronic device and readable storage medium
CN112328149B (en) Picture format setting method and device and electronic equipment
CN112084151A (en) File processing method and device and electronic equipment
CN114416664A (en) Information display method, information display device, electronic apparatus, and readable storage medium
CN114398128A (en) Information display method and device
CN113885743A (en) Text content selection method and device
CN113783770A (en) Image sharing method, image sharing device and electronic equipment
CN112181570A (en) Background task display method and device and electronic equipment
CN111967430A (en) Message processing method and device, electronic equipment and readable storage medium
CN112818094A (en) Chat content processing method and device and electronic equipment
CN112416143A (en) Text information editing method and device and electronic equipment
CN111796733A (en) Image display method, image display device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant