CN111061530A - Image processing method, electronic device and storage medium - Google Patents

Image processing method, electronic device and storage medium

Info

Publication number
CN111061530A
Authority
CN
China
Prior art keywords
image
input
information
target
application program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911234273.1A
Other languages
Chinese (zh)
Inventor
郑伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911234273.1A priority Critical patent/CN111061530A/en
Publication of CN111061530A publication Critical patent/CN111061530A/en
Priority to PCT/CN2020/132640 priority patent/WO2021109960A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention discloses an image processing method, relates to the field of communications technologies, and aims to solve the problem that the operation process for viewing application program information related to an image is cumbersome. The method comprises the following steps: receiving a first input of a user; displaying a first image in response to the first input; and displaying a target image, wherein the target image comprises the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image.

Description

Image processing method, electronic device and storage medium
Technical Field
Embodiments of the present invention relate to the field of communications technologies, and in particular, to an image processing method, an electronic device, and a storage medium.
Background
Electronic devices and various applications are increasingly popular, and in the course of using an electronic device, some information needs to be saved or shared in the form of images.
When viewing an image, a user may want to view information from a particular application program, which requires opening that application program. For example, when viewing a screenshot of a train ticket or an air ticket from a trip App, if the user wants to obtain transit information for the station or airport, the user must open a transit App to query it, while taking care not to forget the screenshot that needs to be kept; if the user also wants to know the weather on the day of the trip, the user must open a weather App as well. The user therefore has to switch back and forth between different programs, and the operation process is cumbersome.
Disclosure of Invention
The embodiment of the invention provides an image processing method and an electronic device, and aims to solve the problem that the operation process for viewing application program information related to an image is cumbersome.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to an electronic device, and the method includes: receiving a first input of a user; displaying a first image in response to the first input; and displaying a target image, wherein the target image comprises the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image.
In a second aspect, an embodiment of the present invention provides an electronic device, including a receiving module and a display module. The receiving module is configured to receive a first input of a user; the display module is configured to display a first image in response to the first input; and the display module is further configured to display a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method as in the first aspect.
In the embodiment of the invention, a first input of a user is received; a first image is displayed in response to the first input; and a target image is displayed, wherein the target image comprises the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image. In other words, the first image is displayed upon receiving the first input of the user; the first application program is then determined based on the first image; the second image is generated by taking a screenshot of the first information of the first application program; and finally the target image comprising the first image and the second image is displayed. In this way, the user can view the first application program information related to the first image without opening the first application program, which simplifies the user's operations. In addition, because the first image and the second image are displayed in a single image, the user can view the first application program information conveniently.
Drawings
FIG. 1 is a schematic diagram of the architecture of a possible android operating system according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an electronic device displaying a first image according to an embodiment of the present invention;
FIG. 4 is a first schematic diagram of an electronic device displaying a target image according to an embodiment of the present invention;
FIG. 5 is a second schematic diagram of an electronic device displaying a target image according to an embodiment of the present invention;
FIG. 6 is a third schematic diagram of an electronic device displaying a target image according to an embodiment of the present invention;
FIG. 7 is a fourth schematic diagram of an electronic device displaying a target image according to an embodiment of the present invention;
FIG. 8 is a fifth schematic diagram of an electronic device displaying a target image according to an embodiment of the present invention;
FIG. 9 is a sixth schematic diagram of an electronic device displaying a target image according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to serve as examples, illustrations, or descriptions. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present the relevant concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the invention provides an image processing method, wherein electronic equipment can receive first input of a user; displaying a first image in response to the first input; displaying a target image, wherein the target image comprises the first image and a second image, the second image is generated by a first information screenshot of a first application program, and the first application program is determined based on the first image. Therefore, the first application program information related to the first image can be conveniently viewed through the scheme.
The following describes a software environment applied to the image processing method provided by the embodiment of the present invention, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application program layer, an application program framework layer, a system runtime library layer and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides core system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention can be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
An execution subject of the image processing method provided in the embodiment of the present invention may be the electronic device (including a mobile electronic device and a non-mobile electronic device) described above, or may also be a functional module and/or a functional entity capable of implementing the method in the electronic device, and specifically may be determined according to actual use requirements, which is not limited in the embodiment of the present invention. The following takes an electronic device as an example to exemplarily describe the image processing method provided by the embodiment of the present invention.
Referring to fig. 2, an embodiment of the present invention provides an image processing method applied to an electronic device, and the method may include steps 201 to 203 described below.
Step 201, receiving a first input of a user;
optionally, the first input includes, but is not limited to, a touch input and/or a voice input. The touch input described herein may be a touch input on a target object, and specifically includes a slide input, a drag input, a single-click input, a double-click input, a rotation input, a long-press input, or the like on the target object. The input may be a single-point touch input, such as a slide input, drag input, rotation input, or double-click input performed on the target object with a single finger; it may also be a multi-point touch input, such as a slide input, drag input, double-click input, rotation input, or long-press input performed on the target object with two fingers. The first input may also be a first operation.
The target object may include, but is not limited to, a virtual key, a physical key, any interface of any application, and the like.
Step 202, responding to the first input, and displaying a first image;
optionally, the first image comprises: an image generated by taking a screenshot of fifth information of a fifth application program, a captured image, a pre-stored image, or an image sent by a target device.
Optionally, the fifth information of the fifth application may include, but is not limited to, content information displayed in interfaces such as a main interface, a function interface, or a shortcut interface of the fifth application.
For example, the first image may be an image generated by taking a screenshot of the fifth information of the fifth application program, such as an image generated when the user screenshots the interface while viewing train ticket information in that application program. The first image may be a captured image, such as a photograph of a train ticket taken by the user. The first image may be a pre-stored image, such as any image in an album. The first image may also be an image sent by a target device, such as an image sent by the other party from the target device while chatting in a chat application. Of course, the first image is not limited to the cases listed above, and may be determined according to the actual situation.
Step 203, displaying a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image.
Optionally, the first application program being determined based on the first image may mean that the content of the first image is identified by an image recognition technology and the first application program is determined from that image content. Other means of determination are also within the scope of the present invention.
Optionally, the second image is generated by screenshot of first information of the first application program, where the first information of the first application program may include, but is not limited to, content information displayed in a main interface, a function interface, a shortcut interface, and the like of the first application program.
For example, as shown in fig. 3, the first image 301 is an image generated by taking a screenshot of the query interface when the user queries train ticket information in a travel App. Based on the first image 301, the departure time in the first image 301 can be identified, and based on the departure date, the first application program can be determined to be a weather application program according to a pre-stored association relationship between image content and application programs. It should be noted that the association relationship may also be set by the user and may be determined according to the actual situation. The second image may be an image generated by taking a screenshot of the weather information interface when the weather App is queried for January 1; this second image is shown as 401 in fig. 4.
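The recognition step in this example can be sketched as pulling a departure date out of the OCR text recognized from the ticket screenshot. The following is a minimal illustrative sketch, not the patented implementation; the helper name and the date formats handled are assumptions.

```python
import re
from typing import Optional

def extract_departure_date(ocr_text: str) -> Optional[str]:
    """Pull a departure date such as '2019-01-01' or '1月1日' out of OCR
    text recognized from a ticket screenshot (hypothetical helper)."""
    # ISO-style date, e.g. 2019-01-01
    m = re.search(r"\d{4}-\d{2}-\d{2}", ocr_text)
    if m:
        return m.group(0)
    # Chinese month/day notation, e.g. 1月1日
    m = re.search(r"\d{1,2}月\d{1,2}日", ocr_text)
    if m:
        return m.group(0)
    return None

print(extract_departure_date("G102 Beijing -> Shanghai, departs 2019-01-01 08:00"))
# 2019-01-01
```

A date found this way is the keyword that associates the first image with a weather-type first application program.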
Optionally, the target image comprises a first image and a second image. Illustratively, as shown in FIG. 4, the target image 402 includes a first image 301 and a second image 401.
In the embodiment of the invention, the first input of the user is received; displaying a first image in response to the first input; and displaying a target image, wherein the target image comprises the first image and a second image, the second image is generated by a first information screenshot of a first application program, the first application program is determined based on the first image, namely, the first image is displayed by receiving first input of a user, then the first application program is determined based on the first image, the second image is generated by the first information screenshot of the first application program, and finally the target image comprising the first image and the second image is displayed. Therefore, the user can conveniently view the first application program information related to the first image without opening the first application program, the operation of the user is simplified, and in addition, the first image and the second image can be displayed in one image, so that the user can conveniently view the first application program information.
Optionally, before the step 203 displays the target image, the method may further include:
step 2031, acquiring a first image content of the first image;
alternatively, the first image content may be a first image content characteristic, which may include, but is not limited to, characters, scenes, people, plants, animals, clothes, and the like, and may be determined according to actual situations.
Alternatively, the first image content of the first image may be obtained by performing image recognition on the first image according to an image recognition technology. Of course, other acquisition methods are also included in the scope of the present invention.
Step 2032, determining first information of the first application program based on the first image content;
optionally, the first information of the first application program may be determined according to a pre-stored association relationship between the first image content and application programs. It should be noted that the association relationship may also be set by the user and may be determined according to the actual situation. Illustratively, if the first image content is clothing, it can be associated with a shopping App; if it is a scene, with a travel App; if it is a plant, with a search App. If the first image content is text, a search can be performed based on keywords in the text: for example, a date keyword in the text can be associated with a weather App, and a place keyword can be associated with a trip App. Other ways of determination are also within the scope of the embodiments of the present invention.
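The pre-stored association relationship described above can be sketched as a simple lookup table. This is an illustrative sketch only; the table contents, the fallback choice, and the function name are assumptions, and the embodiment allows the table to be user-configured.

```python
# Hypothetical association table between recognized first image content
# and application program categories, as the embodiment describes.
CONTENT_APP_ASSOCIATION = {
    "clothing": "shopping App",
    "scene": "travel App",
    "plant": "search App",
    "date_keyword": "weather App",
    "place_keyword": "trip App",
}

def determine_first_application(image_content: str) -> str:
    """Look up the application program associated with the recognized
    content, falling back to a generic search App when nothing matches."""
    return CONTENT_APP_ASSOCIATION.get(image_content, "search App")

print(determine_first_application("date_keyword"))  # weather App
```

A user-settable association would simply replace or extend this table at runtime.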
Optionally, the first information of the first application program includes, but is not limited to, content information displayed on the main interface, a function interface, a shortcut interface, or the like of the first application program, where the function interface may be a function interface associated with the first image content. For example, if the text in the first image content is January 1, 2019, the determined first application program is a weather App, and the function interface of the first application program may be the weather information of the weather App for January 1, 2019.
Step 2033, generating a second image by capturing the first information of the first application program;
illustratively, as shown in fig. 3, the first image 301 is generated by taking a screenshot of the query interface when querying train ticket information in a travel App. The first image 301 may be recognized to obtain the first image content, such as the departure time, origin station, and destination station. Based on the first image content, if the departure time is January 1, 2019, the first application program can be determined to be a weather App according to the pre-stored association relationship between the first image content and application programs. The first information of the first application program is thus determined to be the weather information of the weather App for January 1, and a screenshot of this first information can be taken to generate the second image.
Step 2034, synthesizing the first image and the second image into the target image. In this way, the first information of the first application program is determined from the first image content of the first image, a screenshot of the first information of the first application program is then taken to generate the second image, and finally the first image and the second image are synthesized into the target image. Because the related information is saved in a single image, the user can view it conveniently: the user can directly view the first image and the second image in the target image without opening the first application program to view its first information, which simplifies the user's operation steps.
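Steps 2031 to 2034 can be read as a four-stage pipeline. The sketch below captures only the control flow; each stage is an injected callable standing in for the device-specific implementation (recognition, association lookup, screenshot, compositing), and all names are assumptions.

```python
def build_target_image(first_image, recognize, determine_info, screenshot, compose):
    """Steps 2031-2034 of the method as a pipeline of injected stages."""
    content = recognize(first_image)            # step 2031: acquire first image content
    first_info = determine_info(content)        # step 2032: determine first information
    second_image = screenshot(first_info)       # step 2033: screenshot -> second image
    return compose(first_image, second_image)   # step 2034: synthesize target image

# Toy stand-ins to exercise the flow:
target = build_target_image(
    "ticket.png",
    recognize=lambda img: "2019-01-01",
    determine_info=lambda c: f"weather on {c}",
    screenshot=lambda info: f"shot({info})",
    compose=lambda a, b: (a, b),
)
print(target)  # ('ticket.png', 'shot(weather on 2019-01-01)')
```

On a real device the screenshot stage would use the platform's screen-capture facility, and compositing would operate on pixel buffers rather than strings.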
Optionally, when the first image and the second image are synthesized into the target image, the first image and the second image may be stitched directly, or the first image or the second image may be edited before stitching, where the editing may include, but is not limited to, adjusting parameters such as enlarging, reducing, cropping, rotating, sharpening, defogging, mosaicing, adding a filter, adding a special effect, or adding text. For example, the first image may be enlarged and then stitched with the second image; the second image may be reduced and then stitched with the first image; or both images may be reduced and then stitched. Other ways of synthesizing the first image and the second image into the target image are also within the scope of the present invention.
It will be appreciated that the first image and the second image may be stitched in any manner. Illustratively, the first image and the second image may partially overlap, where the overlapping area may be a default or may be set by the user and may be determined according to the actual situation; as shown in fig. 5, 301 is the first image, 401 is the second image, and the target image 402 is obtained after 301 and 401 are partially overlapped and stitched. The first image and the second image may also completely overlap. The first image and the second image may be adjacent to each other; as shown in fig. 6, 301 is the first image, 401 is the second image, and the target image 402 is obtained after 301 and 401 are stitched edge to edge. The first image and the second image may also be separated by a preset distance, where the preset distance may be a default or set by the user and may be determined according to the actual situation; as shown in fig. 4, 301 is the first image, 401 is the second image, there is a preset distance between 301 and 401, and the target image 402 is obtained after stitching.
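The three stitching layouts above differ only in how the canvas size is computed. The following sketch works that arithmetic out for a horizontal stitch; the function name, mode strings, and horizontal orientation are illustrative assumptions.

```python
def stitched_width(w1: int, w2: int, mode: str, amount: int = 0) -> int:
    """Canvas width for a horizontal stitch of images of widths w1 and w2.
    mode 'overlap': the images share `amount` pixels (fig. 5 layout);
    mode 'adjacent': the images meet edge to edge (fig. 6 layout);
    mode 'gap': the images are separated by a preset distance `amount`
    (fig. 4 layout)."""
    if mode == "overlap":
        return w1 + w2 - amount
    if mode == "adjacent":
        return w1 + w2
    if mode == "gap":
        return w1 + w2 + amount
    raise ValueError(f"unknown stitch mode: {mode}")

print(stitched_width(300, 200, "overlap", 50))  # 450
print(stitched_width(300, 200, "adjacent"))     # 500
print(stitched_width(300, 200, "gap", 20))      # 520
```

In an actual implementation this width (and the analogous height) would size the target canvas onto which both images are pasted at their respective offsets.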
Optionally, before taking the screenshot of the first information of the first application program to generate the second image, the method further includes: displaying a first interface of the first application program, where the first interface is an interface used for displaying the first information in the first application program. This facilitates the user's next operation.
Optionally, the first interface of the first application may include, but is not limited to, a main interface, a function interface, a shortcut interface, etc. of the first application.
Optionally, the first interface is the interface used for displaying the first information in the first application program. It should be noted that, when the first information is already displayed on the first interface, the screenshot of the first information can be taken automatically to generate the second image, which simplifies the user's operation. When the first information is not displayed on the first interface, the first interface is displayed, which facilitates the user's next operation.
Illustratively, the first application program is a weather App, and the first information of the first application program is the weather information of the weather App for January 1, 2019. Before the screenshot of the first information of the first application program is taken to generate the second image, the main interface of the weather App may be displayed, and the first information may be displayed on that main interface.
Optionally, after displaying the first interface of the first application program, the method further includes:
receiving a second input of the first interface from the user;
optionally, the second input may include, but is not limited to, a drag input, a click input, a long press input, a hover touch, and the like, and may also be a second operation.
Displaying the first information on the first interface in response to the second input. In this way, the first information may be determined from the user's input, and thus the second image may be determined.
Illustratively, the first application program may be a weather program, and the first interface of the first application program may be the main interface of the weather program. On the main interface, the user may select the date January 1 and the city C, and the first information, that is, the weather information for January 1 in city C, is displayed on the first interface.
For another example, the first application program may be a map program, and the first interface of the first application program is the main interface of the map program. The user may input a starting point, an end point, and so on on the main interface and query a route from the starting point to the end point; this route information may be the first information, which is displayed on the first interface.
Optionally, after the displaying the target image, the method further includes:
receiving a third input of the user;
alternatively, the third input may be an input for triggering updating of the target image, the third input may include, but is not limited to, a drag input, a click input, a long press input, a hover touch input, and the like, and the third input may also be a third operation. If an update control is displayed, the user may click on the control to trigger updating the target image.
Optionally, after triggering to update the target image, the target image may be made to be in an editable state, and the user may edit the target image.
In response to the third input, updating the target image. Thus, the target image can be updated conveniently.
Optionally, updating the target image may mean updating only the first image, updating only the second image, updating both the first image and the second image, or updating the entire target image. Of course, updating the target image is not limited to the cases listed above and may be determined according to the actual situation; this is not limited in any way by the embodiment of the present invention.
Optionally, the updating the target image includes: updating display positions of the first image and the second image. In this way, the display positions of the first image and the second image can be updated easily.
Optionally, here, the third input is used to update the display positions of the first image and the second image. The third input may include, but is not limited to, a drag input, a click input, a long press input, a hover touch input, and the like, and may also be a third operation. Illustratively, the third input may be the user clicking a control for updating the display positions of the first image and the second image, long-pressing the target image, clicking the target image, or the like. Optionally, the user may update the display positions by dragging the first image or the second image while the target image is in an editable state.
Optionally, updating the display positions of the first image and the second image may be interchanging the display positions of the two images, or changing their arrangement. For example, if the first image is located in a first column and the second image in a second column, then after the update the first image is located in a first row and the second image in a second row. Alternatively, the positional relationship between the first image and the second image may be updated: for example, if the first image and the second image partially overlap, then after the update the two images no longer overlap and are separated by a certain distance. Of course, updating the display positions of the first image and the second image is not limited to the cases listed above and may be determined according to actual situations.
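The position-update variants described above (interchanging positions, rearranging columns into rows, removing an overlap) can be sketched with simple rectangle arithmetic. This is a minimal illustration, not the patented implementation; the rectangle representation and function names are assumptions.

```python
# Hypothetical rectangle arithmetic for the display-position updates.
# Each image is modeled as a rectangle (x, y, w, h); names are illustrative.

def swap_positions(first, second):
    """Interchange the display positions of the first and second images
    while each image keeps its own size."""
    fx, fy, fw, fh = first
    sx, sy, sw, sh = second
    return (sx, sy, fw, fh), (fx, fy, sw, sh)

def stack_vertically(first, second, gap=0):
    """Rearrange the two images into two rows (first on top), removing
    any overlap and separating them by `gap` pixels."""
    fw, fh = first[2], first[3]
    sw, sh = second[2], second[3]
    return (0, 0, fw, fh), (0, fh + gap, sw, sh)
```

For instance, two overlapping rectangles passed to `stack_vertically` come back non-overlapping, with the second image placed `gap` pixels below the first.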
Optionally, before updating the target image, the method further includes:
acquiring second image content of the second image or third image content of the first image;
Optionally, obtaining the second image content of the second image or the third image content of the first image may be recognizing the second image by an image recognition technology to obtain the second image content, or recognizing the first image by the image recognition technology to obtain the third image content.
Determining second information of a second application program based on the second image content or third image content of the first image;
optionally, the second information of the second application may include, but is not limited to, content information displayed in a main interface, a function interface, a shortcut interface, and the like of the second application.
generating a third image by taking a screenshot of the second information of the second application program;
The updating the target image includes: synthesizing the target image and the third image into a fourth image, and updating the target image to the fourth image. In this way, the second information of the second application program is determined according to the second image content of the second image or the third image content of the first image, the third image is generated by screenshot, the target image and the third image are synthesized into the fourth image, and finally the target image is updated to the fourth image. Thus, the target information of a plurality of application programs can be displayed more intuitively in one image, and the operation steps when the user views target information of different application programs can be simplified.
Optionally, the target image and the third image are synthesized into a fourth image, and the fourth image may be obtained by stitching the target image and the third image in any form.
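Stitching "in any form" still requires computing a canvas large enough to hold every component. Below is a minimal sketch of a top-to-bottom stitch, assuming each image is described only by its (width, height); all names are illustrative, not the actual synthesis routine.

```python
def stitch_vertically(sizes):
    """Given (width, height) pairs for the images to be stitched, return
    the canvas size of the synthesized image and the top-left paste
    offset of each component (stacked top-to-bottom, left-aligned)."""
    canvas_w = max(w for w, _ in sizes)  # canvas must fit the widest image
    offsets, y = [], 0
    for _, h in sizes:
        offsets.append((0, y))           # left-aligned, below the previous image
        y += h
    return (canvas_w, y), offsets
```

In an actual implementation the returned offsets would be fed to an image library's paste operation to produce the fourth image.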
Optionally, here, the third input is used to update the target image. The third input may include, but is not limited to, a drag input, a click input, a long press input, a hover touch input, and the like, and may also be a third operation. Illustratively, the third input may be the user clicking a control for updating the target image, long-pressing the target image, clicking the target image, or the like; alternatively, the user may click a "get more images" control to obtain a third image.
Illustratively, as shown in fig. 3, by recognizing the first image 301, the originating station A, the terminating station B, and the departure time of January 1, 2019 can be obtained. According to the originating station, it can be determined that the first application program is a travel App, and as shown in fig. 7, the second image 701 may be an image generated by a screenshot of the interface on which the travel App is queried with originating station C, terminating station A, and date January 1, 2019. The target image 702 can be obtained by stitching the first image 301 and the second image 701. The second image 701 may be subjected to image recognition to obtain the second image content of the second image 701. Based on the second image content, such as the originating station being C, it may be determined that the second application program is a travel class App. A route from D to C queried in the travel class App can be opened, a screenshot of the interface is taken, and a third image is generated. The third image 801 and the target image 702 are synthesized to obtain a fourth image 802, and the target image 702 is updated to the fourth image 802, as shown in fig. 8.
For example, as shown in fig. 3, by recognizing the first image 301, the departure time of January 1, 2019, the originating station A, and the terminating station B can be obtained. According to the originating station, it can be determined that the first application program is a travel App, and as shown in fig. 7, the second image 701 may be an image generated by a screenshot of the interface on which the travel App is queried with originating station C, terminating station A, and date January 1, 2019. The target image 702 can be obtained by stitching the first image 301 and the second image 701. The first image 301 may be subjected to image recognition to obtain the third image content of the first image 301. Based on the third image content, such as the date January 1, 2019, it may be determined that the second application program is a weather class App. The weather class App is opened to query the weather information for January 1, 2019, a screenshot of the query interface is taken, and a third image is generated. The target image 702 and the third image are synthesized into a fourth image 902, and the target image is updated to the fourth image, as shown in fig. 9.
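Both examples above turn recognized image content into a choice of application program: a station field suggests a travel App, while a date field suggests a weather App. The following is a hypothetical rule-based sketch of that determination step; the field names and rules are assumptions for illustration, not the actual recognition logic.

```python
def determine_application(recognized):
    """Map fields recognized from an image (e.g., via OCR) to a likely
    application program. Rules mirror the examples in the text: a
    station field suggests a travel App, a bare date a weather App."""
    if "originating_station" in recognized or "terminating_station" in recognized:
        return "travel App"
    if "date" in recognized:
        return "weather App"
    return None  # no rule matched; the application cannot be determined
```

Under these rules a train-ticket image containing both a station and a date resolves to the travel App, since the station rule is checked first.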
Optionally, before the updating the target image, the method further includes: determining third information of the target application program based on the first information;
Optionally, here, the third input is used to update the second image. The third input may include, but is not limited to, a drag input, a click input, a long press input, a hover touch input, and the like, and may also be a third operation. Illustratively, the third input may be the user clicking a control for updating the target image, long-pressing the target image, clicking the target image, and so on. For example, the third input may be a click input on a control for updating the second image, or a touch input on the second image after the user clicks a control for updating the target image and the target image enters an editable state. The specifics may be determined according to actual situations.
Optionally, the third information of the target application program may include, but is not limited to, content information displayed in a main interface, a function interface, a shortcut interface, and the like of the target application program.
generating a fifth image by taking a screenshot of the third information of the target application program;
the updating the target image comprises: updating the second image to the fifth image;
wherein the target application is the first application or a third application. In this way, the second image in the target image can be updated conveniently.
Illustratively, the first application program is a weather application program, and the first information is the weather information for January 1, that is, the second image is generated by a screenshot of the weather information for January 1. Based on the first information, it is determined that the third information of the first application program is the weather information for January 2; a screenshot of this information is taken to generate a fifth image, and the second image in the target image is updated to the fifth image.
Illustratively, as shown in fig. 6, the target image 402 includes a first image 301 and a second image 401. It may be determined based on the first information that the third application program is a map program, and the third information of the third application program may be train information with originating station C and terminating station A. A fifth image is generated by taking a screenshot of the third information of the third application program, and the second image 401 in fig. 6 is updated to the fifth image 701, as shown in fig. 7.
Illustratively, the first application program is a weather application program, and the first information is the weather information for January 1, that is, the second image is generated by a screenshot of the weather information interface for January 1. After the user performs a left-slide operation on the second image, it may be determined based on the first information that the third information of the first application program is the weather information for January 2; a screenshot of this information is taken to generate a fifth image, and the second image in the target image is updated to the fifth image.
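Updating only the second image inside the composite target image amounts to replacing one component while leaving the others untouched. A minimal sketch follows, modeling the target image as a mapping from component names to image payloads; this representation and the names are assumptions for illustration.

```python
def update_component(target_layers, name, new_image):
    """Replace one component image of the composite target image; e.g.,
    replacing 'second' with the fifth image models updating the second
    image in the target image to the fifth image."""
    if name not in target_layers:
        raise KeyError(f"no component named {name!r} in target image")
    updated = dict(target_layers)  # copy so the original composite is left intact
    updated[name] = new_image
    return updated
```

After the replacement, the composite would be re-rendered (e.g., re-stitched) to produce the displayed target image.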
Optionally, before the updating the target image, the method further includes:
acquiring a sixth image;
wherein the sixth image includes: an image generated by taking a screenshot of fourth information of a fourth application program, a shot image, a pre-stored image, or an image sent by a target device.
Optionally, here, the third input is used to update the first image. The third input may include, but is not limited to, a drag input, a click input, a long press input, a hover touch input, and the like, and may also be a third operation. For example, the third input may be the user clicking a control for updating the first image, or the user long-pressing or clicking the target image so that the target image enters an editable state and then clicking the first image, which may be determined according to the actual situation.
For example, the sixth image may be an image generated by a screenshot of the fourth information of a fourth application program, such as an image generated by a screenshot of the interface when the user views train ticket information in the travel class App. The sixth image may be a shot image: for example, if the user clicks the first image, the camera program is opened, and the shot image is the sixth image. The sixth image may be a pre-stored image: for example, after clicking the first image, the user may select an arbitrary image from the album program as the sixth image. The sixth image may also be an image sent by a target device, such as an image sent by the other party through the target device while chatting in a chat program. Of course, the sixth image is not limited to the cases listed above and may be determined according to actual situations.
The updating the target image comprises:
updating the first image to the sixth image. In this way, the first image in the target image can be updated according to the input of the user.
For example, if the first image is an image obtained by photographing a train ticket, the sixth image may also be an image obtained by photographing: if the user photographs the train ticket again to obtain the sixth image, the first image is updated to the sixth image.
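The four listed sources of the sixth image (application screenshot, camera, album, image received from a target device) can be sketched as a simple dispatch. The source labels and detail fields below are hypothetical placeholders, not part of the described method.

```python
def acquire_sixth_image(source, **details):
    """Dispatch on the four listed sources of the sixth image and return
    a tagged payload describing where the image came from."""
    if source == "screenshot":   # screenshot of a fourth application's fourth information
        return ("screenshot", details["app"], details["info"])
    if source == "camera":       # newly shot image
        return ("camera", details["photo"])
    if source == "album":        # pre-stored image
        return ("album", details["path"])
    if source == "received":     # image sent by a target device
        return ("received", details["sender"], details["data"])
    raise ValueError(f"unknown source: {source}")
```

A real device would route each branch to the corresponding subsystem (screen capture, camera program, album program, messaging channel).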
Optionally, after the displaying the target image, the method further includes:
receiving a fourth input of the second image by the user;
optionally, the fourth input includes, but is not limited to, at least one of a drag input, a click input, a long press input, a hover touch, and the like, and the fourth input may also be a fourth operation.
In response to the fourth input, displaying a first interface of the first application program, where the first interface is an interface used for displaying the first information in the first application program. In this way, the first interface of the first application program can be displayed according to the user's input on the second image.
Optionally, the first interface is an interface used for displaying the first information in the first application.
For example, as shown in fig. 3, the first image 301 is an image generated by a screenshot of the query interface when the user uses the travel class App to query train ticket information. Based on the first image 301, the departure time in the first image 301 can be recognized, and based on the departure date it can be determined that the first application program is a weather class App. The second image may be an image generated by a screenshot of the interface on which the weather App is queried for the weather information for January 1, shown as the second image 401 in fig. 4. When the user clicks the second image 401 in fig. 4, the weather information for January 1 may be displayed. It should be noted that the displayed weather interface may differ from the second image, for example by showing the latest weather information for January 1.
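Routing the fourth input (a tap on the second image) versus the fifth input (a tap on the first image) requires knowing which component rectangle of the target image the tap landed on. A minimal hit-test sketch follows, assuming the layout is a mapping from component names to (x, y, w, h) rectangles; the representation is an assumption for illustration.

```python
def hit_test(tap, layout):
    """Return the name of the component image of the target image that a
    tap landed on, or None if the tap missed every component. `layout`
    maps component names to their (x, y, w, h) display rectangles."""
    tx, ty = tap
    for name, (x, y, w, h) in layout.items():
        if x <= tx < x + w and y <= ty < y + h:
            return name
    return None
```

A tap resolved to `"second"` would then trigger displaying the first interface of the first application program, and a tap resolved to `"first"` the interface for acquiring the first image.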
Optionally, after displaying the target image, the method may further include: receiving a fifth input of the first image by the user;
optionally, the fifth input includes, but is not limited to, at least one of a drag input, a click input, a long press input, a hover touch, and the like, and the fifth input may also be a fifth operation.
In response to the fifth input, displaying a second interface of the fifth application program, where the second interface is an interface for acquiring the first image. In this way, the second interface of the fifth application program can be displayed according to the user's input on the first image.
Optionally, the second interface is an interface for acquiring the first image, and the second interface includes: a third interface of the fifth application program, a shooting interface, a storage interface, or a sending interface, where the third interface is an interface on which fifth information is displayed and whose screenshot yields the first image, the shooting interface is an interface for shooting the first image, the storage interface is an interface on which the first image is pre-stored, and the sending interface is an interface through which the target device sends the first image.
For example, if the first image is a screenshot of an interface for querying weather information in a weather class App, clicking the first image in the target image can display any interface of the weather class App. If the first image is a shot image, clicking the first image in the target image can display any interface of the camera program. If the first image is an image in the album, clicking the first image in the target image can display any interface of the album application program. Of course, the cases are not limited to those listed above and may be determined according to actual situations, which is not limited in any way by the embodiment of the present invention.
In the embodiment of the invention, a first input of a user is received; a first image is displayed in response to the first input; and a target image is displayed, where the target image includes the first image and a second image, the second image is generated by a screenshot of first information of a first application program, and the first application program is determined based on the first image. Thus, the first application program information related to the first image can be conveniently viewed without opening the first application program, which simplifies the user's operations; in addition, the first image and the second image can be displayed in one image, which is convenient for the user to view. The target image can be updated by updating the first image or the second image, or by updating the positions of the first image and the second image. Further, a corresponding interface can be displayed according to the user's input on the first image or the second image in the target image.
As shown in fig. 10, an embodiment of the present invention provides an electronic device 120, where the electronic device 120 includes: a receiving module 121 and a display module 122.
The receiving module is used for receiving a first input of a user; the display module is used for displaying a first image in response to the first input; the display module is further configured to display a target image, where the target image includes the first image and a second image, the second image is generated by a screenshot of first information of a first application program, and the first application program is determined based on the first image.
Optionally, an obtaining module is configured to obtain first image content of the first image; a determining module is configured to determine first information of the first application program based on the first image content; a generating module is configured to generate a second image by taking a screenshot of the first information of the first application program; and a synthesizing module is configured to synthesize the first image and the second image into the target image. In this way, the related information is stored in one image, which is convenient for the user to view: the user can directly view the first image and the second image in the target image without opening the first application program to view its first information, which simplifies the user's operation steps.
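The module chain described above (obtain content, determine first information, screenshot, synthesize) can be sketched as a small pipeline with the module behaviors injected as callables. Every callable here is a placeholder standing in for the corresponding module, not its actual implementation.

```python
def build_target_image(first_image, recognize, determine, screenshot, synthesize):
    """Chain the four module responsibilities: recognize the first
    image's content, determine the first information of the first
    application program, screenshot that information into a second
    image, and synthesize the two into the target image."""
    content = recognize(first_image)              # obtaining module
    first_info = determine(content)               # determining module
    second_image = screenshot(first_info)         # generating module
    return synthesize(first_image, second_image)  # synthesizing module
```

Stub callables are enough to exercise the flow, e.g. a recognizer that returns a date, a determiner that picks the weather App, and a synthesizer that pairs the two images.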
Optionally, the display module 122 is further configured to display a first interface of the first application, where the first interface is an interface used for displaying the first information in the first application.
Optionally, the receiving module 121 is further configured to receive a second input to the target interface from the user; the display module 122 is further configured to display the first information on the first interface in response to the second input. In this way, the first information may be determined from the user's input, and thus the second image may be determined.
Optionally, the receiving module 121 is further configured to receive a third input from the user; an update module to update the target image in response to the third input. Thus, the target image can be updated conveniently.
Optionally, the updating module is further configured to update display positions of the first image and the second image. In this way, the display positions of the first image and the second image can be updated easily.
Optionally, the obtaining module is further configured to obtain a second image content of the second image or a third image content of the first image; the determining module is further configured to determine second information of a second application program based on the second image content or the third image content of the first image; the generating module is further configured to generate a third image by taking a screenshot of the second information of the second application program; and a processing module is configured to synthesize the target image and the third image into a fourth image and update the target image to the fourth image. In this way, the target information of a plurality of application programs can be displayed more intuitively in one image, and the operation steps when the user views target information of different application programs can be simplified.
Optionally, the determining module is further configured to determine third information of the target application program based on the first information; the generating module is further configured to generate a fifth image by taking a screenshot of the third information of the target application program; the updating module is further configured to update the second image to the fifth image; where the target application program is the first application program or a third application program. In this way, the second image in the target image can be updated conveniently.
Optionally, the acquiring module is further configured to acquire a sixth image, where the sixth image includes: an image generated by taking a screenshot of fourth information of a fourth application program, a shot image, a pre-stored image, or an image sent by a target device. The updating module is further configured to update the first image to the sixth image.
Optionally, the first image includes: an image generated by taking a screenshot of fifth information of a fifth application program, a shot image, a pre-stored image, or an image sent by a target device.
Optionally, the receiving module 121 is further configured to receive a fourth input of the second image by the user; the display module 122 is further configured to display a first interface of a first application, where the first interface is an interface used for displaying the first information in the first application.
Optionally, the receiving module 121 is further configured to receive a fifth input of the first image by the user; the display module 122 is further configured to display a second interface of the fifth application program in response to the fifth input, where the second interface is an interface for acquiring the first image. In this way, the target interface of the corresponding application program can be displayed according to the input of the image by the user.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 2 to 9, and details are not repeated here to avoid repetition. In the embodiment of the present invention, a first input of a user is received; a first image is displayed in response to the first input; and a target image is displayed, where the target image includes the first image and a second image, the second image is generated by a screenshot of first information of a first application program, and the first application program is determined based on the first image. That is, the first image is displayed upon receiving a first input of the user, the first application program is then determined based on the first image, the second image is generated by a screenshot of the first information of the first application program, and finally the target image including the first image and the second image is displayed. In this way, the first application program information related to the first image can be conveniently viewed.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 11, the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 11 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The user input unit 107 is used for receiving a first input of a user; the display unit 106 is used for displaying a first image in response to the first input; the display unit 106 is further configured to display a target image, where the target image includes the first image and a second image, the second image is generated by a screenshot of first information of a first application program, and the first application program is determined based on the first image.
According to the electronic device provided by the embodiment of the present invention, the electronic device can receive a first input of a user; display a first image in response to the first input; and display a target image, where the target image includes the first image and a second image, the second image is generated by a screenshot of first information of a first application program, and the first application program is determined based on the first image. That is, the first image is displayed upon receiving a first input of the user, the first application program is then determined based on the first image, the second image is generated by a screenshot of the first information of the first application program, and finally the target image including the first image and the second image is displayed. In this way, the first application program information related to the first image can be conveniently viewed.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on the touch panel 1071 or near the touch panel 1071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 11, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
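The division of the memory 109 into a storage program area and a storage data area can be pictured with a minimal sketch. The dictionary layout and helper below are illustrative stand-ins, not an actual memory implementation.

```python
# Minimal sketch of the memory layout described above: a storage
# program area (operating system, function-specific applications)
# and a storage data area (data created during use). The structure
# and names are assumptions for illustration only.

memory = {
    "program_area": {
        "operating_system": "OS image",
        "applications": ["sound_playing", "image_playing"],
    },
    "data_area": {
        "audio_data": [],
        "phonebook": {},
    },
}

def store_user_data(mem, key, value):
    """Data created during use of the device goes into the data area."""
    mem["data_area"][key] = value

store_user_data(memory, "phonebook", {"alice": "555-0100"})
```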
The processor 110 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the electronic device as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, which are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which may include the processor 110 shown in fig. 11, the memory 109, and a computer program stored on the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the image processing method shown in any one of fig. 2 to fig. 9 in the foregoing method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the image processing method shown in any one of fig. 2 to fig. 9 in the foregoing method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and it will be apparent to those skilled in the art that many further modifications and variations can be made without departing from the spirit of the invention and the scope of the appended claims.

Claims (15)

1. An image processing method applied to an electronic device, comprising:
receiving a first input of a user;
displaying a first image in response to the first input;
displaying a target image, wherein the target image comprises the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image.
2. The method of claim 1, wherein prior to displaying the target image, the method further comprises:
acquiring first image content of the first image;
determining first information of the first application program based on the first image content;
taking a screenshot of the first information of the first application program to generate the second image;
and synthesizing the first image and the second image into the target image.
3. The method of claim 2, wherein prior to taking the screenshot of the first information of the first application program to generate the second image, the method further comprises:
displaying a first interface of the first application program, wherein the first interface is an interface for displaying the first information in the first application program.
4. The method of claim 3, wherein after displaying the first interface of the first application program, the method further comprises:
receiving a second input of the first interface from the user;
displaying the first information on the first interface in response to the second input.
5. The method of claim 1, wherein after displaying the target image, the method further comprises:
receiving a third input of the user;
in response to the third input, updating the target image.
6. The method of claim 5, wherein the updating the target image comprises:
updating display positions of the first image and the second image.
7. The method of claim 5, wherein prior to updating the target image, the method further comprises:
acquiring second image content of the second image or third image content of the first image;
determining second information of a second application program based on the second image content or the third image content;
taking a screenshot of the second information of the second application program to generate a third image;
the updating the target image comprises:
synthesizing the target image and the third image into a fourth image, and updating the target image to the fourth image.
8. The method of claim 5, wherein prior to updating the target image, the method further comprises:
determining third information of a target application program based on the first information;
taking a screenshot of the third information of the target application program to generate a fifth image;
the updating the target image comprises:
updating the second image to the fifth image;
wherein the target application program is the first application program or a third application program.
9. The method of claim 5, wherein prior to updating the target image, the method further comprises:
acquiring a sixth image;
wherein the sixth image comprises: an image generated by taking a screenshot of fourth information of a fourth application program, a captured image, a pre-stored image, or an image sent by a target device;
the updating the target image comprises:
updating the first image to the sixth image.
10. The method of claim 1, wherein the first image comprises: an image generated by taking a screenshot of fifth information of a fifth application program, a captured image, a pre-stored image, or an image sent by a target device.
11. The method of claim 1, wherein after displaying the target image, the method further comprises:
receiving a fourth input of the second image by the user;
in response to the fourth input, displaying a first interface of the first application program, wherein the first interface is an interface for displaying the first information in the first application program.
12. The method of claim 1, wherein after displaying the target image, the method further comprises:
receiving a fifth input of the first image by the user;
in response to the fifth input, displaying a second interface of the fifth application program, wherein the second interface is an interface for acquiring the first image.
13. An electronic device, comprising a receiving module and a display module, wherein:
the receiving module is configured to receive a first input of a user;
the display module is configured to display a first image in response to the first input;
the display module is further configured to display a target image, wherein the target image comprises the first image and a second image, the second image is generated by taking a screenshot of first information of a first application program, and the first application program is determined based on the first image.
14. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 12.
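The core flow claimed above (claims 1–2) — display a first image, determine a first application program from its content, take a screenshot of that application's first information, and composite both images into the target image — can be sketched as follows. This is a hypothetical, simplified model: the recognizer, the content-to-application mapping, the screenshot function, and the dict-based image representation are all stand-ins for whatever an actual embodiment would use.

```python
# Hypothetical sketch of the flow in claims 1-2. Images are modeled
# as plain dicts; every function below is an illustrative stand-in.

def recognize_content(image):
    # Stand-in for image-content analysis (e.g. OCR or object detection).
    return image["content"]

def determine_application(content):
    # Assumed mapping from recognized content to an application program.
    mapping = {"train ticket": "travel_app", "movie poster": "ticket_app"}
    return mapping.get(content, "default_app")

def screenshot_first_info(app):
    # Stand-in for capturing the application's first-information interface.
    return {"content": f"{app} info screenshot"}

def synthesize(first_image, second_image):
    # Composite the two images into one target image (claim 2, last step).
    return {"parts": [first_image, second_image]}

first_image = {"content": "train ticket"}
app = determine_application(recognize_content(first_image))
second_image = screenshot_first_info(app)
target_image = synthesize(first_image, second_image)
# target_image combines the first image and the generated screenshot
```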
CN201911234273.1A 2019-12-05 2019-12-05 Image processing method, electronic device and storage medium Pending CN111061530A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911234273.1A CN111061530A (en) 2019-12-05 2019-12-05 Image processing method, electronic device and storage medium
PCT/CN2020/132640 WO2021109960A1 (en) 2019-12-05 2020-11-30 Image processing method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911234273.1A CN111061530A (en) 2019-12-05 2019-12-05 Image processing method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN111061530A (en) 2020-04-24

Family ID: 70299924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911234273.1A Pending CN111061530A (en) 2019-12-05 2019-12-05 Image processing method, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN111061530A (en)
WO (1) WO2021109960A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700302A (en) * 2020-12-29 2021-04-23 维沃移动通信有限公司 Order management method and device
WO2021109960A1 (en) * 2019-12-05 2021-06-10 维沃移动通信有限公司 Image processing method, electronic device, and storage medium
CN115080163A (en) * 2022-06-08 2022-09-20 深圳传音控股股份有限公司 Display processing method, intelligent terminal and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064627A (en) * 2013-01-11 2013-04-24 广东欧珀移动通信有限公司 Application management method and device
CN103218133A (en) * 2013-03-28 2013-07-24 东莞宇龙通信科技有限公司 Startup method of associated application program and terminal
CN103645826A (en) * 2013-11-29 2014-03-19 宇龙计算机通信科技(深圳)有限公司 Method for displaying applications on unlocking interface and intelligent terminal
WO2014117338A1 (en) * 2013-01-30 2014-08-07 东莞宇龙通信科技有限公司 Terminal and method for quickly activating application program
CN105009081A (en) * 2013-12-04 2015-10-28 华为终端有限公司 Method for determining application associated with interface element, electronic device, and server
CN105354306A (en) * 2015-11-04 2016-02-24 魅族科技(中国)有限公司 Application recommendation method and terminal
WO2016154814A1 (en) * 2015-03-27 2016-10-06 华为技术有限公司 Method and apparatus for displaying electronic picture, and mobile device
CN106055416A (en) * 2016-05-23 2016-10-26 珠海市魅族科技有限公司 Cross-application data transfer method and apparatus
CN106201161A (en) * 2014-09-23 2016-12-07 北京三星通信技术研究有限公司 The display packing of electronic equipment and system
CN106663112A (en) * 2014-11-26 2017-05-10 谷歌公司 Presenting information cards for events associated with entities
CN107870712A (en) * 2016-09-23 2018-04-03 北京搜狗科技发展有限公司 A kind of screenshot processing method and device
CN108268559A (en) * 2017-01-04 2018-07-10 阿里巴巴集团控股有限公司 Information providing method and device based on ticketing service search
CN109246464A (en) * 2018-08-22 2019-01-18 Oppo广东移动通信有限公司 Method for displaying user interface, device, terminal and storage medium
CN109407936A (en) * 2018-09-21 2019-03-01 Oppo(重庆)智能科技有限公司 Screenshot method and relevant apparatus
CN109697091A (en) * 2017-10-23 2019-04-30 腾讯科技(深圳)有限公司 Processing method, device, storage medium and the electronic device of the page
CN110297681A (en) * 2019-06-24 2019-10-01 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930263A (en) * 2012-09-27 2013-02-13 百度国际科技(深圳)有限公司 Information processing method and device
US20190114065A1 (en) * 2017-10-17 2019-04-18 Getac Technology Corporation Method for creating partial screenshot
CN107832377B (en) * 2017-10-30 2021-09-21 北京小米移动软件有限公司 Image information display method, device and system, and storage medium
CN111061530A (en) * 2019-12-05 2020-04-24 维沃移动通信有限公司 Image processing method, electronic device and storage medium



Also Published As

Publication number Publication date
WO2021109960A1 (en) 2021-06-10

Similar Documents

Publication Publication Date Title
CN110995923B (en) Screen projection control method and electronic equipment
CN111061574B (en) Object sharing method and electronic device
CN109743498B (en) Shooting parameter adjusting method and terminal equipment
CN109005286B (en) Display control method and folding screen terminal
CN110502163B (en) Terminal device control method and terminal device
CN110062105B (en) Interface display method and terminal equipment
CN111142723B (en) Icon moving method and electronic equipment
CN110489029B (en) Icon display method and terminal equipment
CN110245246B (en) Image display method and terminal equipment
CN111104029B (en) Shortcut identifier generation method, electronic device and medium
CN110752981B (en) Information control method and electronic equipment
CN109828731B (en) Searching method and terminal equipment
CN109358931B (en) Interface display method and terminal
CN111026299A (en) Information sharing method and electronic equipment
WO2021109960A1 (en) Image processing method, electronic device, and storage medium
CN110830713A (en) Zooming method and electronic equipment
CN111046211A (en) Article searching method and electronic equipment
CN111752450A (en) Display method and device and electronic equipment
CN111124231B (en) Picture generation method and electronic equipment
CN111090529B (en) Information sharing method and electronic equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN111638822A (en) Icon operation method and device and electronic equipment
CN109284146B (en) Light application starting method and mobile terminal
CN111221602A (en) Interface display method and electronic equipment
CN109067975B (en) Contact person information management method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination