WO2021109960A1 - Image processing method, electronic device, and storage medium - Google Patents

Image processing method, electronic device, and storage medium

Info

Publication number
WO2021109960A1
WO2021109960A1 (PCT/CN2020/132640)
Authority
WO
WIPO (PCT)
Prior art keywords
image
application
input
information
target
Prior art date
Application number
PCT/CN2020/132640
Other languages
English (en)
Chinese (zh)
Inventor
郑伟
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2021109960A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the embodiments of the present invention relate to the field of communication technologies, and in particular, to an image processing method, electronic equipment, and storage medium.
  • the embodiments of the present invention provide an image processing method and an electronic device to solve the problem of the cumbersome operation process of viewing image-related application program information.
  • the present invention is implemented as follows:
  • an embodiment of the present invention provides an image processing method applied to an electronic device.
  • the method includes: receiving a first input from a user; displaying a first image in response to the first input; and displaying a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image.
  • an embodiment of the present invention provides an electronic device, including a receiving module and a display module; the receiving module is configured to receive a first input from a user; the display module is configured to display a first image in response to the first input; and the display module is further configured to display a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image.
  • an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the image processing method in the first aspect are implemented.
  • an embodiment of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the image processing method in the first aspect are implemented.
  • In the embodiments of the present invention, a first input from a user is received; in response to the first input, a first image is displayed; and a target image is displayed, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image. That is, after the first input is received and the first image is displayed, the first application is determined based on the first image, a screenshot of the first information of the first application is taken to generate the second image, and finally the target image including the first image and the second image is displayed. In this way, the user can conveniently view first-application information related to the first image without having to open the first application, which simplifies the user's operations. In addition, the first image and the second image are displayed in a single image, which is convenient for the user to view.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present invention
  • FIG. 3 is one of the schematic diagrams of displaying a first image by an electronic device according to an embodiment of the present invention
  • FIG. 5 is the second schematic diagram of displaying a target image on an electronic device according to an embodiment of the present invention.
  • FIG. 6 is the third schematic diagram of displaying a target image on an electronic device according to an embodiment of the present invention.
  • FIG. 7 is a fourth schematic diagram of displaying a target image on an electronic device according to an embodiment of the present invention.
  • FIG. 8 is the fifth schematic diagram of displaying a target image by an electronic device according to an embodiment of the present invention.
  • FIG. 9 is a sixth schematic diagram of displaying a target image on an electronic device according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of hardware of an electronic device provided by an embodiment of the present invention.
  • In the embodiments of the present invention, words such as “exemplary” or “for example” are used to indicate examples, instances, or illustrations. Any embodiment or design solution described as “exemplary” or “for example” should not be construed as more preferable or advantageous than other embodiments or design solutions. To be precise, such words are used to present related concepts in a specific manner.
  • In the embodiments of the present invention, “multiple” refers to two or more; for example, multiple processing units refers to two or more processing units, and multiple elements refers to two or more elements.
  • the embodiment of the present invention provides an image processing method.
  • An electronic device can receive a first input from a user; display a first image in response to the first input; and display a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image. Therefore, through this solution, first-application information related to the first image can be conveniently viewed.
  • the following takes the Android operating system as an example to introduce the software environment to which the image processing method provided by the embodiment of the present invention is applied.
  • the architecture of the Android operating system includes 4 layers, which are: application layer, application framework layer, system runtime library layer, and kernel layer (specifically, it may be the Linux kernel layer).
  • the application framework layer is the framework of the application. Developers can develop some applications based on the application framework layer while complying with the development principles of the application framework.
  • developers can develop a software program that implements the image processing method provided by the embodiment of the present invention based on the system architecture of the Android operating system shown in FIG. 1, so that the image processing method can run on the Android operating system shown in FIG. 1. That is, the processor or the terminal can implement the image processing method provided by the embodiment of the present invention by running the software program in the Android operating system.
  • the electronic device in the embodiment of the present invention may be a mobile electronic device or a non-mobile electronic device.
  • Mobile electronic devices can be mobile phones, tablet computers, notebook computers, handheld computers, vehicle terminals, wearable devices, ultra-mobile personal computers (UMPC), netbooks, or personal digital assistants (personal digital assistants, PDAs), etc.
  • the non-mobile electronic device may be a personal computer (PC), a television (television, TV), a teller machine, or a self-service machine, etc.; the embodiment of the present invention does not specifically limit it.
  • the execution subject of the image processing method provided by the embodiment of the present invention may be the above-mentioned electronic device (including a mobile electronic device or a non-mobile electronic device), or a functional module and/or functional entity in the electronic device that can implement the method; this can be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • the following takes an electronic device as an example to exemplarily describe the image processing method provided by the embodiment of the present invention.
  • an embodiment of the present invention provides an image processing method applied to an electronic device.
  • the method may include the following steps 201 to 203.
  • Step 201: Receive the user's first input.
  • the first input includes but is not limited to: at least one of touch input and voice input.
  • the touch input described here may be a touch input to a target object, which specifically includes: sliding input, dragging input, single-click input, double-click input, rotation input, or long-press input to the target object.
  • it can be single-touch input, such as sliding, dragging, rotation, or double-tap input performed with a single finger on the target object; it can also be multi-touch input, such as sliding, dragging, double-tap, rotation, or long-press input performed with two fingers simultaneously on the target object.
  • the first input may also be the first operation.
  • Step 202: In response to the first input, display a first image.
  • the first image includes: an image generated by taking a screenshot of the fifth information of the fifth application program, an image obtained by shooting, an image stored in advance, or an image sent by the target device.
  • the fifth information of the fifth application program may include, but is not limited to, content information displayed on the main interface, function interface, or shortcut interface of the fifth application program.
  • the first image may be an image generated by taking a screenshot of the fifth information of the fifth application program, such as an image generated by taking a screenshot of the interface when the user uses the application program to view the train ticket information.
  • the first image may be an image obtained by shooting, for example, a user takes a picture of a train ticket to obtain an image.
  • the first image may be a pre-stored image, such as any image in an album.
  • the first image may also be an image sent by the target device, such as an image sent by the sender from the target device when using a chat application to chat.
  • the first image is not limited to the cases listed above, and can be specifically determined according to actual conditions.
  • Step 203: Display a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image.
  • the second image is generated by taking a screenshot of the first information of the first application, where the first information may include, but is not limited to, content information displayed on interfaces of the first application such as its main interface, function interface, or shortcut interface.
  • For example, the first image 301 is an image generated by taking a screenshot of the query interface when the user uses a travel app to query train ticket information. The first image 301 can be recognized, and based on the departure date in it, the first application can be determined to be a weather application according to a pre-stored association relationship between image content and applications. It should be noted that the association relationship can also be set by the user and can be determined according to actual conditions.
  • the second image may be an image generated by taking a screenshot of the weather information interface of the weather app for January 1; 401 in FIG. 4 is the second image.
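  • As an illustration only, the flow of steps 201 to 203 can be sketched in code. The helper names below (recognizeContent, determineApplication, captureAppInfo, composeImages) are assumptions standing in for the recognition, association, screenshot, and synthesis operations described in this document, not APIs named by the patent (Kotlin):

        import android.graphics.Bitmap
        import android.widget.ImageView

        // Hypothetical helpers; their names and signatures are illustrative only.
        fun recognizeContent(image: Bitmap): String = TODO("image recognition, e.g. OCR")
        fun determineApplication(content: String): String = TODO("pre-stored association lookup")
        fun captureAppInfo(appPackage: String): Bitmap = TODO("screenshot of the first information")
        fun composeImages(first: Bitmap, second: Bitmap): Bitmap = TODO("step 2034, sketched later")

        // Steps 201-203: after the first input, display the first image,
        // then build and display the target image.
        fun onFirstInput(firstImage: Bitmap, imageView: ImageView) {
            imageView.setImageBitmap(firstImage)          // step 202: display the first image
            val content = recognizeContent(firstImage)    // e.g. a departure date on a ticket
            val firstApp = determineApplication(content)  // first application, e.g. a weather app
            val secondImage = captureAppInfo(firstApp)    // screenshot of its first information
            val targetImage = composeImages(firstImage, secondImage)
            imageView.setImageBitmap(targetImage)         // step 203: display the target image
        }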
  • In the embodiment of the present invention, a first input from a user is received; in response to the first input, a first image is displayed; and a target image is displayed, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image. That is, after the first input is received and the first image is displayed, the first application is determined based on the first image, a screenshot of the first information of the first application is taken to generate the second image, and finally the target image including the first image and the second image is displayed. In this way, the user can conveniently view first-application information related to the first image without having to open the first application, which simplifies the user's operations. In addition, the first image and the second image are displayed in a single image, which is convenient for the user to view.
  • Optionally, before the target image is displayed in step 203, the method may further include the following steps 2031 to 2034.
  • Step 2031: Acquire first image content of the first image.
  • the first image content may be features of the first image, which may include, but are not limited to, text, scenery, people, plants, animals, clothing, etc., and can be determined according to actual conditions.
  • Step 2032: Determine first information of the first application based on the first image content.
  • the first information of the first application program may be determined according to the pre-stored association relationship between the first image content and the application program.
  • the association relationship may also be set by the user, and the specific relationship may be determined according to the actual situation.
  • For example, if the first image content is clothing, it can be associated with a shopping app; if it is scenery, it can be associated with a travel app; if it is a plant, it can be associated with a search app; if it is text, the association can be made based on keywords in the text: a date keyword can be associated with a weather app, and a location keyword with a travel app. Other determination methods are also within the protection scope of the embodiments of the present invention. A minimal sketch of such a lookup is given below.
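  • The sketch below shows one way such a pre-stored association lookup could be written; the content categories, the date pattern, the keyword rule, and the package names are all assumed placeholders, not values from the patent (Kotlin):

        // Hypothetical association table: image content -> application package.
        val associations = mapOf(
            "clothing" to "com.example.shopping",
            "scenery" to "com.example.travel",
            "plant" to "com.example.search"
        )

        // Text content is matched by keyword: a date keyword maps to a weather
        // app and a location keyword to a travel app, as described above.
        fun determineApplication(content: String): String? = when {
            associations.containsKey(content) -> associations[content]
            Regex("""\d{4}-\d{2}-\d{2}""").containsMatchIn(content) -> "com.example.weather"
            content.contains("station") -> "com.example.travel"
            else -> null
        }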
  • the first information of the first application includes, but is not limited to, content information displayed on the main interface, a function interface, or a shortcut interface of the first application, where the function interface may be an interface of a function associated with the first image content. For example, if the text in the first image content is January 1, 2019, the first application is determined to be a weather app, and the function interface of the first application can be the weather information of the weather app for January 1, 2019.
  • Step 2033: Take a screenshot of the first information of the first application to generate the second image.
  • For example, the first image 301 is generated by taking a screenshot of the query interface when a travel app is used to query train ticket information. The first image 301 can be recognized to obtain first image content such as the departure time, departure station, and destination station. Based on the pre-stored association relationship between image content and applications, the first application can be determined to be a weather app, the first information can be determined to be the weather information for January 1 queried in the weather app, and a screenshot of the first information can be taken to generate the second image.
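  • As a sketch of the screenshot step under stated assumptions: on Android, a laid-out view can be drawn into an offscreen bitmap. Capturing another application's interface would in practice require a system-level mechanism (for example, MediaProjection); the sketch below only draws a view hosted by the capturing component (Kotlin):

        import android.graphics.Bitmap
        import android.graphics.Canvas
        import android.view.View

        // Render an already laid-out view, e.g. one showing the first
        // information of the first application, into a bitmap.
        fun captureView(view: View): Bitmap {
            val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
            view.draw(Canvas(bitmap))
            return bitmap
        }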
  • Step 2034: Synthesize the first image and the second image into the target image.
  • In this way, the first information of the first application is determined based on the first image content, a screenshot of the first information is taken to generate the second image, and finally the first image and the second image are combined into the target image. The relevant information is thus saved in a single image, which is convenient for the user to view: the user does not need to open the first application to view its first information, and can directly view the first image and the second image in the target image, which simplifies the user's operation steps.
  • synthesizing the first image and the second image into the target image may be done by directly stitching the two images, or by stitching them after editing the first image or the second image. Editing may include, but is not limited to, adjusting parameters such as zooming in, zooming out, cropping, rotating, sharpening, defogging, mosaicing, adding filters, adding special effects, adding text, and so on. For example, the first image may be enlarged and then stitched with the second image, the second image may be reduced and then stitched with the first image, or both images may be reduced and then stitched.
  • other ways of synthesizing the first image and the second image into the target image are also included in the protection scope of the present invention.
  • the first image and the second image can be spliced in any form.
  • the first image and the second image may be partially overlapped.
  • the overlapped area may be the default or be set by the user, which can be determined according to the actual situation.
  • For example, 301 is the first image, 401 is the second image, and the target image 402 is obtained after 301 and 401 are partially overlapped and stitched.
  • the first image and the second image may completely overlap.
  • the first image and the second image may be adjacent.
  • For example, 301 is the first image, 401 is the second image, and the target image 402 is obtained after 301 and 401 are stitched adjacent to each other.
  • the first image and the second image may also be disjoint, that is, separated by a preset distance, where the preset distance may be a default value or set by the user and can be determined according to the actual situation. As shown in the figure, 301 is the first image, 401 is the second image, and the target image 402 is obtained after stitching them with a gap. A sketch covering these splicing forms is given below.
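  • A minimal sketch of step 2034 covering the three splicing forms just described, using a single vertical offset: a positive overlap makes the two images partially overlap, zero makes them adjacent, and a negative overlap separates them by a preset distance (Kotlin, android.graphics only):

        import android.graphics.Bitmap
        import android.graphics.Canvas

        // Stitch the first and second images vertically into the target image.
        // overlapPx > 0: partially overlapped; overlapPx == 0: adjacent;
        // overlapPx < 0: separated by a gap of -overlapPx pixels.
        fun composeImages(first: Bitmap, second: Bitmap, overlapPx: Int = 0): Bitmap {
            val width = maxOf(first.width, second.width)
            val height = first.height + second.height - overlapPx
            val target = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
            val canvas = Canvas(target)
            canvas.drawBitmap(first, 0f, 0f, null)
            canvas.drawBitmap(second, 0f, (first.height - overlapPx).toFloat(), null)
            return target
        }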
  • Optionally, before the screenshot of the first information of the first application is taken to generate the second image, the method further includes: displaying a first interface of the first application, the first interface being an interface for displaying the first information in the first application. In this way, it is convenient for the user to perform the next operation.
  • the first interface of the first application program may include, but is not limited to, the main interface, function interface, shortcut interface, etc. of the first application program.
  • the first interface is an interface for displaying the first information in the first application. It should be noted that when the first information is displayed on the first interface, a screenshot of the first information can be taken automatically to generate the second image, which simplifies the user's operation; when the first information is not displayed on the first interface, the first interface is displayed to facilitate the user's next operation.
  • For example, the first application is a weather app, and the first information of the first application is the weather information of the weather app for January 1, 2019. The main interface of the weather app may be displayed, and the first information may be displayed on the main interface.
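  • As an illustrative sketch, displaying the first interface of the first application could use a launch intent; the weather-app package name below is a placeholder, not one named by the patent (Kotlin):

        import android.content.Context

        // Bring up the first application's main interface.
        fun showFirstInterface(context: Context, packageName: String = "com.example.weather") {
            context.packageManager.getLaunchIntentForPackage(packageName)
                ?.let { context.startActivity(it) }
        }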
  • Optionally, after the first interface is displayed, the method further includes: receiving a second input from the user on the first interface; and in response to the second input, displaying the first information on the first interface. The second input may include, but is not limited to, a drag input, a click input, a long-press input, or a hover touch input; the second input may also be referred to as a second operation. In this way, the first information can be determined according to the user's input, so that the second image can be determined.
  • For example, the first application may be a weather program, and the first interface of the first application may be the main interface of the weather program. On the main interface, the user may select January 1 as the date and city C as the city; the first information displayed is then the weather information of city C for January 1.
  • For another example, the first application may be a map program, and the first interface of the first application is the main interface of the map program. The user can input a starting point and an end point on the main interface to query the route from the starting point to the end point; the queried route interface may contain the first information, and the first information is displayed on the first interface.
  • Optionally, after the target image is displayed, the method further includes: receiving a third input from the user; and updating the target image in response to the third input. The third input is an input used to trigger updating of the target image, and may include, but is not limited to, a drag input, a click input, a long-press input, or a hover touch input; the third input may also be referred to as a third operation. For example, if an update control is displayed, the user can click the control to trigger updating of the target image; alternatively, the target image can be placed in an editable state so that the user can edit the target image, after which the target image is updated. In this way, the target image can be updated conveniently.
  • updating the target image may mean updating only the first image, only the second image, both the first image and the second image, or the entire target image. Updating the target image is not limited to the situations listed above and can be determined according to the actual situation, which is not limited in the embodiment of the present invention.
  • the updating of the target image includes: updating the display positions of the first image and the second image.
  • the display positions of the first image and the second image can be updated conveniently.
  • the third input is used to update the display positions of the first image and the second image.
  • the third input may include, but is not limited to, drag input, click input, long press input, hover touch input, etc.
  • the third input may also be a third operation.
  • the third input may be that the user clicks on the control to update the display position of the first image and the second image, long presses the target image, clicks the target image, etc.
  • For example, when the target image is in an editable state, the user can drag the first image or the second image to update its display position, as sketched below.
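  • A sketch of such drag-based repositioning, assuming a custom view that owns the two constituent bitmaps; hit-testing is simplified so that dragging always moves the second image (Kotlin):

        import android.content.Context
        import android.graphics.Bitmap
        import android.graphics.Canvas
        import android.view.MotionEvent
        import android.view.View

        // Editable target image: dragging updates the second image's display
        // position, after which the composite can be re-rendered and saved.
        class TargetImageView(
            context: Context,
            private val first: Bitmap,
            private val second: Bitmap
        ) : View(context) {
            private var secondX = 0f
            private var secondY = first.height.toFloat()
            private var lastX = 0f
            private var lastY = 0f

            override fun onDraw(canvas: Canvas) {
                canvas.drawBitmap(first, 0f, 0f, null)
                canvas.drawBitmap(second, secondX, secondY, null)
            }

            override fun onTouchEvent(event: MotionEvent): Boolean {
                when (event.action) {
                    MotionEvent.ACTION_DOWN -> { lastX = event.x; lastY = event.y }
                    MotionEvent.ACTION_MOVE -> {
                        secondX += event.x - lastX
                        secondY += event.y - lastY
                        lastX = event.x
                        lastY = event.y
                        invalidate() // redraw with the updated display position
                    }
                }
                return true
            }
        }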
  • Optionally, before the target image is updated, the method further includes: acquiring second image content of the second image or third image content of the first image; determining second information of a second application based on the second image content or the third image content; and taking a screenshot of the second information of the second application to generate a third image. The second information of the second application may include, but is not limited to, content information displayed on interfaces of the second application such as its main interface, function interface, or shortcut interface.
  • the updating of the target image includes: synthesizing the target image and the third image into a fourth image, and updating the target image to the fourth image.
  • In this way, the second information of the second application is determined, the third image is generated, the target image and the third image are combined into a fourth image, and finally the target image is updated to the fourth image.
  • the target information of multiple applications can be displayed in one image more intuitively, and the operation steps for the user when viewing the target information of different applications can be simplified.
  • synthesizing the target image and the third image into a fourth image may be done by stitching the target image and the third image in any form to obtain the fourth image.
  • the third input is used to update the target image.
  • the third input may include, but is not limited to, drag input, click input, long press input, hover touch input, etc.
  • the third input may also be a third operation.
  • the third input may be that the user clicks on the control to update the target image, long presses the target image, clicks on the target image, etc.
  • the user can click on the Get More Image control to obtain the third image.
  • For example, the first image 301 is recognized, and it can be obtained that the departure station is A, the terminal station is B, and the departure time is January 1, 2019. Based on the departure station, it can be determined that the first application is a travel app. The second image 701 can show a query in the travel app for trains whose departure station is C, terminal station is A, and date is January 1, 2019, and the target image 702 can be obtained by stitching the first image 301 and the second image 701. Image recognition can then be performed on the second image 701 to obtain the second image content, and based on the second image content (for example, the departure station C), it can be determined that the second application is a travel app.
  • Similarly, by recognizing the first image 301, it can be obtained that the departure station is A, the terminal station is B, and the departure time is January 1, 2019, so the first application is a travel app. The second image 701 can show a query for trains whose departure station is C, terminal station is A, and date is January 1, 2019, and the target image 702 can be obtained by stitching the first image 301 and the second image 701. Image recognition can also be performed on the first image 301 to obtain the third image content, and based on the third image content (for example, the date January 1, 2019), the second application can be determined to be a weather application.
  • Optionally, before the target image is updated, the method further includes: determining third information of a target application based on the first information; and taking a screenshot of the third information of the target application to generate a fifth image.
  • the third input is used to update the second image.
  • the third input may include, but is not limited to, drag input, click input, long press input, hover touch input, etc.
  • the third input may also be a third operation.
  • the third input may be that the user clicks on a control to update the target image, long presses the target image, clicks on the target image, or the like.
  • the third input may be a click input on a control for updating the second image, or a touch input on the second image after the user has clicked a control for updating the target image and the target image is in an editable state; the details can be determined according to the actual situation.
  • the second information of the first application program may include, but is not limited to, content information displayed in interfaces such as the main interface, function interface, and shortcut interface of the first application program.
  • the updating of the target image includes: updating the second image to the fifth image, where the target application is the first application or a third application. In this way, the second image in the target image can be updated conveniently.
  • For example, the first application is a weather application, and the first information is the weather information for January 1; that is, the second image is generated by taking a screenshot of the weather information for January 1. If the third information of the first application is the weather information for January 2, a screenshot of this information can generate the fifth image, and the second image in the target image can be updated to the fifth image.
  • For another example, the target image 402 includes the first image 301 and the second image 401; the third application is a map program, and the third information of the third application may be the train information for departure station C and arrival station A.
  • Optionally, before the target image is updated, the method further includes: acquiring a sixth image, where the sixth image includes: an image generated by taking a screenshot of fourth information of a fourth application, an image obtained by shooting, a pre-stored image, or an image sent by a target device. The updating of the target image includes: updating the first image to the sixth image. In this case, the third input is used to update the first image.
  • the third input may include, but is not limited to, drag input, click input, long press input, hover touch input, etc.
  • the third input may also be a third operation.
  • the third input may be the user clicking a control to update the first image, or long-pressing or clicking the target image to place it in an editable state and then clicking the first image; the details can be determined according to actual conditions.
  • the sixth image may be an image generated by taking a screenshot of the fourth information of the fourth application, such as an image generated by taking a screenshot of the interface when a user uses a travel app to view train ticket information.
  • the sixth image may be an image obtained by shooting. For example, the user clicks on the first image to open a camera program, and the image obtained by taking a photo is the sixth image.
  • the sixth image can be a pre-stored image. For example, if the user clicks on the first image, any image can be selected from the album program as the sixth image.
  • the sixth image may also be an image sent by the target device, such as an image sent by the sender through the target device when using a chat program to chat.
  • the sixth image is not limited to the cases listed above, and can be specifically determined according to actual conditions.
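  • As a sketch, selecting a pre-stored image from the album as the sixth image can be done with a pick intent; the request code is arbitrary, and newer code would use the Activity Result API instead of startActivityForResult (Kotlin):

        import android.app.Activity
        import android.content.Intent
        import android.provider.MediaStore

        const val PICK_SIXTH_IMAGE = 1001 // arbitrary request code

        // Let the user pick the sixth image from the album; the chosen
        // image URI is delivered back in onActivityResult.
        fun pickSixthImage(activity: Activity) {
            val intent = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
            activity.startActivityForResult(intent, PICK_SIXTH_IMAGE)
        }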
  • Optionally, after the target image is displayed, the method further includes: receiving a fourth input from the user on the second image; and in response to the fourth input, displaying a first interface of the first application, where the first interface is an interface for displaying the first information in the first application. The fourth input includes, but is not limited to, at least one of a drag input, a click input, a long-press input, and a hover touch input; the fourth input may also be referred to as a fourth operation. In this way, the first interface of the first application can be displayed according to the user's input on the second image.
  • For example, the first image 301 is an image generated by taking a screenshot of the query interface when the user uses a travel app to query train ticket information. The first image 301 can be recognized, and based on the departure date in it, it can be determined that the first application is a weather app.
  • the second image may be an image generated by taking a screenshot of the weather information interface of the weather app for January 1, and 401 in FIG. 4 is the second image. The user clicks the second image 401 in FIG. 4 to display the weather information for January 1. It should be noted that the displayed weather interface may differ from the second image; for example, it may show the latest weather information for January 1.
  • Optionally, after the target image is displayed, the method may further include: receiving a fifth input from the user on the first image.
  • the fifth input includes but is not limited to at least one of a drag input, a click input, a long press input, and a hover touch.
  • the fifth input may also be a fifth operation.
  • In response to the fifth input, a second interface of the fifth application is displayed, where the second interface is an interface for acquiring the first image.
  • the second interface of the fifth application can be displayed according to the user's input to the first image.
  • For example, if the first image is a screenshot of a weather-information query interface of a weather app, clicking the first image in the target image can display an interface of the weather app. If the first image is a captured image, the camera program can be displayed; if the first image is an image in an album, the album program can be displayed.
  • an embodiment of the present invention provides an electronic device 120.
  • the electronic device 120 includes a receiving module 121 and a display module 122.
  • the receiving module is configured to receive a user's first input; the display module is configured to display a first image in response to the first input; the display module is also configured to display a target image, the target image It includes the first image and a second image, the second image is generated by taking a screenshot of the first information of a first application, and the first application is determined based on the first image.
  • Optionally, the electronic device further includes: an acquiring module, configured to acquire the first image content of the first image; a determining module, configured to determine the first information of the first application based on the first image content; a generating module, configured to take a screenshot of the first information of the first application to generate the second image; and a synthesis module, configured to synthesize the first image and the second image into the target image.
  • In this way, the relevant information is saved in a single image, which is convenient for the user to view. The user does not need to open the first application to view the first information of the first application, and can directly view the first image and the second image in the target image, which simplifies the user's operation steps.
  • Optionally, the receiving module 121 is further configured to receive a second input from the user on the first interface; the display module 122 is further configured to display the first information on the first interface in response to the second input.
  • the first information can be determined according to the user's input, so that the second image can be determined.
  • Optionally, the electronic device further includes an update module, configured to update the display positions of the first image and the second image. In this way, the display positions of the first image and the second image can be updated conveniently.
  • Optionally, the acquiring module is further configured to acquire second image content of the second image or third image content of the first image; the determining module is further configured to determine second information of a second application based on the second image content or the third image content; the generating module is further configured to take a screenshot of the second information of the second application to generate a third image; and a processing module is configured to synthesize the target image and the third image into a fourth image and update the target image to the fourth image.
  • the target information of multiple applications can be displayed in one image more intuitively, and the operation steps for the user when viewing the target information of different applications can be simplified.
  • Optionally, the determining module is further configured to determine third information of a target application based on the first information; the generating module is further configured to take a screenshot of the third information of the target application to generate a fifth image; and the update module is further configured to update the second image to the fifth image, where the target application is the first application or a third application. In this way, the second image in the target image can be updated conveniently.
  • Optionally, the acquiring module is further configured to acquire a sixth image, where the sixth image includes: an image generated by taking a screenshot of fourth information of a fourth application, a captured image, a pre-stored image, or an image sent by a target device.
  • the update module is also used to update the first image to the sixth image.
  • the first image includes: an image generated by taking a screenshot of the fifth information of the fifth application program, an image obtained by shooting, an image stored in advance, or an image sent by the target device.
  • the electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of FIG. 2 to FIG. 9. In order to avoid repetition, details are not described herein again.
  • With the electronic device provided by the embodiment of the present invention, a first input from a user is received; in response to the first input, a first image is displayed; and a target image is displayed, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image. That is, after the first input is received and the first image is displayed, the first application is determined based on the first image, a screenshot of the first information of the first application is taken to generate the second image, and finally the target image including the first image and the second image is displayed. In this way, the first application information related to the first image can be conveniently viewed.
  • FIG. 11 is a schematic diagram of the hardware structure of an electronic device that implements various embodiments of the present invention.
  • the electronic device 100 includes, but is not limited to, a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • the structure of the electronic device shown in FIG. 11 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
  • electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle electronic devices, wearable devices, and pedometers.
  • the user input unit 107 is used to receive the first input of the user; the display unit 106 is used to display the first image in response to the first input; the display unit 106 is also used to display the target image, the target image It includes the first image and a second image, the second image is generated by taking a screenshot of the first information of a first application, and the first application is determined based on the first image.
  • the electronic device can receive a first input from a user; display a first image in response to the first input; and display a target image, where the target image includes the first image and a second image, the second image is generated by taking a screenshot of first information of a first application, and the first application is determined based on the first image. That is, by receiving the first input of the user and displaying the first image, the first application is then determined based on the first image, a second image is generated by taking a screenshot of the first information of the first application, and finally the target image including the first image and the second image is displayed. In this way, the first application information related to the first image can be conveniently viewed.
  • the radio frequency unit 101 can be used for receiving and sending signals during information transmission and reception or during a call; specifically, downlink data from a base station is received and then processed by the processor 110, and uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides users with wireless broadband Internet access through the network module 102, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 104 is used to receive audio or video signals.
  • the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 106.
  • the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
  • the microphone 1042 can receive sound, and can process such sound into audio data.
  • In a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and then output.
  • the electronic device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved close to the ear.
  • As a motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer and tapping). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here again.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • the user input unit 107 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072.
  • the touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • the touch panel 1071 can be overlaid on the display panel 1061.
  • When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. In FIG. 11, the touch panel 1071 and the display panel 1061 serve as two independent components to implement the input and output functions of the electronic device, but in some embodiments they can be integrated to implement the input and output functions of the electronic device, which is not specifically limited here.
  • the interface unit 108 is an interface for connecting an external device with the electronic device 100.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (Input/Output, I/O) port, video I/O port, headphone port, etc.
  • the interface unit 108 can be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements in the electronic device 100, or can be used to transfer data between the electronic device 100 and the external device.
  • the memory 109 can be used to store software programs and various data.
  • the memory 109 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book).
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the processor 110 is the control center of the electronic device; it uses various interfaces and lines to connect the parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the electronic device as a whole.
  • the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and applications, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
  • the electronic device 100 may also include a power source 111 (such as a battery) for supplying power to the components; optionally, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
  • the electronic device 100 includes some functional modules not shown, which will not be repeated here.
  • Optionally, an embodiment of the present invention further provides an electronic device, which may include the processor 110 shown in FIG. 11, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When the computer program is executed by the processor 110, each process of the image processing method shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
  • The embodiment of the present invention also provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, each process of the image processing method shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiment is implemented.
  • the computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the technical solution of the present invention, in essence or in the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions to enable an electronic device (which can be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the method described in each embodiment of the present invention.
  • the program can be stored in a computer-readable storage medium, and when the program is executed, it may include the procedures of the above-mentioned method embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • modules, units, and sub-units can be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in the present disclosure, or a combination thereof.
  • the technology described in the embodiments of the present disclosure can be implemented by modules (for example, procedures, functions, etc.) that perform the functions described in the embodiments of the present disclosure.
  • the software codes can be stored in the memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an image processing method. The method comprises the steps of: receiving a first input from a user (201); displaying a first image in response to the first input (202); and displaying a target image, the target image comprising the first image and a second image, the second image being created by taking a screenshot of first information of a first application, and the first application being determined on the basis of the first image (203).
PCT/CN2020/132640 2019-12-05 2020-11-30 Image processing method, electronic device and storage medium WO2021109960A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911234273.1 2019-12-05
CN201911234273.1A CN111061530A (zh) 2019-12-05 2019-12-05 Image processing method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2021109960A1 true WO2021109960A1 (fr) 2021-06-10

Family

ID=70299924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132640 WO2021109960A1 (fr) 2019-12-05 2020-11-30 Procédé de traitement d'image, dispositif électronique et support d'informations

Country Status (2)

Country Link
CN (1) CN111061530A (fr)
WO (1) WO2021109960A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111061530A (zh) * 2019-12-05 2020-04-24 维沃移动通信有限公司 Image processing method, electronic device and storage medium
CN112700302A (zh) * 2020-12-29 2021-04-23 维沃移动通信有限公司 Order management method and device
CN115080163A (zh) * 2022-06-08 2022-09-20 深圳传音控股股份有限公司 Display processing method, intelligent terminal and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930263A (zh) * 2012-09-27 2013-02-13 百度国际科技(深圳)有限公司 Information processing method and device
CN107832377A (zh) * 2017-10-30 2018-03-23 北京小米移动软件有限公司 Image information display method, apparatus and system, and storage medium
US20190114065A1 (en) * 2017-10-17 2019-04-18 Getac Technology Corporation Method for creating partial screenshot
CN111061530A (zh) * 2019-12-05 2020-04-24 维沃移动通信有限公司 Image processing method, electronic device and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064627B (zh) * 2013-01-11 2016-03-02 广东欧珀移动通信有限公司 Application management method and device
CN104854849B (zh) * 2013-01-30 2018-04-10 东莞宇龙通信科技有限公司 Terminal and quick start method for applications
CN103218133B (zh) * 2013-03-28 2016-12-28 东莞宇龙通信科技有限公司 Method for starting associated applications and terminal
CN103645826A (zh) * 2013-11-29 2014-03-19 宇龙计算机通信科技(深圳)有限公司 Method for displaying applications on an unlock interface and intelligent terminal
CN105009081B (zh) * 2013-12-04 2019-09-13 华为终端有限公司 Method for associating an application with an interface element, electronic device, and server
CN106201161B (zh) * 2014-09-23 2021-09-03 北京三星通信技术研究有限公司 Display method and system for an electronic device
CN106663112A (zh) * 2014-11-26 2017-05-10 谷歌公司 Presenting information cards for events associated with entities
WO2016154814A1 (fr) * 2015-03-27 2016-10-06 华为技术有限公司 Electronic image display method and apparatus, and mobile device
CN105354306A (zh) * 2015-11-04 2016-02-24 魅族科技(中国)有限公司 Application recommendation method and terminal
CN106055416B (zh) * 2016-05-23 2020-08-18 珠海市魅族科技有限公司 Method and device for transferring data across applications
CN107870712B (zh) * 2016-09-23 2021-11-09 北京搜狗科技发展有限公司 Screenshot processing method and device
CN108268559A (zh) * 2017-01-04 2018-07-10 阿里巴巴集团控股有限公司 Information providing method and device based on ticket search
CN109697091A (zh) * 2017-10-23 2019-04-30 腾讯科技(深圳)有限公司 Page processing method and apparatus, storage medium, and electronic apparatus
CN109246464B (zh) * 2018-08-22 2021-03-16 Oppo广东移动通信有限公司 User interface display method and apparatus, terminal, and storage medium
CN109407936B (zh) * 2018-09-21 2021-07-16 Oppo(重庆)智能科技有限公司 Screenshot method and related apparatus
CN110297681A (zh) * 2019-06-24 2019-10-01 腾讯科技(深圳)有限公司 Image processing method and apparatus, terminal, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930263A (zh) * 2012-09-27 2013-02-13 百度国际科技(深圳)有限公司 Information processing method and device
US20190114065A1 (en) * 2017-10-17 2019-04-18 Getac Technology Corporation Method for creating partial screenshot
CN107832377A (zh) * 2017-10-30 2018-03-23 北京小米移动软件有限公司 Image information display method, apparatus and system, and storage medium
CN111061530A (zh) * 2019-12-05 2020-04-24 维沃移动通信有限公司 Image processing method, electronic device and storage medium

Also Published As

Publication number Publication date
CN111061530A (zh) 2020-04-24

Similar Documents

Publication Publication Date Title
WO2021104365A1 Object sharing method and electronic device
WO2021083052A1 Object sharing method and electronic device
WO2021104195A1 Image display method and electronic device
WO2020258929A1 Folder interface switching method and terminal device
WO2020156466A1 Photographing method and terminal device
WO2020063091A1 Image processing method and terminal device
WO2021082711A1 Image display method and electronic device
WO2021083132A1 Icon moving method and electronic device
WO2020134744A1 Icon moving method and mobile terminal
WO2019206036A1 Message management method and terminal
WO2021057337A1 Operation method and electronic device
WO2021109960A1 Image processing method, electronic device and storage medium
WO2021129536A1 Icon moving method and electronic device
CN107943390B Text copying method and mobile terminal
WO2021083087A1 Screen capture method and terminal device
WO2020151460A1 Object processing method and terminal device
KR20140112920A Method and apparatus for operating an object in a user device
WO2020151525A1 Message sending method and terminal device
WO2021104163A1 Icon arrangement method and electronic device
WO2021004327A1 Application permission setting method and terminal device
WO2021109961A1 Shortcut icon generation method, electronic device and medium
WO2019184947A1 Image viewing method and mobile terminal
WO2021057290A1 Information control method and electronic device
WO2020199783A1 Interface display method and terminal device
WO2020181945A1 Identifier display method and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20897045

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20897045

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 200223)

122 Ep: pct application non-entry in european phase

Ref document number: 20897045

Country of ref document: EP

Kind code of ref document: A1