CN112734882B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN112734882B
CN112734882B (application CN202011617936.0A)
Authority
CN
China
Prior art keywords
image
file
input
module
cells
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202011617936.0A
Other languages
Chinese (zh)
Other versions
CN112734882A (en)
Inventor
刘朝辉
Current Assignee (listed assignee may be inaccurate)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011617936.0A
Publication of CN112734882A
Application granted
Publication of CN112734882B


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/60 — Editing figures and text; Combining figures or text
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 — User authentication
    • G06F 21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The application discloses an image processing method and device, belonging to the field of communication technology. The method can solve the problems that the process of editing an image is cumbersome and the display effect of the edited image is poor, and includes: receiving a first input on a first image thumbnail, where the first image thumbnail indicates a first image, and the first image corresponds to the objects in a first file other than an editable first object; in response to the first input, generating the first object according to stored feature information of the first object; and displaying the first image, with the first object displayed in an editable form on the first image. The first object includes any one of: an object in the first file selected by a user trigger, all editable objects in the first file, or objects edited in the first file within a preset time length. The method and device are suitable for scenarios in which some of the objects in an image are edited.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image processing method and device.
Background
With the development of electronic technology, the functions of electronic devices are becoming more and more abundant. Currently, a user can edit an image through an electronic device and save the edited image.
By way of example, a user may add text content 1 and text content 2 to image a via an electronic device and save the result to obtain image b (i.e., image b includes text content 1 and text content 2). After obtaining image b, if the user needs to change text content 1 in image b to text content 3, one way (mode 1) is to trigger the electronic device to delete text content 1 from image b and then add text content 3; another way (mode 2) is to trigger the electronic device to block text content 1 in image b with a sticker and then add text content 3.
However, mode 1 causes the background image of the area where text content 1 is located to be deleted along with it, and mode 2 causes that background area to be blocked.
Therefore, new content can only be added to the image after the originally displayed content is blocked or cut out, so the image editing process is cumbersome and the display effect of the edited image is poor.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and device, which can solve the problems that the process of editing an image is complicated and the display effect of the edited image is poor.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including: receiving a first input on a first image thumbnail, where the first image thumbnail indicates a first image, and the first image corresponds to the objects in a first file other than an editable first object; in response to the first input, generating the first object according to feature information of the first object stored in advance; and displaying the first image, with the first object displayed in an editable form on the first image; where the first object includes any one of: an object in the first file selected by a user trigger, all editable objects in the first file, or objects edited in the first file within a preset time length.
In a second aspect, embodiments of the present application provide an image processing apparatus, which may include a receiving module, a generating module, and a display module. The receiving module is configured to receive a first input on a first image thumbnail, where the first image thumbnail indicates a first image, and the first image corresponds to the objects in the first file other than the editable first object. The generating module is configured to generate, in response to the first input received by the receiving module, the first object according to feature information of the first object stored in advance. The display module is configured to display the first image and to display, in an editable form on the first image, the first object generated by the generating module. The first object includes any one of: an object in the first file selected by a user trigger, all editable objects in the first file, or objects edited in the first file within a preset time length.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions to implement a method as in the first aspect.
In the embodiment of the application, a first input on a first image thumbnail may be received, where the first image thumbnail indicates a first image and the first image corresponds to the objects in the first file other than the editable first object; in response to the first input, the first object is generated according to feature information of the first object stored in advance; and the first image is displayed, with the first object displayed in an editable form on the first image. The first object includes any one of: an object in the first file selected by a user trigger, all editable objects in the first file, or objects edited in the first file within a preset time length. With this method, when the user triggers display of the first image, the first image can be shown and the first object, regenerated from its stored feature information, can be shown on the first image in an editable form. The first object can thus be edited without cutting out or blocking any object in the first image while keeping the first image unchanged. In scenarios where a certain object related to the image (such as the first object) needs to be edited multiple times, the image processing method provided by the embodiment of the application therefore both simplifies the image editing process and improves the display effect of the edited image.
Drawings
Fig. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interface to which the image processing method according to the embodiment of the present application is applied;
FIG. 3 is a second schematic diagram of an interface of an application of the image processing method according to the embodiment of the present application;
FIG. 4 is a third exemplary interface diagram of an application of the image processing method according to the embodiment of the present application;
FIG. 5 is a fourth exemplary diagram of an interface for applying the image processing method according to the embodiments of the present application;
FIG. 6 is a schematic diagram of an image processing apparatus in an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device in an embodiment of the present application;
fig. 8 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second", and the like in the description and in the claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The embodiment of the application provides an image processing method, an image processing device, and an electronic device. A first input by the user on a first image thumbnail can be received, where the first image thumbnail indicates a first image and the first image corresponds to the objects in a first file other than an editable first object; in response to the first input, the first object is generated according to feature information of the first object stored in advance; and the first image is displayed, with the first object displayed in an editable form on the first image. The first object includes any one of: an object in the first file selected by a user trigger, all editable objects in the first file, or objects edited in the first file within a preset time length. When the user triggers display of the first image, the first object regenerated from its stored feature information can be displayed on the first image in an editable form, so the first object can be edited without cutting out or blocking any object in the first image while the first image remains unchanged. In scenarios where a certain object related to the image (such as the first object) needs to be edited multiple times, this both simplifies the image editing process and improves the display effect of the edited image.
The image processing method, the image processing device and the electronic equipment provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Currently, in the process of using an electronic device, a user may trigger the device to capture the interface it displays to obtain a screenshot, or to save a displayed image. In some scenarios, the user needs to input content into the interface or image before triggering the screen capture or saving, and after saving the image may, for some reason, need to modify some of the previously added content while keeping the other content in the image unchanged.
For example, during a holiday, a student's parents may need to measure the student's body temperature every day, find a body-temperature table on the electronic device, edit the date and temperature in the table while keeping the other information unchanged, capture a screenshot of the table, and finally send the screenshot to a teacher. Each day this requires opening the table, modifying the date and temperature, and re-capturing the screen, among other cumbersome operation steps, so the editing process is tedious and the user's operation cost is increased.
As another example, a user may add text content (e.g., text content 1 and text content 2) to an image (e.g., image a) and then save it to obtain image b. After saving image b, suppose the user wants to change text content 1 in image b to text content 3. In the related art, one way is to trigger the electronic device to delete text content 1 from image b and then add text content 3; another way is to block text content 1 in image b (e.g., with a sticker or a new text pattern) before adding text content 3; yet another way is to find image a again and re-add text content 2 and text content 3 to it. In the first two ways, new content can only be added after the originally displayed content is blocked or cut out, so the editing process is cumbersome and the display effect of the edited image is poor. In the third way, editing restarts from the original image, so content that does not need modification (e.g., text content 2) must be re-edited as well; the editing process is cumbersome and inflexible.
In the image processing method provided in the embodiment of the present application, when a user triggers the image processing apparatus to store an image corresponding to the objects in a file (e.g., the first file in the embodiment of the present application), the apparatus may store an image corresponding to the objects in the first file that do not need to be modified again (e.g., the first image) and store the feature information of the objects that may need to be modified again (e.g., the first object). Thus, when the image processing apparatus receives the first input on the thumbnail of the first image, it may display the first image, generate the first object from its feature information, and display the first object on the first image, so that the user can directly trigger the apparatus to edit the first object and, after editing, directly trigger it to save the first image together with the edited object. The editing process is thereby simplified, and the edited image retains a good display effect.
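The store-and-restore flow outlined above can be sketched in a few lines. The data layout, sidecar-file naming, and function names below are illustrative assumptions, not anything specified by the patent:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EditableObject:
    """Feature information for one editable object (the 'first object')."""
    obj_type: str   # e.g. "text_box", "sticker", "cell" (assumed type names)
    position: tuple  # display-area origin (x, y) in the first file
    size: tuple      # (width, height)
    content: str     # text content or a resource reference
    layer: int       # relative hierarchy; higher values are drawn on top

def save_with_feature_info(image_path, editable_objects):
    """Store the flattened first image alongside the feature info of the
    objects that may need to be edited again (here: a JSON sidecar file)."""
    sidecar = image_path + ".objects.json"
    with open(sidecar, "w", encoding="utf-8") as f:
        json.dump([asdict(o) for o in editable_objects], f)
    return sidecar

def load_editable_objects(image_path):
    """On a first input to the thumbnail: regenerate the first object(s)
    from the stored feature information, ready to display editably."""
    sidecar = image_path + ".objects.json"
    with open(sidecar, encoding="utf-8") as f:
        return [EditableObject(**d) for d in json.load(f)]
```

The point of the design is that the flattened image never has to be re-rendered: only the sidecar's objects change across edit sessions.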
Furthermore, the image processing method provided by the embodiment of the application can quickly erase editable content and add new content without affecting the other content in the image, which can improve the user experience.
As shown in fig. 1, an embodiment of the present application provides an image processing method, which may include steps 101 to 103 described below.
Step 101, an image processing apparatus receives a first input of a first image thumbnail by a user.
The first image thumbnail indicates a first image, and the first image corresponds to the objects in the first file other than the editable first object. For convenience of description, in the following embodiments the objects in the first file other than the editable first object are referred to as second objects; the two expressions have the same meaning and are interchangeable.
Optionally, in the embodiment of the present application, the first image may be a screen capturing image corresponding to the second object, or may be an image generated according to feature information of the second object obtained from the first file, which may specifically be determined according to actual use requirements, and the embodiment of the present application is not limited.
In this embodiment of the present application, the first file includes at least one editable object.
Alternatively, in an embodiment of the present application, the first file may be an image and at least one editable object displayed in the image editing interface. Alternatively, the first file may be a document, such as a word document, a slide, a spreadsheet, or the like.
Alternatively, in the embodiment of the present application, the editable object may be any possible object such as a sticker, a cell, a text box, or the like.
Alternatively, in the embodiment of the present application, the first object may be any editable object such as a cell, a sticker, a text box, or the like in the first file.
Step 102, the image processing device responds to the first input and generates a first object according to the feature information of the first object stored in advance.
In this embodiment of the application, after receiving a first input by the user on the first image thumbnail, the image processing apparatus may, in response to the first input, obtain the feature information of the first object stored in the electronic device and then generate the first object according to that feature information.
Optionally, in an embodiment of the present application, the feature information of the first object is stored when the first image is stored. The specific method for storing the feature information of the first image and the first object will be described in detail in the following embodiments, and in order to avoid repetition, a detailed description is omitted here.
It may be appreciated that in the embodiment of the application, the feature information of the first object may be stored in association with the first image, so that when the image processing apparatus receives the first input, it can automatically retrieve the stored feature information of the first object, generate the first object from it, and display the first object on the first image.
Optionally, in an embodiment of the present application, the feature information of the first object may include at least one of the following: first location information, first type information, first size information, first content, first hierarchical information.
Wherein the first location information indicates the display area of the first object in the first file; the first type information indicates the type of the first object in the first file (e.g., text box type, sticker type); the first size information indicates the area and/or shape of the first object in the first file; the first content is the content in the first object; and the first hierarchy information indicates the relative hierarchy in the first file of the individual sub-objects in the first object.
For example, the first file includes an image, a text box, and a sticker, where the text box and the sticker are both located on the image, and the layer where the text box is located is above the layer where the sticker is located. Then, if the first object includes the text box and the sticker, the first hierarchy information may indicate that the layer on which the text box is located is above the layer on which the sticker is located, and that both are above the layer on which the image (i.e., the first image) is located.
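The relative hierarchy above amounts to a z-order that fixes the rendering sequence. A minimal sketch, with the field names being assumptions rather than anything the patent fixes:

```python
# Each entry is one object's feature info; "layer" carries the
# first hierarchy information (higher value = drawn on top).
objects = [
    {"type": "image",    "layer": 0},  # the first image at the bottom
    {"type": "sticker",  "layer": 1},
    {"type": "text_box", "layer": 2},  # text box above the sticker
]

# Render bottom-up: sort ascending by layer so lower layers draw first.
render_order = sorted(objects, key=lambda o: o["layer"])
assert [o["type"] for o in render_order] == ["image", "sticker", "text_box"]
```

Storing only the relative order (rather than absolute layer IDs) keeps the hierarchy valid even if other layers in the file are added or removed.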
Step 103, the image processing apparatus displays the first image, and displays the first object on the first image in an editable form.
In this embodiment of the present application, the image processing apparatus may display at least one layer on a layer on which the first image is located, and then display the first object in an editable form in the at least one layer.
Alternatively, in embodiments of the present application, the editable form is a text box form, a sticker form, or any other possible form.
For example, assume that Fig. 2(a) is a schematic view of a first file that includes an original image 20, a text box 21, and a sticker 22. If the first object is the text box 21 in the first file, then, as shown in Fig. 2(b), the first image 23 includes the original image 20 and the image of the sticker 22. In this way, when the user performs the first input on the thumbnail of the first image 23, the image processing apparatus may, as shown in Fig. 2(c), display the first image 23 and display the text box 21 in the form of a text box on the first image 23.
Alternatively, as shown in fig. 3, when the image processing apparatus displays the text box 21 on the first image 23 in an editable form, a "delete" control 24 and a "rotate" control 25 may also be displayed on the text box 21, the "delete" control 24 being used to trigger deletion of the text box 21, and the "rotate" control 25 being used to trigger control of rotation of the text box 21 relative to the first image. It will be appreciated that when the "delete" control and the "rotate" control are displayed on the text box 21, this indicates that the text box is in an editable state.
Optionally, in this embodiment of the present application, the first image may include one layer, or may include multiple layers, which may be specifically determined according to actual use requirements, which is not limited in this embodiment of the present application.
In the image processing method provided by the embodiment of the application, when the user triggers display of the first image corresponding to the objects in the first file other than the editable first object, the first image can be displayed, and the first object generated according to its stored feature information can be displayed on the first image in an editable form. The first object can therefore be edited without cutting out or blocking any object in the first image while keeping the first image unchanged. In scenarios where a certain object related to the image (such as the first object) needs to be edited multiple times, the method thus not only simplifies the process of editing the image but also improves the display effect of the edited image.
Alternatively, in the embodiment of the present application, when the first file is a spreadsheet, the feature information of the first object may be feature information of N cells in the spreadsheet, and N may be a positive integer. The step 102 may be specifically implemented by the following step 102 a; the above step 103 may be specifically implemented by the following step 103 a.
In step 102a, the image processing apparatus generates N cells from the feature information of N cells stored in advance.
Step 103a, the image processing device displays the first image, and displays the N cells in the form of text boxes on the first image.
The display positions of the N cells in the first image correspond to the display positions of the N cells in the first file.
Illustratively, as shown in Fig. 4(a), the image processing apparatus displays a spreadsheet 41 in the interface 40 of a first office application, and the first object is 12 (N = 12) target cells in the spreadsheet (the 12 cells filled with a preset transparency in Fig. 4(a)). The image processing apparatus may acquire the feature information of the 12 target cells from the spreadsheet 41 and acquire the first image, and then store the first image in association with the feature information of the target cells; the stored first image may be the image 42 shown in Fig. 4(b). Thus, when the user performs the first input on the thumbnail of the first image, the image processing apparatus may generate the target cells from the stored feature information and, as shown in Fig. 4(c), display the first image 42 in the interface 43 of a second office application, with the target cells (the cells indicated by the filled region in Fig. 4(c)) displayed in the form of text boxes on the first image 42. The display position of each target cell in the first image corresponds to its display position in the first file, and the target cells displayed on the first image 42 include the content of the corresponding cells in the first file.
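The position correspondence between a cell in the spreadsheet and its overlay text box on the first image can be expressed as a simple offset-and-scale mapping. The function name, rectangle convention, and scale handling below are assumptions for illustration:

```python
def cell_to_image_position(cell_rect, sheet_origin, scale=1.0):
    """Map a cell's rectangle (x, y, w, h) in spreadsheet coordinates
    to the matching text-box rectangle on the first image, so the
    regenerated cell overlays the same area it occupied in the file."""
    x, y, w, h = cell_rect
    ox, oy = sheet_origin  # offset of the sheet inside the captured image
    return ((x - ox) * scale, (y - oy) * scale, w * scale, h * scale)

# Example: a cell at (120, 80) in a sheet whose top-left corner maps
# to point (20, 30) of the first image.
box = cell_to_image_position((120, 80, 60, 24), (20, 30))
# box is (100.0, 50.0, 60.0, 24.0)
```

The same mapping applies unchanged to each of the N cells, which is why the overlay stays aligned with the captured table.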
It should be noted that, in the embodiment of the present application, when the first file is a spreadsheet, the first object may further include a chart (such as a histogram, a pie chart, a scatter chart, etc.) in the spreadsheet.
In the embodiment of the application, when the user triggers display of the first image generated from the spreadsheet, the image processing apparatus can generate the N cells from their stored feature information, display the first image, and display the N cells in the form of text boxes on the first image, so that the N cells can be edited while the content of the first image remains unchanged. This improves editing efficiency and human-computer interaction performance.
Alternatively, in the embodiment of the present application, the above step 103 may be specifically implemented by the following step 103b.
In step 103b, the image processing apparatus displays the first image, and displays the first object in an editable form on the first image, in a case where second biometric information is consistent with second preset biometric information.
Wherein the second biometric information is biometric information of the user performing the first input.
It will be appreciated that in the embodiments of the present application, one piece of biometric information being consistent with another means that the matching degree between the two is greater than or equal to a preset threshold.
For other descriptions of the second biometric information and the second preset biometric information, reference may be specifically made to the descriptions related to the first biometric information and the first preset biometric information in the following embodiments, and in order to avoid repetition, the descriptions are omitted here.
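The consistency check described above reduces to a threshold comparison on the matching degree. The threshold value below is illustrative; the patent only requires that some preset threshold exist:

```python
def biometrics_consistent(match_degree, threshold=0.8):
    """One piece of biometric information is 'consistent' with another
    when their matching degree is greater than or equal to a preset
    threshold. match_degree is assumed normalized to [0, 1]."""
    return match_degree >= threshold

# Only a sufficiently close match unlocks editable display of the
# first object on the first image.
assert biometrics_consistent(0.93)
assert not biometrics_consistent(0.42)
```

In practice the matching degree would come from a fingerprint, iris, or voiceprint comparison engine; this sketch only captures the decision rule.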
Alternatively, in the embodiment of the present application, in a case where the image processing apparatus displays the first file in the interface of the target application, the target application may actively report the feature information of the editable object in the first file to the image processing apparatus, or the image processing apparatus may access the target application to obtain the feature information of the editable object in the first file.
Illustratively, the target application proactively reports. When the first file is displayed in the target application, the target application may identify the editable objects in the first file, acquire feature information of the editable objects, and report the feature information of the editable objects to the image processing apparatus. Then, if the position or number of the editable objects in the first file is changed, the target application may re-report the feature information of the changed editable objects so that the image processing apparatus updates the feature information of the editable objects in the first file.
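A minimal sketch of this active-reporting path, in which later reports replace earlier ones so the stored feature information stays current; the class and method names are invented for illustration:

```python
class ImageProcessor:
    """Receives feature information reported by the target application."""
    def __init__(self):
        # file_id -> latest list of feature-info dicts for that file
        self.editable_objects = {}

    def report(self, file_id, objects):
        # Called by the target application when the first file is opened,
        # or again when the position/number of its editable objects
        # changes; replacing the old list is the "update" in the text.
        self.editable_objects[file_id] = list(objects)

proc = ImageProcessor()
# Initial report when the file is displayed:
proc.report("file-1", [{"type": "text_box", "content": "date"}])
# Re-report after the user adds a sticker to the file:
proc.report("file-1", [{"type": "text_box", "content": "date"},
                       {"type": "sticker",  "content": "star"}])
assert len(proc.editable_objects["file-1"]) == 2
```

The alternative pull model mentioned in the text would simply invert this: the image processing apparatus queries the target application for the same list on demand.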
The image processing method provided by the embodiment of the application is mainly applied to scenes in which the input content of some inputtable areas in a file is repeatedly modified; the original content of at least some objects in the image can be quickly erased and new content input again.
Illustratively, in the embodiment of the present application, before the step 101, the image processing method provided in the embodiment of the present application may further include steps 104 to 106 described below.
Step 104, the image processing device receives a second input of the user to the first file.
Optionally, in the embodiment of the present application, the second input may be a touch input, a voice input, or an input of a preset gesture, which may specifically be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
Step 105, the image processing apparatus acquires the feature information of the first object from the first file in response to the second input, and acquires the first image.
In this embodiment of the present application, after receiving the second input of the user, the image processing apparatus may determine the first object from the first file in response to the second input, and then acquire the feature information of the first object from the first file, and acquire the first image.
Optionally, in the embodiment of the present application, the first object determined by the image processing apparatus from the first file may specifically include any one of the following (1), (2), and (3):
(1) The user triggers the selected object in the first file.
Optionally, in an embodiment of the present application, the second input may specifically include a first sub-input, a second sub-input, and a third sub-input. The first sub-input is used to trigger the display of an "OK" control and of a check box on each editable object in the first file; the second sub-input is used to select the first object from the first file; the third sub-input is used to trigger a confirmation that the selection of the editable object has been completed.
Optionally, in the embodiment of the present application, the first sub-input may be a long-press input by the user on the first file, or a multi-finger touch input by the user on the first file. The second sub-input may be an input by the user on the check box described above. The third sub-input may be an input by the user on the "OK" control displayed in the interface of the target application. The specific forms may be determined according to actual use requirements, and the embodiment of the present application is not limited thereto.
Illustratively, as shown in (a) of fig. 5, the image processing apparatus displays an image 51, a sticker 52, a first text box 53, and a second text box 54 (i.e., the first file) in an image editing interface 50 of an album application (i.e., the target application). If the user long-presses on the image editing interface (i.e., the first sub-input), the image processing apparatus may display one check box on each of the sticker 52, the first text box 53, and the second text box 54, as shown in (b) of fig. 5. If the user expects to edit the second text box 54 later, the user may click the check box displayed on the second text box 54 (i.e., the second sub-input) to trigger the image processing apparatus to check the second text box 54, as shown in (b) of fig. 5. The user may then click the "OK" control (i.e., the third sub-input) to trigger the image processing apparatus to obtain the feature information of the second text box from the first file and to obtain the image corresponding to the image 51, the sticker 52, and the first text box 53.
(2) All editable objects in the first file.
In this embodiment of the present application, after the image processing apparatus receives the second input, all the objects that can be edited in the first file may be determined as the first object, and the feature information of the first object may be obtained from the first file.
For example, referring to (a) in fig. 5, the image processing apparatus may determine the sticker 52 and the first text box 53 and the second text box 54 as the first object.
(3) An object in the first file that was edited within a preset time period.
Optionally, in the embodiment of the present application, the preset duration may specifically be any duration, such as 1 hour or 1 day; or the preset duration may be the duration between the time the first file was last opened and the time the user performs the input for triggering acquisition of the image of the first file. The specific value may be determined according to actual use requirements, and the embodiment of the present application is not limited thereto.
In this embodiment of the present application, after receiving the second input of the user, the image processing apparatus may, in response to the second input, determine an editable object edited in the first file within the preset duration as the first object, acquire the feature information of the first object from the first file, and acquire the first image.
For example, after receiving the second input, the image processing apparatus may, in response to the second input and according to whether the user has performed an input operation on each editable object in the first file, intelligently filter out the editable objects on which the user has not performed any input operation, and take the remaining editable objects as the first object. In this way, the user does not need to make a selection autonomously, and objects that the user is unlikely to edit again are filtered out, so that the operation process of determining the first object can be simplified while ensuring that the determined first object meets the actual use requirement of the user.
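Case (3) above can be sketched as a time-window filter. The record layout, field names, and window semantics are illustrative assumptions; the patent only requires that objects edited within the preset duration become the first object.

```python
import time

# Hypothetical sketch of case (3): keep only the editable objects whose
# last edit falls within the preset duration ending at "now".
def recently_edited(objects, preset_duration_s, now=None):
    now = time.time() if now is None else now
    return [o for o in objects
            if now - o["last_edited"] <= preset_duration_s]

objs = [{"id": 1, "last_edited": 100.0},   # edited long ago
        {"id": 2, "last_edited": 990.0}]   # edited recently
# With "now" fixed at 1000 s and a 60 s window, only object 2 qualifies
# and would be taken as the first object.
first_objects = recently_edited(objs, 60, now=1000.0)
```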
In the embodiment of the application, since the image processing device can determine the first object in different modes, the flexibility of determining the first object can be improved, and the man-machine interaction performance can be improved.
Optionally, in the embodiment of the present application, the image processing apparatus obtaining the feature information of the first object from the first file may specifically be the image processing apparatus obtaining the feature information of the first object from the first file through the above-mentioned target application.
For the description of the feature information of the first object, reference may be specifically made to the description related to the feature information of the first object in the foregoing embodiment, and in order to avoid repetition, a description is omitted here.
Optionally, in the embodiment of the present application, the image processing apparatus may obtain feature information of the objects other than the first object in the first file and generate the first image according to the obtained feature information; alternatively, the image processing apparatus may screen-capture the objects other than the first object in the first file to obtain the first image; alternatively, the image processing apparatus may synthesize the objects other than the first object in the first file into the first image. The specific manner may be determined according to actual use requirements, and the embodiment of the present application is not limited thereto.
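The third option above (synthesizing the remaining objects into the first image) can be sketched as selecting every layer that is not part of the first object. Real rendering is omitted; the "image" here is just an ordered list of layers, and all names are assumptions.

```python
# Minimal sketch of one option from the text: the first image is
# composed from every object in the file EXCEPT the first object,
# so the first object stays out of the flattened image entirely.
def synthesize_first_image(all_objects, first_object_ids):
    """Keep only the objects that are not part of the first object."""
    return [o for o in all_objects if o["id"] not in first_object_ids]

file_objects = [{"id": 1, "kind": "image"},
                {"id": 2, "kind": "sticker"},
                {"id": 3, "kind": "text_box"}]
# If text box 3 is the first object, the first image keeps 1 and 2.
first_image = synthesize_first_image(file_objects, {3})
```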
Step 106, the image processing apparatus stores the feature information of the first object and the first image.
In this embodiment of the present application, the image processing apparatus may store the feature information of the first object and the first image in the electronic device in an associated manner, or store them in a server associated with the electronic device; the specific manner may be determined according to actual use requirements, and is not limited in this embodiment of the present application.
For example, the image processing apparatus may associate information of the storage address of the feature information of the first object with information of the storage address of the first image to enable associated storage of the feature information of the first object and the first image.
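The address association above can be sketched as a two-way mapping between the two storage addresses, so either record can be found from the other. The class, method names, and example paths are all illustrative assumptions.

```python
# Hedged sketch of associated storage: the storage address of the first
# object's feature information is linked to the storage address of the
# first image, and the link can be followed in both directions.
class AssociatedStore:
    def __init__(self):
        self._image_by_feature = {}
        self._feature_by_image = {}

    def associate(self, feature_addr, image_addr):
        self._image_by_feature[feature_addr] = image_addr
        self._feature_by_image[image_addr] = feature_addr

    def image_for(self, feature_addr):
        return self._image_by_feature[feature_addr]

    def features_for(self, image_addr):
        return self._feature_by_image[image_addr]

store = AssociatedStore()
store.associate("features/obj1.json", "images/first.png")
```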
In this embodiment of the present application, when a user desires to repeatedly modify some editable objects in a file, the user may perform a second input on the file to trigger the image processing apparatus to acquire and store feature information of an object (for example, a first object) that requires repeated modification, and acquire and store images (for example, the first image) corresponding to other objects in the file. Therefore, in a scene requiring multiple editing of a certain/some object (such as the first object) related to the image, the image processing method provided by the embodiment of the application not only can simplify the image editing process, but also can improve the display effect of the edited image.
Alternatively, in the embodiment of the present application, after the image processing apparatus displays the first image and displays the first object on the first image in an editable form, the user may edit the first object displayed on the first image according to the actual use requirement thereof.
Illustratively, in the embodiment of the present application, after the step 103, the image processing method provided in the embodiment of the present application may further include the following steps 107 and 108.
Step 107, the image processing apparatus receives a third input of the first object from the user.
Step 108, the image processing device responds to the third input and executes target operation corresponding to the third input on the first object;
wherein the target operation may include any one of: modifying the content in the first object, updating the display position of the first object, deleting the first object.
In this embodiment of the present application, the updating of the display position of the first object by the image processing apparatus is specifically to update the display position of the first object in the first image.
For example, when the first object includes a plurality of sub-objects, in the process of re-editing the first object displayed on the first image, the display positions of the respective sub-objects in the first image may be readjusted; the overlapping relationship between the sub-objects may also be readjusted, for example, sub-object 1 and sub-object 2 partially overlap before the adjustment, and no overlapping area exists between them after the adjustment; the hierarchical relationship between the sub-objects may also be adjusted, for example, the layer of object 1 is located above the layer of object 2 before the adjustment, and the layer of object 1 may be located below the layer of object 2 after the adjustment.
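The layer adjustment example above can be sketched with a bottom-to-top list of layers, where a later position in the list means a higher layer. The function name and layer representation are assumptions for illustration only.

```python
# Illustrative sketch of the z-order adjustment described above:
# sub-objects are kept in a bottom-to-top list; moving one changes
# which sub-object is drawn on top of the other.
def move_below(layers, obj, other):
    """Reorder so that `obj` sits directly below `other`."""
    layers = [x for x in layers if x != obj]
    layers.insert(layers.index(other), obj)
    return layers

# Before: object 1 is above object 2 (later in the list = higher layer).
layers = ["object2", "object1"]
# After the adjustment, object 1's layer is below object 2's.
layers = move_below(layers, "object1", "object2")
```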
In the embodiment of the application, the user can trigger the image processing device to edit the first object displayed in the editable form on the first image under the condition that the content in the first image is kept unchanged, so that not only can the editing efficiency be improved, but also the display effect of the image obtained after the first object is edited can be improved.
Optionally, in the embodiment of the present application, when receiving the third input, the image processing apparatus may first acquire the first biometric information and compare it with the first preset biometric information. If the first biometric information is inconsistent with the first preset biometric information (i.e., the two differ, or their matching degree is smaller than a preset threshold), the image processing apparatus does not execute the target operation; at this time, the image processing apparatus may display a prompt message to prompt the user that the user has no editing authority or that the first object is not editable. If the first biometric information is consistent with the first preset biometric information (i.e., the two are the same, or their matching degree is greater than or equal to the preset threshold), the image processing apparatus may execute the target operation.
Optionally, in the embodiment of the present application, the above step 108 may be specifically implemented by the following step 108a.
In step 108a, in a case where the first biometric information is consistent with the first preset biometric information, the image processing apparatus performs, in response to the third input, a target operation corresponding to the third input on the first object.
Wherein the first biometric information may be biometric information of the user performing the third input.
In the embodiment of the present application, if the first biometric information is consistent with the first preset biometric information, it indicates that the user performing the third input is an authorized user; if the first biometric information is inconsistent with the first preset biometric information, it indicates that the user performing the third input is an unauthorized user.
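The authorization gate above can be sketched as a simple dispatch: the target operation runs only for a matching biometric, otherwise a prompt is returned. The exact-equality check, function names, and prompt text are illustrative assumptions (the real check would be a matching-degree comparison, as noted earlier).

```python
# Hedged sketch of the biometric gate before the target operation:
# only an authorized user (matching biometric) triggers the edit.
def handle_third_input(first_biometric, preset_biometric, target_op):
    if first_biometric == preset_biometric:   # authorized user
        return target_op()
    return "no editing authority"             # unauthorized user: prompt

result_ok = handle_third_input("fp-123", "fp-123",
                               lambda: "object edited")
result_denied = handle_third_input("fp-999", "fp-123",
                                   lambda: "object edited")
```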
Optionally, in the embodiment of the present application, the first preset biometric information may be any possible biometric information such as iris information, fingerprint information, facial image information, voiceprint feature information, and the like.
Optionally, in the embodiment of the present application, the first preset biometric information may be the biometric information recorded when the first image and the feature information of the first object were stored; alternatively, the first preset biometric information may be the biometric information of a user who has authority to use the electronic device storing the first image and the feature information of the first object.
Take, for example, the case where the first preset biometric information is the biometric information of a user who has authority to use the electronic device storing the first image and the feature information of the first object. Assume that user A has triggered the storing of image 1 and the feature information of the sticker in user A's electronic device. When the electronic device displays the first image and displays the sticker on the first image in an editable form, if user B has enrolled fingerprint information in the electronic device, user B is an authorized user of the electronic device, so user B can trigger the editing of the sticker on the electronic device; if user C has not enrolled fingerprint information on user A's electronic device, user C is an unauthorized user, and therefore user C cannot trigger the editing of the sticker.
In the embodiment of the application, the authorized user can trigger the editing of the first object displayed in the editable form in the first image, so that the safety can be improved on the basis of simplifying the image editing process.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, the image processing apparatus provided in the embodiment of the present application is described by taking an image processing apparatus executing the image processing method as an example.
As shown in fig. 6, an embodiment of the present application provides an image processing apparatus 60, the image processing apparatus 60 may include: a receiving module 61, a generating module 62 and a display module 63. The receiving module 61 may be configured to receive a first input from a user on a first image thumbnail, where the first image thumbnail is used to indicate a first image, and the first image may be an image corresponding to an object other than the first editable object in the first file; a generating module 62, configured to generate a first object according to the feature information of the first object stored in advance in response to the first input received by the receiving module 61; a display module 63 operable to display the first image and display the first object generated by the generation module 62 in an editable form on the first image; wherein the first object comprises any one of: the user triggers the selected object in the first file, all the objects which can be edited in the first file, and the objects which are edited in the first file within the preset time length.
In the image processing apparatus provided in the embodiment of the present application, when a user triggers the display of a first image corresponding to an object other than an editable first object in a first file, the image processing apparatus may display the first image, and display the first object generated according to feature information of the first object stored in advance on the first image in an editable form, so that the first object may be edited without matting or shielding any object in the first image while the first image is kept unchanged, and therefore, in a scenario where multiple edits are required to be performed on a certain object (e.g., the first object) related to the image, the image processing method provided in the embodiment of the present application may not only simplify a process of editing the image, but also may improve a display effect of the edited image.
Optionally, in an embodiment of the present application, the image processing apparatus may further include an acquisition module and a storage module. The receiving module is further used for receiving a second input of the first file by a user before receiving the first input of the first image thumbnail; the acquisition module can be used for responding to the second input received by the receiving module, acquiring the characteristic information of the first object from the first file and acquiring the first image; and the storage module can be used for storing the characteristic information of the first object and the first image acquired by the acquisition module.
In the image processing apparatus provided in the embodiment of the present application, when a user desires to repeatedly modify some editable objects in one file, the user may perform a second input on the file, so as to trigger the image processing apparatus to acquire and store feature information of an object (for example, a first object) whose desire is repeatedly modified, and acquire and store images (for example, the first image) corresponding to other objects in the file. Therefore, in a scene requiring multiple editing of a certain/some object (such as the first object) related to the image, the image processing method provided by the embodiment of the application not only can simplify the image editing process, but also can improve the display effect of the edited image.
Optionally, in the embodiment of the present application, when the first file is a spreadsheet, the feature information of the first object is feature information of N cells in the spreadsheet, and N may be a positive integer. The generating module is specifically configured to generate N cells according to the feature information of the N cells stored in advance; the display module can be specifically used for displaying the first image and displaying the N cells generated by the generation module on the first image in a text box form; the display positions of the N cells in the first image correspond to the display positions of the N cells in the first file.
In the image processing device provided by the embodiment of the application, when the image processing device displays the first image generated according to the electronic form, since the image processing device can generate N cells according to the feature information of the N cells stored in advance, and display the first image, and the N cells are displayed on the first image in the form of text boxes, the N cells can be edited under the condition of ensuring that the content in the first image is unchanged; therefore, the editing efficiency can be improved, and the man-machine interaction performance is improved.
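The spreadsheet case above can be sketched as converting the stored feature information of the N cells into editable text boxes whose positions on the first image correspond to the cells' positions in the file. All field names and the coordinate convention are illustrative assumptions.

```python
# Minimal sketch of the spreadsheet case: each of the N cells' feature
# records (position, size, text) becomes a text box displayed on the
# first image at the position the cell occupies in the spreadsheet.
def cells_to_text_boxes(cell_features):
    return [{"kind": "text_box",
             "position": c["position"],  # same position as in the file
             "size": c["size"],
             "text": c["text"]}
            for c in cell_features]

cells = [{"position": (0, 0), "size": (80, 24), "text": "Name"},
         {"position": (80, 0), "size": (80, 24), "text": "Score"}]
boxes = cells_to_text_boxes(cells)
```

Editing a text box would then change only its own content, while the underlying first image stays unchanged.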
Optionally, in an embodiment of the present application, the image processing apparatus may further include an execution module. The receiving module is further used for receiving a third input of the user on the first object after the first image is displayed on the display module and the first object is displayed on the first image in an editable form; an execution module, configured to execute a target operation corresponding to the third input on the first object in response to the third input received by the receiving module; wherein the target operation may include any one of: modifying the content in the first object, updating the display position of the first object, deleting the first object.
In the embodiment of the application, the image processing device can edit the first object displayed in the first image under the condition that the content in the first image is unchanged, so that not only can the editing efficiency be improved, but also the display effect of the image obtained after the first object is edited can be improved.
Optionally, in the embodiment of the present application, the executing module may be specifically configured to execute, when the first biometric information is consistent with the first preset biometric information, a target operation corresponding to the third input on the first object, where the first biometric information is biometric information of a user who executes the third input.
In the embodiment of the application, the authorized user can trigger the editing of the first object displayed in the editable form in the first image, so that the safety of the image can be improved on the basis of simplifying the image editing process.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component, an integrated circuit, or a chip in an electronic device. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing apparatus 60 provided in the embodiment of the present application can implement each process implemented by the image processing method in the method embodiment shown in fig. 1 to 5, and in order to avoid repetition, a detailed description is omitted here.
As shown in fig. 7, the embodiment of the present application further provides an electronic device 200, including a processor 202, a memory 201, and a program or an instruction stored in the memory 201 and capable of running on the processor 202, where the program or the instruction implements each process of the embodiment of the image processing method when executed by the processor 202, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The user input unit 1007 may be configured to receive a first input of a first image thumbnail by a user, where the first image thumbnail is used to indicate a first image, and the first image may be an image corresponding to an object other than the first editable object in the first file; a processor 1010 operable to generate a first object from feature information of the first object stored in advance in response to a first input received by the user input unit 1007; a display unit 1006 operable to display a first image and display the first object generated by the processor 1010 in an editable form on the first image; wherein the first object comprises any one of: the user triggers the selected object in the first file, all the objects which can be edited in the first file, and the objects which are edited in the first file within the preset time length.
In the electronic device provided by the embodiment of the application, when the user triggers the display of the first image corresponding to the object other than the first object capable of being edited in the first file, the first image can be displayed, and the first object generated according to the characteristic information of the first object stored in advance can be displayed on the first image in an editable form, so that the first object can be edited without matting or shielding any object in the first image under the condition that the first image is kept unchanged, and therefore, in a scene that a certain/some objects (such as the first object) related to the image are required to be edited for multiple times, the image processing method provided by the embodiment of the application not only can simplify the process of editing the image, but also can improve the display effect of the edited image.
Optionally, in the embodiment of the present application, the user input unit 1007 may be further configured to receive a second input of the first file by the user before receiving the first input of the first image thumbnail; a processor 1010 operable to acquire feature information of a first object from a first file and acquire a first image in response to a second input received by the user input unit 1007; the memory 1009 may be used to store the feature information of the first object and the first image acquired by the processor 1010.
In the electronic device provided in the embodiment of the present application, when a user needs to repeatedly modify some editable objects in a file, the user may perform a second input on the file, so as to trigger the electronic device to acquire and store feature information of an object (for example, a first object) that needs to be repeatedly modified, and acquire and store images (for example, the first image) corresponding to other objects in the file. Therefore, in a scene requiring multiple editing of a certain/some object (such as the first object) related to the image, the image processing method provided by the embodiment of the application not only can simplify the image editing process, but also can improve the display effect of the edited image.
Optionally, in the embodiment of the present application, when the first file is a spreadsheet, the feature information of the first object is feature information of N cells in the spreadsheet, and N may be a positive integer. The processor 1010 may be specifically configured to generate N cells according to the feature information of the N cells stored in advance; a display unit 1006, which may be specifically configured to display the first image, and display N cells generated by the processor 1010 in the form of text boxes on the first image; the display positions of the N cells in the first image correspond to the display positions of the N cells in the first file.
In the electronic device provided by the embodiment of the application, when the electronic device displays the first image generated according to the electronic form, since the electronic device can generate N cells according to the feature information of the N cells stored in advance, and display the first image, and the N cells are displayed on the first image in the form of text boxes, the N cells can be edited under the condition that the content in the first image is ensured to be unchanged; therefore, the editing efficiency can be improved, and the man-machine interaction performance is improved.
Alternatively, in the embodiment of the present application, the user input unit 1007 may be further configured to display a first image on the display unit 1006, and after displaying the first object on the first image in an editable form, receive a third input of the first object from the user; a processor 1010 operable to perform a target operation corresponding to a third input on the first object in response to the third input received by the user input unit 1007; wherein the target operation may include any one of: modifying the content in the first object, updating the display position of the first object, deleting the first object.
In the embodiment of the application, the electronic device can edit the first object displayed in the first image under the condition that the content in the first image is unchanged, so that the editing efficiency can be improved, and the display effect of the image obtained after the first object is edited can be improved.
Optionally, in the embodiment of the present application, the processor 1010 may be specifically configured to perform, on the first object, a target operation corresponding to the third input if the first biometric information is consistent with the first preset biometric information, where the first biometric information is biometric information of the user performing the third input.
In the embodiment of the application, the authorized user can trigger the editing of the first object displayed in the editable form in the first image, so that the safety of the image can be improved on the basis of simplifying the image editing process.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 1009 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 1010 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instructions that, when executed by a processor, implement each process of the above image processing method embodiment and achieve the same technical effects. To avoid repetition, details are not described here again.
The processor is the processor in the electronic device of the foregoing embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instructions to implement each process of the above image processing method embodiment and achieve the same technical effects. To avoid repetition, details are not described here again.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising one … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing functions in the order shown or discussed; functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functionality involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (8)

1. An image processing method, the method comprising:
receiving a first input on a first image thumbnail, wherein the first image thumbnail indicates a first image, and the first image is an image corresponding to the objects in a first file other than a first object, the first object being editable;
generating the first object according to the pre-stored characteristic information of the first object in response to the first input;
displaying the first image and displaying the first object in an editable form on the first image;
wherein the first object comprises any one of the following: triggering the selected objects, all the objects which can be edited in the first file and the objects which are edited in the first file in a preset time period by a user in the first file;
before the receiving the first input to the first image thumbnail, the method further comprises:
receiving a second input to the first file;
acquiring, in response to the second input, characteristic information of the first object from the first file, and acquiring the first image according to the objects in the first file other than the first object;
storing the characteristic information of the first object and the first image.
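The two-phase flow of claim 1 can be sketched as follows. All names (`FileObject`, `process_second_input`, `process_first_input`) are hypothetical, and `render_image` merely records what a real rasterizer would draw; the sketch only illustrates splitting the first file into the editable first object's characteristic information and a first image rendered from the remaining objects, then regenerating the first object when the thumbnail is opened:

```python
from dataclasses import dataclass


@dataclass
class FileObject:
    """A hypothetical element of the first file (e.g. a text block or cell)."""
    object_id: str
    content: str
    position: tuple  # (x, y) display position in the file
    editable: bool = False


@dataclass
class StoredState:
    """Characteristic information of the first object plus the first image."""
    feature_info: list
    first_image: list


def render_image(objects):
    # Stand-in for real rasterization: record what would be drawn where.
    return [(o.object_id, o.position) for o in objects]


def process_second_input(file_objects):
    """Second input: extract the editable first object's characteristic
    information and render the remaining objects into the first image."""
    first_object = [o for o in file_objects if o.editable]
    others = [o for o in file_objects if not o.editable]
    feature_info = [(o.object_id, o.content, o.position) for o in first_object]
    return StoredState(feature_info, render_image(others))


def process_first_input(stored):
    """First input on the thumbnail: display the first image and regenerate
    the first object, in editable form, from the stored information."""
    regenerated = [FileObject(i, c, p, editable=True)
                   for (i, c, p) in stored.feature_info]
    return stored.first_image, regenerated
```

In use, `process_second_input` runs once when the file is exported, and `process_first_input` runs each time the thumbnail is opened, so the non-editable content never has to be re-parsed from the file.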
2. The method of claim 1, wherein the first file is a spreadsheet, the characteristic information of the first object is characteristic information of N cells in the spreadsheet, and N is a positive integer;
the generating the first object according to the pre-stored characteristic information of the first object includes:
generating the N cells according to the stored characteristic information of the N cells;
the displaying the first image and displaying the first object in an editable form on the first image includes:
displaying the first image, and displaying the N cells on the first image in a text box form;
the display positions of the N cells in the first image correspond to the display positions of the N cells in the first file.
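A minimal sketch of this step, assuming hypothetical dict fields (`value`, `row`, `col`); real cell characteristic information would also carry formatting such as font and borders:

```python
def cells_to_text_boxes(cell_features):
    """Regenerate N spreadsheet cells from stored characteristic information
    and describe them as editable text boxes whose display positions on the
    first image correspond to the cells' positions in the first file."""
    return [
        {
            "type": "text_box",
            "text": cell["value"],
            "row": cell["row"],   # position carried over from the spreadsheet
            "col": cell["col"],
            "editable": True,
        }
        for cell in cell_features
    ]
```

Because each text box keeps its cell's row and column, the overlay lands exactly where the cell appeared in the spreadsheet, so the composite view looks like the original file.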
3. The method of claim 1, wherein after the displaying the first image and displaying the first object in an editable form on the first image, the method further comprises:
receiving a third input to the first object;
responsive to the third input, performing a target operation on the first object corresponding to the third input;
wherein the target operation includes any one of the following: modifying content in the first object, updating a display position of the first object, and deleting the first object.
4. A method according to claim 3, wherein said performing a target operation on said first object corresponding to said third input comprises:
executing a target operation corresponding to the third input on the first object in a case that first biometric information is consistent with first preset biometric information, wherein the first biometric information is biometric information of a user executing the third input.
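The three target operations of claim 3 amount to a small dispatch over the first object, sketched below with the object represented as a plain dict (a hypothetical representation, not the patent's own data structure); deletion returns `None` to signal that the object should be removed from the overlay:

```python
def perform_target_operation(first_object, operation, **kwargs):
    """Dispatch a target operation on the first object: modify its content,
    update its display position, or delete it."""
    if operation == "modify_content":
        first_object["content"] = kwargs["new_content"]
        return first_object
    if operation == "update_position":
        first_object["position"] = kwargs["new_position"]
        return first_object
    if operation == "delete":
        return None  # caller removes the object from the displayed overlay
    raise ValueError(f"unknown target operation: {operation}")
```

In a full implementation this dispatch would run only after the biometric check of claim 4 succeeds.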
5. An image processing apparatus, characterized in that the apparatus comprises: the device comprises a receiving module, a generating module, a display module, an acquisition module and a storage module;
the receiving module is used for receiving a first input on a first image thumbnail, wherein the first image thumbnail indicates a first image, and the first image is an image corresponding to the objects in a first file other than a first object, the first object being editable;
the generating module is used for responding to the first input received by the receiving module and generating the first object according to the pre-stored characteristic information of the first object;
the display module is used for displaying the first image and displaying the first object generated by the generation module on the first image in an editable form;
wherein the first object comprises any one of the following: triggering the selected objects, all the objects which can be edited in the first file and the objects which are edited in the first file in a preset time period by a user in the first file;
the receiving module is further configured to receive a second input to the first file before receiving the first input to the first image thumbnail;
the acquisition module is used for responding to the second input received by the receiving module, acquiring the characteristic information of the first object from the first file and acquiring the first image according to the objects except the first object in the first file;
the storage module is used for storing the characteristic information of the first object and the first image acquired by the acquisition module.
6. The apparatus of claim 5, wherein the first file is a spreadsheet, the characteristic information of the first object is characteristic information of N cells in the spreadsheet, and N is a positive integer;
the generating module is specifically configured to generate the N cells according to the pre-stored feature information of the N cells;
the display module is specifically configured to display the first image and to display, in text-box form on the first image, the N cells generated by the generating module;
the display positions of the N cells in the first image correspond to the display positions of the N cells in the first file.
7. The apparatus of claim 5, further comprising an execution module;
the receiving module is further configured to receive a third input to the first object after the first image is displayed on the display module and the first object is displayed on the first image in an editable form;
the execution module is used for responding to the third input received by the receiving module and executing target operation corresponding to the third input on the first object;
wherein the target operation includes any one of: modifying content in the first object, updating a display position of the first object, and deleting the first object.
8. The apparatus of claim 7, wherein
the execution module is specifically configured to execute, on the first object, a target operation corresponding to the third input when the first biometric information is consistent with the first preset biometric information, where the first biometric information is biometric information of a user executing the third input.
CN202011617936.0A 2020-12-30 2020-12-30 Image processing method and device Active CN112734882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011617936.0A CN112734882B (en) 2020-12-30 2020-12-30 Image processing method and device

Publications (2)

Publication Number Publication Date
CN112734882A CN112734882A (en) 2021-04-30
CN112734882B true CN112734882B (en) 2024-03-05

Family

ID=75607984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011617936.0A Active CN112734882B (en) 2020-12-30 2020-12-30 Image processing method and device

Country Status (1)

Country Link
CN (1) CN112734882B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007134918A (en) * 2005-11-10 2007-05-31 Pioneer Electronic Corp Image data processing apparatus, method therefor, and program therefor
CN102890604A (en) * 2011-07-21 2013-01-23 腾讯科技(深圳)有限公司 Method and device for marking target object at machine side in man-machine interaction
CN106126053A (en) * 2016-05-27 2016-11-16 努比亚技术有限公司 Mobile terminal control device and method
CN108010106A (en) * 2017-11-22 2018-05-08 努比亚技术有限公司 A kind of method for displaying image, terminal and computer-readable recording medium
JP2019193148A (en) * 2018-04-26 2019-10-31 キヤノン株式会社 Information processing device and control method and program thereof
CN111857512A (en) * 2020-07-17 2020-10-30 维沃移动通信有限公司 Image editing method and device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6903751B2 (en) * 2002-03-22 2005-06-07 Xerox Corporation System and method for editing electronic images
KR20200101036A (en) * 2019-02-19 2020-08-27 삼성전자주식회사 Electronic device and method providing content associated with image to application

Also Published As

Publication number Publication date
CN112734882A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN113766064B (en) Schedule processing method and electronic equipment
CN113079316B (en) Image processing method, image processing device and electronic equipment
CN112672061A (en) Video shooting method and device, electronic equipment and medium
CN114518822A (en) Application icon management method and device and electronic equipment
CN112698762B (en) Icon display method and device and electronic equipment
CN112734882B (en) Image processing method and device
CN116107531A (en) Interface display method and device
CN113360060B (en) Task realization method and device and electronic equipment
CN112162805B (en) Screenshot method and device and electronic equipment
CN115037874A (en) Photographing method and device and electronic equipment
CN114679546A (en) Display method and device, electronic equipment and readable storage medium
CN114845171A (en) Video editing method and device and electronic equipment
CN114416269A (en) Interface display method and display device
CN114302009A (en) Video processing method, video processing device, electronic equipment and medium
CN117331469A (en) Screen display method, device, electronic equipment and readable storage medium
CN117149019A (en) File editing method, terminal, electronic device and storage medium
CN117311885A (en) Picture viewing method and device
CN117369934A (en) Interface display method, device, equipment and storage medium
CN115904095A (en) Information input method and device, electronic equipment and readable storage medium
CN117082056A (en) File sharing method and electronic equipment
CN116774882A (en) File display method and file display device
CN113568553A (en) Display method and device
CN114979482A (en) Shooting method, shooting device, electronic equipment and medium
CN115470185A (en) File naming method and device, electronic equipment and storage medium
CN114840109A (en) Information display method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant