CN113093960A - Image editing method, editing device, electronic device and readable storage medium - Google Patents

Image editing method, editing device, electronic device and readable storage medium

Info

Publication number
CN113093960A
CN113093960A (application CN202110409466.7A)
Authority
CN
China
Prior art keywords
image
edited
input
editing
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110409466.7A
Other languages
Chinese (zh)
Other versions
CN113093960B (en)
Inventor
Chen Lin (陈琳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weiwo Software Technology Co ltd
Original Assignee
Nanjing Weiwo Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weiwo Software Technology Co ltd filed Critical Nanjing Weiwo Software Technology Co ltd
Priority to CN202110409466.7A priority Critical patent/CN113093960B/en
Publication of CN113093960A publication Critical patent/CN113093960A/en
Application granted granted Critical
Publication of CN113093960B publication Critical patent/CN113093960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image editing method, an editing apparatus, an electronic device and a readable storage medium, belonging to the technical field of image processing. The image editing method comprises: receiving a first input while an image to be edited is displayed; in response to the first input, displaying an editable object, the editable object being recognized from the image to be edited; receiving a second input to the editable object; and in response to the second input, displaying a target image, the target image being obtained by editing the image to be edited based on the second input.

Description

Image editing method, editing device, electronic device and readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image editing method, an editing device, an electronic device and a readable storage medium.
Background
In the related art, when a user of an electronic device such as a mobile phone encounters content of interest, the user often captures the screen and shares the screenshot. Current screen capture grabs either the whole screen or a partial area of it, and afterwards only simple annotation can be applied to the screenshot.
When the screenshot contains private content, or content the user does not want to share, it can only be edited through third-party software, for example by painting it over with brushes, mosaics and the like. On one hand, the operation is complex and requires third-party software; on the other hand, other content is easily painted over as well, and the painted-over picture becomes non-uniform in style, which affects its appearance.
How to edit a screenshot quickly and conveniently is therefore a technical problem to be solved urgently.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image editing method, an editing apparatus, an electronic device, and a readable storage medium, which can quickly and conveniently edit a picture without using third-party software.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image editing method, including:
receiving a first input under the condition of displaying an image to be edited;
responding to the first input, displaying an editable object, wherein the editable object is obtained by recognition according to an image to be edited;
receiving a second input to the editable object;
and responding to the second input, and displaying a target image, wherein the target image is obtained by editing the image to be edited based on the second input.
In a second aspect, an embodiment of the present application provides an image editing apparatus, including:
a receiving unit configured to receive a first input in a case where an image to be edited is displayed;
the display unit is used for responding to the first input and displaying an editable object, and the editable object is obtained by recognition according to the image to be edited;
the receiving unit is also used for receiving a second input of the editable object;
the display unit is also used for responding to the second input and displaying the target image, and the target image is obtained by editing the image to be edited based on the second input.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a program or instructions, and the processor is configured to implement the steps of the image editing method provided in the first aspect when executing the program or instructions.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the image editing method as provided in the first aspect.
In the embodiment of the application, while an image to be edited is displayed, a user may determine an editable object from the image to be edited through a first input. The image to be edited may be a screen capture obtained through a screenshot input that controls the electronic device to capture the screen; correspondingly, the editable object is an image portion or image content obtained from that screen capture.
Further, through a second input the user edits the image to be edited according to the editable object and the corresponding preset editing mode, obtaining the edited target image. For example, when the image to be edited is a screenshot, the preset editing mode may be quick editing of the text information in the screenshot, or operations such as cropping and splicing applied to the whole screenshot or a part of it.
Specifically, when the preset editing mode is editing the text information in the image to be edited, the editable object corresponds to that text information. An edit box corresponding to the text information may then be displayed, in which the user may modify, add to, or delete the text of the image to be edited. When the preset editing mode is cropping and splicing the image to be edited, a selection box may be displayed according to the user's first input. The user can operate the selection box by dragging, pulling, gesture operations and the like, so as to frame the area to be deleted or retained in the image to be edited; that area is the editable object. If the user chooses to delete the picture content in the area, the system automatically merges the screenshot parts outside the area to obtain the edited screenshot, which no longer includes the content the user deleted.
According to the embodiments of the application, after the image to be edited is obtained, the preset editing modes associated with it are displayed, guiding the user to edit the image quickly and conveniently according to the user's own needs. No third-party software needs to be invoked in the process, which significantly improves the usability and efficiency of image editing and improves the user experience.
Drawings
FIG. 1 shows a flow diagram of an image editing method according to an embodiment of the present application;
FIG. 2 shows one of the schematic diagrams of an image editing method according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of an image editing method according to an embodiment of the present application;
FIG. 4 is a third diagram illustrating an image editing method according to an embodiment of the present application;
FIG. 5 shows a fourth schematic diagram of an image editing method according to an embodiment of the present application;
FIG. 6 shows a fifth schematic diagram of an image editing method according to an embodiment of the present application;
FIG. 7 shows a sixth schematic of an image editing method according to an embodiment of the present application;
FIG. 8 shows a seventh schematic diagram of an image editing method according to an embodiment of the present application;
FIG. 9 shows an eighth schematic diagram of an image editing method according to an embodiment of the present application;
FIG. 10 shows a ninth schematic of an image editing method according to an embodiment of the present application;
fig. 11 is a block diagram showing a configuration of an image editing apparatus according to an embodiment of the present application;
FIG. 12 shows a block diagram of an electronic device according to an embodiment of the application;
fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequential or chronological order. It should be appreciated that data so termed may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image editing method, the editing apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
In some embodiments of the present application, there is provided an image editing method, and fig. 1 shows a flowchart of an image editing method according to an embodiment of the present application, as shown in fig. 1, the method including:
Step 102: receiving a first input under the condition of displaying an image to be edited;
Step 104: responding to the first input, and displaying an editable object which is obtained by recognition according to the image to be edited;
Step 106: receiving a second input to the editable object;
Step 108: responding to the second input, and displaying a target image, wherein the target image is obtained by editing the image to be edited based on the second input.
In the embodiment of the application, while an image to be edited is displayed, a user may determine an editable object from the image to be edited through a first input. The image to be edited may be a screen capture obtained through a screenshot input that controls the electronic device to capture the screen; correspondingly, the editable object is an image portion or image content obtained from that screen capture.
Further, through a second input the user edits the image to be edited according to the editable object and the corresponding preset editing mode, obtaining the edited target image. For example, when the image to be edited is a screenshot, the preset editing mode may be quick editing of the text information in the screenshot, or operations such as cropping and splicing applied to the whole screenshot or a part of it.
Specifically, when the preset editing mode is editing the text information in the image to be edited, the editable object corresponds to that text information. An edit box corresponding to the text information may then be displayed, in which the user may modify, add to, or delete the text of the image to be edited. When the preset editing mode is cropping and splicing the image to be edited, a selection box may be displayed according to the user's first input. The user can operate the selection box by dragging, pulling, gesture operations and the like, so as to frame the area to be deleted or retained in the image to be edited; that area is the editable object. If the user chooses to delete the picture content in the area, the system automatically merges the screenshot parts outside the area to obtain the edited screenshot, which no longer includes the content the user deleted.
According to the embodiments of the application, after the image to be edited is obtained, the preset editing modes associated with it are displayed, guiding the user to edit the image quickly and conveniently according to the user's own needs. No third-party software needs to be invoked in the process, which significantly improves the usability and efficiency of image editing and improves the user experience.
In some embodiments of the present application, the editable object comprises a first image portion and a second image portion, and the second input is an editing input. Displaying the target image in response to the editing input comprises:
in response to a second input, determining a target area on the image to be edited, wherein the width of the target area is the same as that of the image to be edited;
dividing, through the target area, the image to be edited into a first image portion located inside the target area and at least one second image portion located outside the target area;
editing the first image part and the second image part according to an editing mode corresponding to editing input;
and displaying the edited target image.
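The steps above can be sketched in a few lines. The following is a minimal pure-Python illustration, modelling an image as a list of pixel rows; the function and parameter names are illustrative assumptions, not taken from the patent. Because the target area spans the full image width, a row range alone describes it.

```python
def split_by_target_area(image_rows, y0, y1):
    """Split an image (list of pixel rows) by a full-width target area.

    Rows y0..y1-1 form the first image portion (inside the area); the rows
    above and below form up to two second image portions (outside it).
    """
    first_part = image_rows[y0:y1]
    second_parts = [part for part in (image_rows[:y0], image_rows[y1:]) if part]
    return first_part, second_parts
```

When the target area touches the top or bottom edge of the image, only one second image portion results, which matches the "at least one second image part" wording above.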
In an embodiment of the present application, the editable object comprises a first image portion and a second image portion, both of which are obtained from the image to be edited. Specifically, after the first input is received, a target area is determined on the image to be edited displayed on the display interface of the electronic device.
The first input may be, for example, a frame selection, a gesture operation, or pulling a selection frame, by which a target area is determined on the image to be edited. It can be understood that, to ensure the integrity of the edited image, the width of the target area should be consistent with the width of the image to be edited.
Once determined, the target area divides the image to be edited into a first image portion located inside the target area and second image portions located outside it.
Specifically, fig. 2 shows one of the schematic diagrams of an image editing method according to an embodiment of the present application. As shown in part A of fig. 2, the chat window displayed in the current display interface is captured to form the image to be edited 302. Further, through the first input, the target area 304 is selected in the chat window and, as shown in part B of fig. 2, can be displayed distinctively. Meanwhile, through gesture operations, the position and size of the target area 304 can be adjusted so as to determine the target object to be edited.
After the target area is determined, as shown in part B of fig. 2, the image to be edited is divided into three parts: the first image portion located inside the target area 304, and two second image portions located outside the target area 304, above and below it respectively.
Meanwhile, preset editing modes corresponding to the target area 304, such as select, invert selection, delete and merge, are displayed at the bottom of the screen. By choosing a corresponding editing mode through the second input, the first image portion and the second image portions are edited accordingly, finally yielding the edited target image.
By applying this embodiment, the image to be edited can be edited quickly and conveniently; throughout the editing process the user does not need to copy the image into third-party software, which significantly improves the usability and efficiency of image editing and improves the user experience.
In some embodiments of the present application, displaying at least one editable object associated with the target editing mode includes: displaying the first image portion and the second image portion differently.
In the embodiment of the present application, the first image portion and the second image portion may be displayed distinctively, thereby marking the editable object. For example, when the user performs cropping and merging edits on the image to be edited, if the user wishes to retain the first image portion, the first image portion is the editable object; if the user wishes to retain the second image portion, the second image portion is the editable object.
Fig. 3 shows a second schematic diagram of an image editing method according to an embodiment of the present application. The first image portion 402 is shaded to distinguish it from the rest of the original image; it can be understood that the display modes of the first image portion 402 and the second image portion 404 can be swapped.
In some embodiments of the present application, the preset editing mode includes deleting the image, the editable object includes a first image portion and a second image portion, and editing the image to be edited according to the editing input to the editable object to obtain the corresponding target image includes:
determining the second image portion as the first target image in a case where the number of the second image portions is one;
and, in a case where there are multiple second image portions, splicing the multiple second image portions according to the position order of each second image portion in the image to be edited to obtain the edited first target image.
In the embodiment of the present application, the editable object includes the first image portion, and the target editing mode is deleting the image, that is, deleting the first image portion inside the target area of the image to be edited.
After the first image portion is deleted, if only one second image portion remains, that second image portion is the first target image obtained by the editing operation. If there are multiple second image portions, as shown in fig. 3 where the first image portion lies in the middle of the image to be edited, deleting the first image portion leaves two second image portions; these are then spliced to obtain the spliced image, i.e., the first target image. Fig. 4 shows a third schematic diagram of an image editing method according to an embodiment of the present application; the spliced image is shown in fig. 4.
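The "delete image" splicing step can be sketched as follows. This is a minimal pure-Python illustration, again modelling an image as a list of pixel rows; the function name is an assumption for illustration only.

```python
def apply_delete_edit(image_rows, y0, y1):
    """'Delete image' mode: remove the first image portion (rows inside the
    target area) and splice the remaining second image portions in their
    original top-to-bottom order, yielding the first target image."""
    second_parts = [image_rows[:y0], image_rows[y1:]]
    return [row for part in second_parts for row in part]
```

When only one second image portion remains (the target area touched an edge), the splice degenerates to that single portion, matching the single-portion case above.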
In some embodiments of the present application, the editing mode includes retaining the image, and editing the image to be edited according to the editing input to the editable object to obtain the corresponding target image includes:
determining the first image portion as the second target image in a case where the number of the first image portions is one;
and under the condition that the number of the first image parts is multiple, splicing and editing the multiple first image parts according to the position sequence of each first image part in the image to be edited to obtain an edited second target image.
In the embodiment of the present application, the editable object includes the first image portion, and the target editing mode is retaining the image, that is, the first image portions inside the target areas of the image to be edited are retained and spliced, and the second image portions outside the target areas are deleted.
Specifically, the user frames one or more target areas on the image to be edited, thereby obtaining one or more first image portions. If the user selects a single first image portion, then after all second image portions are deleted, that first image portion is the second target image obtained by the editing operation. If the user selects multiple first image portions, the first image portions are each cut out and spliced according to their position order in the image to be edited, obtaining the spliced image, i.e., the second target image.
In some embodiments of the present application, the image to be edited includes first text information, the first input includes an image recognition input, the editable object includes editable text corresponding to the first text information, and displaying at least one editable object associated with the target editing mode includes:
in response to the image recognition input, performing image recognition on the image to be edited to acquire the first text information in the image to be edited and determine the display order corresponding to the first text information;
and displaying an edit box, and displaying the editable text in the edit box according to the display order.
In the embodiment of the application, the text in the image to be edited obtained after the screenshot can be recognized, so that the first text information contained in the image to be edited can be edited. The image to be edited may be a screen capture obtained through a screen capture operation, or it may be another image.
Specifically, after the user captures all or part of the current display interface as the image to be edited, text editing is selected as the target editing mode. The system then recognizes and extracts the text in the image to be edited, for example by means of an OCR (Optical Character Recognition) algorithm, and displays the editable object corresponding to the text editing function, that is, the editable text of the first text information in the edit box.
Fig. 5 shows a fourth schematic diagram of an image editing method according to an embodiment of the present application. As shown in fig. 5, first text information 702 is displayed on the image to be edited together with an edit box 704, and the editable text corresponding to the first text information 702 is displayed in the edit box 704. The display order of the editable text in the edit box 704 is the same as the display order of the first text information 702 in the image to be edited.
Alternatively, when the user moves the input cursor over a sentence in the editable text, the position where the sentence appears in the first text information 702 may be highlighted at the same time.
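Preserving the display order when filling the edit box can be illustrated with a small sketch. The OCR result format here (one dict per recognized line with its text and its top coordinate) is a hypothetical representation, not something the patent specifies; real OCR libraries return richer structures.

```python
def build_edit_box(recognized_lines):
    """Fill the edit box: recognized lines keep the display order they had
    in the image, reconstructed by sorting on each line's top coordinate."""
    ordered = sorted(recognized_lines, key=lambda line: line["top"])
    return [line["text"] for line in ordered]
```

Keeping this order is what lets the cursor position in the edit box be mapped back to a highlight position in the first text information 702.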
In some embodiments of the present application, displaying the target image in response to the second input includes:
receiving a text editing input to the editable text, wherein the text editing input comprises changing text, deleting text and/or adding new text;
editing the editable text according to the text editing input to obtain edited second text information;
and updating the first text information of the image to be edited according to the second text information to obtain an edited third target image.
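The three editing inputs named above (change, delete, add) can be modelled as operations on the lines of the editable text. The operation tuples below are a hypothetical encoding chosen for illustration; the patent does not prescribe a concrete representation.

```python
def apply_text_edits(lines, edits):
    """Apply change/delete/add operations to the editable text (a list of
    lines), producing the second text information."""
    lines = list(lines)  # leave the original first text information intact
    for op, idx, *payload in edits:
        if op == "change":
            lines[idx] = payload[0]
        elif op == "delete":
            del lines[idx]
        elif op == "add":
            lines.insert(idx, payload[0])
    return lines
```

The result is the second text information, which then replaces the first text information in the image to yield the third target image.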
In the embodiment of the application, the user can apply editing input to the editable text in the edit box, such as deleting some characters, modifying some characters or adding characters. When characters are deleted, if a whole line is deleted, the further adjustment mode can be determined according to the user's selection.
Specifically, when the image to be edited contains text information, the line breaks corresponding to the text information are obtained, and the background image of the text display area in the image to be edited is segmented according to the number of line breaks, each of the resulting background images corresponding to one line break.
When the editing input is a deletion input and the text targeted by the deletion covers a whole line, the first line break corresponding to the deleted text is obtained, and the background image corresponding to that first line break in the image to be edited is deleted.
When the editing input is an addition input and the added text includes a whole line, a second line break is generated according to the position information of the added text within the first text information, and a corresponding background image is generated in the image to be edited according to the position information and the second line break.
Fig. 6 shows a fifth schematic diagram of an image editing method according to an embodiment of the present application: if the user wants to keep an empty line after deleting the text, the height of the background line where the text appeared in the original image is not adjusted, so the height of the target image stays the same; the effect is shown in fig. 6. Fig. 7 shows a sixth schematic diagram of an image editing method according to an embodiment of the present application: if the user does not want to keep the blank line after deleting a whole line of text, the height of the background is adjusted to remove the blank line; the effect is shown in fig. 7.
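The two whole-line deletion behaviours (keep the empty line, or remove the line and its background strip) can be sketched as follows. The text is modelled as a list of lines and the background as a list of per-line strip heights; these names and the representation are illustrative assumptions.

```python
def delete_whole_line(lines, strip_heights, index, keep_empty_line):
    """Whole-line deletion with the two behaviours of figs. 6 and 7:
    keep an empty line (background strip height unchanged) or remove the
    line together with its background strip (target image gets shorter)."""
    lines = list(lines)
    strip_heights = list(strip_heights)
    if keep_empty_line:
        lines[index] = ""          # text gone, blank line and strip remain
    else:
        del lines[index]           # line removed together with
        del strip_heights[index]   # its background strip
    return lines, strip_heights, sum(strip_heights)
```

The returned total height makes the effect of the user's choice explicit: it is unchanged in the fig. 6 case and reduced by one strip in the fig. 7 case.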
In some embodiments of the present application, the display manner of the second text information in the third target image is the same as the display manner of the first text information in the image to be edited.
In the embodiment of the present application, specifically, after the user finishes modifying the text and taps Save, the original first text information is replaced with the user's edited second text information. To blend the text region into the background, the region can be filled using background color features, either with a solid fill or a gradient fill.
Based on the recognized image, the font, size, and color category of each character in the image are obtained, and the attributes of adjacent characters are used for expansion. In some exemplary embodiments, the text of the original image is displayed in a comment area; the background adjacent to the newly added text is then the background of the comment area, that is, the newly added text takes the background of the comment area, and its text attributes are inherited from the adjacent text. Once the characters and the background color are fully blended, the start point and end point of the text area are determined from the newly added text content. For deleted content, if the text of the current line is removed but the empty line is kept, the background height is unchanged; if the entire line is deleted (no empty line remains), the background height is reduced. In other words, the height or width of the background is increased or decreased according to the number of line breaks in the added or deleted content. Finally, the newly generated text content and its corresponding areas are stored, and the modified text is output as a picture, that is, the third target image. Text editing of the picture is thus achieved without relying on third-party software, which effectively improves image-editing efficiency.
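The solid background fill mentioned above might look like the following pure-Python sketch: an image is treated as a list of pixel rows, and the most frequent pixel value inside the region stands in for the "background color feature". The function names are illustrative only, not the patent's API.

```python
from collections import Counter

def dominant_color(pixels):
    """Most frequent pixel value in a region, used as the fill (background) color."""
    return Counter(pixels).most_common(1)[0][0]

def fill_region(rows, top, bottom, left, right):
    """Erase a text region by flooding it with the region's dominant background color."""
    pixels = [p for row in rows[top:bottom] for p in row[left:right]]
    color = dominant_color(pixels)
    for row in rows[top:bottom]:
        for x in range(left, right):
            row[x] = color
    return rows
```

A gradient fill would interpolate between the colors sampled at the region's edges instead of using a single dominant value.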
In some embodiments of the present application, updating the first text information in the image to be edited to obtain an edited third target image includes:
acquiring a first incidence relation between first character information and corresponding first background information in an image to be edited;
generating a corresponding updated image according to the second text information and the first association relation, wherein the second background information in the updated image is matched with the first background information;
and updating the first character information of the image to be edited through the updating image to obtain a third target image.
In the embodiment of the application, after the text is edited, a first association relationship between the first character information contained in the original image to be edited and the background information at its position is obtained. The background information includes, for example, a background color, a background pattern, or a background image. The first association relationship is the combination manner of the first character information and the background information.
The second character information is then processed according to the first association relationship, that is, according to the combination manner of the first character information and the background information. Specifically, the second character information is combined with the original background information to obtain an updated image corresponding to the position of the first character information. The updated image includes the second character information and the background information, and the combination manner of the second character information and the background information is the same as that of the first character information and the background information.
In the third target image, the display manner of the second character information and its combination with the background are thus the same as in the original image, so a high-quality target image that closely matches the original can be obtained.
In some embodiments of the present application, updating the first text information in the image to be edited to obtain an edited third target image includes:
acquiring a second association relation between the image to be edited and the first character information;
and generating a third target image according to the image to be edited, the second association relation and the second character information.
In the embodiment of the application, after the text is edited, a second association relationship between the original image to be edited and the first character information therein is first obtained, where the second association relationship is the combination relationship between the first character information and the original image, that is, the image information that remains after the first character information is separated from the original image.
The second character information is combined with the image to be edited according to the second association relationship, that is, the first character information in the original image to be edited is replaced with the updated second character information to obtain the corresponding third target image. The character information in the third target image is displayed in the same manner as in the original image, so a high-quality, closely matching target image can be obtained.
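One way to picture the second association relationship — the residual image left after the first character information is separated out — is the following hypothetical sketch: a text mask is split off from the image, and an edited mask is later re-composited over the retained residue. All names and the single-color text model are assumptions for illustration.

```python
def separate_layers(rows, text_color):
    """Split an image into a text mask and the residual (non-text) image information."""
    mask = [[1 if p == text_color else 0 for p in row] for row in rows]
    residue = [[None if p == text_color else p for p in row] for row in rows]
    return mask, residue

def composite(new_mask, residue, text_color, bg_color):
    """Re-combine an edited text mask with the retained residue; holes left by the
    old text are patched with the background color."""
    return [[text_color if m else (p if p is not None else bg_color)
             for m, p in zip(mrow, prow)]
            for mrow, prow in zip(new_mask, residue)]
```

Replacing the old mask with a mask rendered from the second character information yields the third target image while leaving the rest of the picture untouched.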
In some embodiments of the present application, the image to be edited is an expression image, and in response to the first input, displaying the editable object includes:
displaying at least one gesture image associated with the expression image according to the first input, wherein the size of the gesture image is smaller than or equal to that of the expression image;
in response to a second input, displaying a target image, comprising:
and according to the second input, determining a target gesture image in the gesture image, editing the target gesture image and the expression image to obtain an edited fourth target image, wherein the fourth target image comprises at least part of the expression image and the gesture image.
In the embodiment of the application, after the user taps the emoticon button during a chat, an emoticon panel pops up in which a plurality of expression images are arranged and displayed. After the user selects one of the expression images, a floating interface with an associated secondary menu is expanded through the second input, and a plurality of matching gesture images are displayed in the floating interface.
It can be understood that the first input on the expression image is a gesture input that can be distinguished by the operation gesture: for example, a slide-up operation directly sends the selected expression image, while a slide-down operation is treated as the first input and brings up the floating interface.
Further, one of the gesture images is selected through a second input, and the gesture image and the expression image are combined to obtain a combined fourth target image.
Specifically, the expression images may be facial expression images such as "smile" or "love". After one expression image is selected and expression combination editing is chosen as the target editing mode, the associated editable objects include a plurality of gesture images, such as a "scissor hand" or a "thumbs up". Through expression combination editing, the facial expression of the expression image can be combined with the first gesture image to obtain a combined expression image that further expresses the user's emotion; for example, combining the expression image with the scissor hand conveys the user's emotion more strongly.
Wherein the expression image is a "primary expression" and the gesture image is an "auxiliary expression"; thus, in some embodiments, the display size of the gesture image is not greater than the display size of the expression image.
Fig. 8 shows a seventh schematic diagram of an image editing method according to an embodiment of the present application, in which a floating interface 1202 is displayed.
In some embodiments of the present application, the second input includes a drag input, and according to the second input, determining a target gesture image in the gesture image, and editing the gesture image and the expression image includes:
determining a target gesture image according to a dragging starting point of dragging input;
and when the drag end point of the drag input is the expression image, performing image combination editing on the target gesture image and the expression image to obtain the fourth target image.
In the embodiment of the application, a user can select a target gesture image from a plurality of gesture images in a floating interface, and merge the gesture image and the expression image in a manner of dragging the target gesture image to a target expression, wherein a starting point of dragging input is the target gesture image, and an end point of the dragging input is the expression image to be edited. Fig. 9 shows an eighth schematic diagram of an image editing method according to an embodiment of the present application, and after merging and editing, a fourth target image 1402 is obtained, specifically as shown in fig. 9, so as to obtain a combined expression image capable of further expressing the emotion of the user, for example, after "happy" and "scissors hand" are combined, the emotion of the user can be expressed more strongly.
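The drag-to-merge step can be illustrated with a minimal overlay routine, assuming images are lists of pixel rows and the gesture image is no larger than the expression image (consistent with the size constraint above). This is an illustrative sketch, not the claimed implementation; the transparent-pixel convention is an assumption.

```python
def overlay(base, sticker, top, left, transparent=0):
    """Paste the gesture image (sticker) onto the expression image (base) at
    (top, left), skipping pixels marked as transparent."""
    out = [row[:] for row in base]  # leave the original expression image intact
    for dy, srow in enumerate(sticker):
        for dx, p in enumerate(srow):
            if p != transparent:
                out[top + dy][left + dx] = p
    return out
```

In the drag interaction, the drag start point selects `sticker` (the target gesture image) and the drag end point selects `base` (the expression image), producing the fourth target image.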
In some embodiments of the present application, the image to be edited includes portrait information, the preset editing mode includes portrait processing, the editable object includes a preset portrait processing program, and displaying at least one editable object associated with the target editing mode includes:
performing image recognition on an image to be edited so as to obtain at least one portrait information in the image to be edited;
and respectively displaying at least one portrait processing program corresponding to each portrait information.
In the embodiment of the application, the image editing method further includes a portrait editing method. Specifically, the image to be edited may be a screenshot image; when the screenshot contains a portrait, the portrait portion can be retouched, for example with face-thinning and skin-whitening beautification, or, when privacy sensitivity is a concern, the facial portion can be given specific processing to desensitize the image information.
Specifically, after determining the image to be edited, image recognition is first performed on the image to be edited, for example, by using a face recognition algorithm, and an image portion containing portrait information is determined in the image to be edited. When the image to be edited only contains one portrait information, the corresponding portrait processing program can be directly displayed, and if the image to be edited contains a plurality of portrait information, the corresponding portrait processing program can be respectively displayed corresponding to each portrait information.
It can be understood that when the image to be edited contains a plurality of portrait information, the portrait information part can be highlighted, and after a user selects a certain portrait information, the corresponding portrait processing program is displayed.
In some embodiments of the present application, editing an image to be edited to obtain a corresponding target image according to an editing input to an editable object includes:
receiving a third selection input for any portrait information;
responding to a third selection input, and selecting a first image area corresponding to the target portrait information in the image to be edited;
receiving a fourth selection input to any portrait processing program associated with the target portrait;
in response to a fourth selection input, determining a target portrait processing procedure;
and editing the first image area through a target portrait processing program to obtain an edited fifth target image.
In the embodiment of the application, a third selection input on portrait information is first received to determine the portrait information on which portrait editing is to be performed. After the target portrait to be edited is selected, a first image area corresponding to the target portrait information is selected in the image to be edited, where the first image area may be the face portion or the overall outline of the target portrait information.
Further, a fourth selection input is received for a person image processing program, which may be a cosmetic retouching image such as face thinning and whitening, or desensitization processing such as adding a mosaic. And after the target portrait processing program is selected, editing the selected first image area to obtain a fifth target image after portrait processing.
Fig. 10 shows a ninth schematic diagram of an image editing method according to an embodiment of the present application. As shown in fig. 10, portrait information 1702 is first identified in the image to be processed; then, according to the user's selection operation, a preset portrait processing program 1704 pops up, and the selected target portrait processing program automatically performs image editing on the target portrait information.
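As an illustration of one such desensitization program, a simple mosaic over the selected first image area can be sketched as follows (pure Python over a list-of-rows image; the block size and all names are hypothetical):

```python
def mosaic(rows, top, bottom, left, right, block=2):
    """Desensitize a face region by replacing each block x block tile with the
    value of its top-left pixel, producing the familiar mosaic effect."""
    out = [row[:] for row in rows]
    for y in range(top, bottom, block):
        for x in range(left, right, block):
            v = out[y][x]
            for dy in range(block):
                for dx in range(block):
                    if y + dy < bottom and x + dx < right:
                        out[y + dy][x + dx] = v
    return out
```

A beautification program would instead apply smoothing or warping to the same first image area; only the per-region processing function differs.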
In the image editing method provided in the embodiment of the present application, the execution subject may be an image editing apparatus, or a control module of the image editing apparatus for executing the image editing method. In the embodiments of the present application, an image editing apparatus executing the image editing method is taken as an example to describe the image editing apparatus provided in the embodiments of the present application.
In some embodiments of the present application, fig. 11 shows a block diagram of an image editing apparatus according to an embodiment of the present application, and as shown in fig. 11, an image editing apparatus 1800 includes:
a receiving unit 1802 for receiving a first input in a case where an image to be edited is displayed;
a display unit 1804 configured to display, in response to the first input, an editable object, which is identified from the image to be edited;
the receiving unit 1802 is further configured to receive a second input to the editable object;
the display unit 1804 is further configured to display, in response to the second input, a target image resulting from editing of the image to be edited based on the second input.
In the embodiment of the application, when an image to be edited is displayed, the user may determine an editable object from the image to be edited through the first input. The image to be edited may be a screen-capture image obtained through a screenshot input that controls the electronic device to capture the screen; correspondingly, the editable object is an image portion or image content obtained from the screen-capture image.
Further, the user edits the image to be edited according to the editable object and the corresponding preset editing mode through second input, and then obtains the edited target image. For example, when the image to be edited is a screenshot image, the preset editing mode may be to quickly edit text information in the screenshot image, or to perform operations such as cropping and splicing on the whole or a part of the screenshot image itself.
Specifically, when the preset editing mode is to edit the text information in the image to be edited, the editable object corresponds to the text information. In this case an edit box corresponding to the text information may be displayed, and the user may modify, add, or delete text of the image to be edited within the edit box. If the preset editing mode is cropping and splicing of the image to be edited, a selection box may be displayed according to the user's first input. The user can manipulate the selection box by dragging, pulling, or gesture operations to frame, within the image to be edited, the area to be deleted or retained; this area is the editable object. If the user selects and deletes the picture content within the area, the system automatically merges and edits the screen-capture portions outside the area to obtain an edited screen-capture picture that excludes the deleted content.
According to the image editing method and device, after the image to be edited is obtained, the preset editing modes associated with the image to be edited are displayed, guiding the user to perform quick and convenient editing operations on the image to be edited according to their own needs. No third-party software needs to be invoked in the process, which significantly improves the usability and efficiency of image editing and the user experience.

In some embodiments of the present application, the editable object comprises a first image portion and a second image portion, the second input is an editing input, and the image editing apparatus 1800 further comprises:
a determining unit 1806, configured to determine, in response to a second input, a target region on the image to be edited, where a width of the target region is the same as a width of the image to be edited; dividing an image to be edited into a first image part located in the target area and at least one second image part located outside the target area through the target area;
an editing unit 1808, configured to edit the first image portion and the second image portion according to an editing manner corresponding to an editing input;
the display unit 1804 is further configured to display the edited target image.
In an embodiment of the present application, the editable object comprises a first image portion and a second image portion, both obtained from the image to be edited. Specifically, after the first input is received, a target area is further determined on the image to be edited displayed on the display interface of the current electronic device.
The first input may be, for example, a frame selection, a gesture operation, or dragging out a selection box, by which a target area is determined on the image to be edited. It can be understood that, to ensure the integrity of the edited image, the width of the target area should match the width of the image to be edited.
Once the target area is determined, it divides the image to be edited into a first image portion located inside the target area and a second image portion located outside the target area.
Specifically, fig. 2 shows one of the schematic diagrams of an image editing method according to an embodiment of the present application. As shown in part A of fig. 2, the chat window displayed in the current display interface is screen-captured to form the image to be edited; here, the image to be edited 302 is the currently displayed chat window. Further, through the first input, the target area 304 is selected in the chat window and, as shown in part B of fig. 2, can be displayed distinctively. Meanwhile, the position and size of the target area 304 can be adjusted through gesture operations so as to determine the target object to be edited.
After the target area is determined, as shown in part B of fig. 2, the image to be edited at this time is divided into three parts, a first image part located inside the target area 304 shown in fig. 2, and two second image parts located outside the target area 304 and respectively located above and below the target area 304.
Meanwhile, preset editing modes corresponding to the target area 304, such as selection, reverse selection, deletion, combination and the like, are displayed at the bottom of the screen. And selecting a corresponding editing mode through the second input, namely correspondingly editing the first image part and the second image part to finally obtain an edited target image.
By applying the method and the device, the image to be edited can be quickly and conveniently edited, and in the whole editing process, a user does not need to copy the image to third-party software, so that the usability and efficiency of image editing can be obviously improved, and the use experience of the user is improved.
In some embodiments of the present application, the display unit 1804 is further configured to display the first image portion and the second image portion differently.
In the embodiment of the present application, the first image portion and the second image portion may be displayed distinctively, thereby marking an editable object. For example, in the case where the user performs cropping, merging editing on an image to be edited, when the user wishes to retain a first image portion, the first image portion is the editable object, and when the user wishes to retain a second image portion, the second image portion is the editable object.
Fig. 3 shows a second schematic diagram of an image editing method according to an embodiment of the present application. The first image portion 402 is shown shaded over the original image; it can be understood that the display modes of the first image portion 402 and the second image portion 404 can be swapped.
In some embodiments of the present application, the preset editing mode includes deleting an image, the editable object includes a first image portion and a second image portion, and the determining unit 1806 is further configured to determine the second image portion as the first target image if the number of the second image portions is one;
the editing unit 1808 is further configured to, when the number of the second image portions is multiple, perform splicing editing on the multiple second image portions according to a position sequence of each second image portion in the image to be edited, so as to obtain an edited first target image.
In the embodiment of the present application, the editable object includes a first image portion, and the target editing mode is to delete the image, that is, to delete the first image portion in the target area in the image to be edited.
After deleting the first image portion, if the number of the remaining second image portions is one, that is, there is only one second image portion, the second image portion is the first target image obtained after the editing operation is performed. If the number of the second image portions is multiple, as shown in fig. 3, the first image portion is located in the middle of the image to be edited, after the first image portion is deleted, the image to be edited is divided into two second image portions, and at this time, the two second image portions are spliced and edited to obtain a spliced image, i.e., the first target image. Fig. 4 shows a third schematic diagram of an image editing method according to an embodiment of the present application, and a spliced image is shown in fig. 4.
In some embodiments of the present application, the editing mode includes retaining an image, and the determining unit 1806 is further configured to determine the first image portion as the second target image if the number of first image portions is one;
the editing unit 1808 is further configured to, when the number of the first image portions is multiple, perform splicing editing on the multiple first image portions according to a position sequence of each first image portion in the image to be edited, so as to obtain an edited second target image.
In the embodiment of the present application, the editable object includes a first image portion, and the target editing mode is a reserved image, that is, the first image portion in the target area in the image to be edited is reserved and spliced, and then a second image portion outside the target area is deleted.
Specifically, the user frames one or more target areas on the image to be edited, thereby obtaining one or more first image portions. If the user selects a single first image portion, then after all second image portions are deleted, that first image portion is the second target image obtained by the editing operation. If the user selects multiple first image portions, the first image portions are cut out respectively and spliced according to their positional order in the image to be edited to obtain a spliced image, that is, the second target image.
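Both splicing behaviors — delete mode, which keeps everything outside the selected bands, and retain mode, which keeps only the bands, each preserving the original positional order — can be sketched as follows. Rows stand in for full-width image strips (consistent with the target area spanning the image's width); all names are illustrative.

```python
def delete_mode(rows, bands):
    """Remove each selected (top, bottom) band; the remaining second image
    portions are spliced together in their original order."""
    keep, cursor = [], 0
    for top, bottom in sorted(bands):
        keep.extend(rows[cursor:top])
        cursor = bottom
    keep.extend(rows[cursor:])
    return keep

def keep_mode(rows, bands):
    """Keep only the selected first image portions, spliced in original order."""
    out = []
    for top, bottom in sorted(bands):
        out.extend(rows[top:bottom])
    return out
```

With a single band, delete mode reproduces the fig. 3 to fig. 4 result, and keep mode reproduces the single-selection case where the band itself becomes the output image.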
In some embodiments of the present application, the image to be edited includes first text information, where the first input includes an image recognition input, and the determining unit 1806 is further configured to perform image recognition on the image to be edited in response to the image recognition input, so as to obtain the first text information in the image to be edited, and determine a display sequence corresponding to the first text information;
the display unit 1804 is further configured to display the edit boxes, and display the editable texts in the edit boxes according to the display order.
In the embodiment of the application, the characters in the image to be edited, which are obtained after screenshot, can be identified, so that the first character information contained in the image to be edited is edited. The image to be edited may be a screen capture image obtained through a screen capture operation, or may be another image.
Specifically, after the user captures all or part of the current display interface as the image to be edited, text editing is selected as the target editing mode. At this point, the system recognizes and extracts the characters in the image to be edited by means of, for example, an OCR (Optical Character Recognition) algorithm, and displays the editable object corresponding to the character editing function, that is, the editable text of the first character information, in the edit box.
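The recognize-then-order step might be sketched as follows. The actual OCR engine is out of scope here, so a stub callable stands in for it; the `(left, top, right, bottom)` box format and all names are assumptions for illustration only.

```python
def extract_editable_text(image, ocr=None):
    """Run OCR over the image and return its lines joined in display order.

    `ocr` is a hypothetical callable returning (text, (left, top, right, bottom))
    pairs; a fixed stub stands in for a real engine in this sketch."""
    ocr = ocr or (lambda img: [("world", (0, 14, 40, 26)),
                               ("hello", (0, 0, 40, 12))])
    # Sort on each box's top edge to preserve top-to-bottom display order.
    lines = sorted(ocr(image), key=lambda item: item[1][1])
    return "\n".join(text for text, _ in lines)
```

The returned string is what the edit box displays; the bounding boxes are retained separately so edited text can later be written back to the same regions.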
In some embodiments of the present application, the receiving unit 1802 is further configured to receive a text editing input for the editable text, wherein the text editing input includes changing text, deleting text, and/or adding text;
an editing unit 1808, further configured to edit the editable text according to the text editing input, so as to obtain edited second text information; and updating the first character information of the image to be edited according to the second character information to obtain an edited third target image.
In the embodiment of the application, the user can perform editing input on the editable text in the edit box, such as deleting, modifying, or adding some characters. When characters are deleted, if the deleted characters make up a whole line, the further adjustment manner can be determined according to the user's selection.
Specifically, when the image to be edited contains character information, the line breaks corresponding to the character information are obtained, and the background image of the character-display area in the image to be edited is segmented according to the number of line breaks, where each of the resulting background images corresponds to one line break.
When the editing input is a deletion input and the character information targeted by the deletion input is a whole line, a first line break corresponding to the deleted character information is obtained, and the background image corresponding to the first line break is deleted from the image to be edited;
and when the editing input is an addition input and the newly added characters include a whole line, a second line break is generated according to the position information of the newly added characters in the first character information, and a corresponding background image is generated in the image to be edited according to the position information and the second line break.
In some embodiments of the present application, the determining unit 1806 is further configured to obtain, in the image to be edited, a first association relationship between the first text information and the corresponding first background information; generating a corresponding updated image according to the second text information and the first association relation, wherein the second background information in the updated image is matched with the first background information;
the editing unit 1808 is further configured to update the first text information of the image to be edited by using the update image to obtain a third target image.
In the embodiment of the application, after the text is edited, a first association relationship between the first character information contained in the original image to be edited and the background information at its position is obtained. The background information includes, for example, a background color, a background pattern, or a background image. The first association relationship is the combination manner of the first character information and the background information.
The second character information is then processed according to the first association relationship, that is, according to the combination manner of the first character information and the background information. Specifically, the second character information is combined with the original background information to obtain an updated image corresponding to the position of the first character information. The updated image includes the second character information and the background information, and the combination manner of the second character information and the background information is the same as that of the first character information and the background information.
In the third target image, the display manner of the second character information and its combination with the background are thus the same as in the original image, so a high-quality target image that closely matches the original can be obtained.
In some embodiments of the present application, the determining unit 1806 is further configured to obtain a second association relationship between the image to be edited and the first text information;
the editing unit 1808 is further configured to generate a third target image according to the image to be edited, the second association relationship, and the second text information.
In the embodiment of the application, after the text is edited, a second association relationship between the original image to be edited and the first character information therein is first obtained, where the second association relationship is the combination relationship between the first character information and the original image, that is, the image information that remains after the first character information is separated from the original image.
The second character information is combined with the image to be edited according to the second association relationship, that is, the first character information in the original image to be edited is replaced with the updated second character information to obtain the corresponding third target image. The character information in the third target image is displayed in the same manner as in the original image, so a high-quality, closely matching target image can be obtained.
In some embodiments of the present application, the display unit 1804 is further configured to display at least one gesture image associated with the expression image according to the first input, wherein a size of the gesture image is smaller than or equal to a size of the expression image;
the editing unit 1808 is further configured to determine a target gesture image in the gesture image according to the second input, and edit the target gesture image and the expression image to obtain an edited fourth target image, where the fourth target image includes at least part of the expression image and the gesture image.
In the embodiment of the application, when the user taps the emoticon control during a chat, an emoticon panel pops up, in which a plurality of expression images are arranged and displayed. After the user selects one of the expression images, the first input expands and displays a floating interface of the associated secondary menu, and a plurality of matched gesture images are displayed in the floating interface.
It can be understood that the first input on the expression image is a gesture input, which can be distinguished by the operation gesture: for example, a swipe-up operation sends the selected expression image directly, while a swipe-down operation is regarded as the first input and brings up the floating interface.
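The gesture distinction above can be sketched as follows (an illustrative Python sketch; the function name, the returned labels, and the pixel threshold are assumptions, not part of the disclosure):

```python
def classify_first_input(start_y, end_y, threshold=30):
    """Classify a touch gesture on an expression image by its vertical
    displacement (screen y grows downward).

    Returns "send" for a swipe-up (send the expression image directly),
    "show_floating_interface" for a swipe-down (treated as the first input,
    which opens the floating interface of gesture images), and "tap"
    otherwise. The 30 px threshold is an illustrative choice.
    """
    dy = end_y - start_y
    if dy <= -threshold:
        return "send"
    if dy >= threshold:
        return "show_floating_interface"
    return "tap"

print(classify_first_input(200, 120))  # swipe up -> 'send'
```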
Further, one of the gesture images is selected through the second input, and the selected gesture image and the expression image are combined to obtain the combined fourth target image.
Specifically, the expression images may be facial expression images such as "smile" or "love". After one expression image is selected and expression combination editing is chosen as the target editing mode, the editable objects associated with it include a plurality of gesture images, such as a "scissor hand" or "thumbs-up" gesture. Through expression combination editing, the facial expression of the expression image can be combined with the first gesture image to obtain a combined expression image that further expresses the user's emotion; for example, after the expression image is combined with the "scissor hand", the user's emotion can be expressed more strongly.
The expression image serves as the "primary expression" and the gesture image as the "auxiliary expression"; therefore, in some embodiments, the display size of the gesture image is not greater than the display size of the expression image.
Fig. 8 shows a seventh schematic diagram of an image editing method according to an embodiment of the present application, in which a floating interface 1202 is displayed.
In some embodiments of the present application, the second input includes a drag input, and the determining unit 1806 is further configured to determine the target gesture image according to the drag start point of the drag input;
the editing unit 1808 is further configured to, when the drag end point of the drag input is the expression image, perform image combination editing on the target gesture image and the expression image to obtain the fourth target image.
In the embodiment of the application, the user can select a target gesture image from the plurality of gesture images in the floating interface and merge it with the expression image by dragging the target gesture image onto the target expression: the starting point of the drag input is the target gesture image, and the end point of the drag input is the expression image to be edited. Fig. 9 shows an eighth schematic diagram of an image editing method according to an embodiment of the present application; after the merge editing, a fourth target image 1402 is obtained, as shown in Fig. 9, so that a combined expression image that further expresses the user's emotion is obtained. For example, after "happy" and "scissor hand" are combined, the user's emotion can be expressed more strongly.
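The start-point/end-point resolution described above can be sketched with simple hit-testing (an illustrative Python sketch; the function name, the rectangle encoding `(left, top, right, bottom)`, and the point encoding `(x, y)` are assumptions, not part of the disclosure):

```python
def resolve_drag(drag_start, drag_end, gesture_rects, expression_rect):
    """Return the index of the gesture image under the drag start point if
    the drag ends inside the expression image to be edited; otherwise None.

    Rectangles are (left, top, right, bottom); points are (x, y).
    """
    def hit(point, rect):
        x, y = point
        left, top, right, bottom = rect
        return left <= x < right and top <= y < bottom

    target = next(
        (i for i, r in enumerate(gesture_rects) if hit(drag_start, r)), None
    )
    if target is not None and hit(drag_end, expression_rect):
        return target  # this gesture image is combined with the expression
    return None  # drag did not start on a gesture or did not end on the expression

gestures = [(0, 0, 10, 10), (10, 0, 20, 10)]  # floating-interface layout
expression = (0, 20, 20, 40)
print(resolve_drag((12, 5), (10, 30), gestures, expression))  # 1
```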
In the image editing method provided in the embodiment of the present application, the execution subject may be an image editing apparatus, or a control module of the image editing apparatus for executing the image editing method. In the embodiment of the present application, an image editing apparatus executing the image editing method is taken as an example to describe the image editing apparatus provided in the embodiment of the present application.
The image editing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this respect.
The image editing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image editing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, Fig. 12 shows a block diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 12, an embodiment of the present application further provides an electronic device 1900, which includes a processor 1902, a memory 1904, and a program or instruction stored in the memory 1904 and executable on the processor 1902. When the program or instruction is executed by the processor 1902, the processes of the image editing method embodiment are implemented, and the same technical effect can be achieved; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 2000 includes, but is not limited to: a radio frequency unit 2001, a network module 2002, an audio output unit 2003, an input unit 2004, a sensor 2005, a display unit 2006, a user input unit 2007, an interface unit 2008, a memory 2009, and a processor 2010.
Those skilled in the art will appreciate that the electronic device 2000 may further include a power source 2011 (e.g., a battery) for supplying power to the various components, and the power source 2011 may be logically connected to the processor 2010 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The electronic device structure shown in Fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which will not be described in detail here.
A user input unit 2007 for receiving a first input and a second input;
the display unit 2006 is configured to display, in response to a first input, an editable object, which is identified from an image to be edited; and in response to the second input, displaying a target image, the target image resulting from editing of the editable object based on the second input.
According to the embodiment of the present application, after the image to be edited is obtained, the preset editing mode associated with the image to be edited is displayed, guiding the user to perform quick and convenient editing operations on the image to be edited according to the user's own needs. No third-party software needs to be invoked in the process, so the usability and efficiency of image editing are significantly improved, and the user experience is improved.
Optionally, the processor 2010 is configured to determine a target area on the image to be edited in response to the second input, wherein the width of the target area is the same as the width of the image to be edited; divide the image to be edited, through the target area, into a first image portion located inside the target area and at least one second image portion located outside the target area; and edit the first image portion and the second image portion according to the editing mode corresponding to the editing input;
the display unit 2006 is used to display the edited target image.
Optionally, the processor 2010 is configured to: determine the second image portion as the first target image when the editing mode is deleting the image and the number of second image portions is one; when the editing mode is deleting the image and there are a plurality of second image portions, splice and edit the plurality of second image portions according to the position order of each second image portion in the image to be edited to obtain the edited first target image; determine the first image portion as the second target image when the number of first image portions is one; and when the editing mode is retaining the image and there are a plurality of first image portions, splice and edit the plurality of first image portions according to the position order of each first image portion in the image to be edited to obtain the edited second target image.
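Because the target area spans the full image width, the split reduces to a row slice, so the delete/retain logic above can be sketched as follows (an illustrative Python sketch in which an image is modelled as a list of pixel rows; the function name and mode labels are assumptions, not part of the disclosure):

```python
def edit_by_target_area(rows, top, bottom, mode):
    """Split an image into the first image portion inside the target area
    (rows [top, bottom), full image width) and the second image portions
    outside it, then apply the editing mode.

    mode "delete": remove the target area and splice the remaining second
    image portions in their original top-to-bottom position order.
    mode "retain": keep only the first image portion.
    """
    first_portion = rows[top:bottom]
    second_portions = [p for p in (rows[:top], rows[bottom:]) if p]
    if mode == "delete":
        return [row for part in second_portions for row in part]
    if mode == "retain":
        return first_portion
    raise ValueError(f"unknown editing mode: {mode}")

img = [[i] * 4 for i in range(6)]                     # a 6-row image
print(len(edit_by_target_area(img, 2, 4, "delete")))  # 4 rows remain
```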
Optionally, the processor 2010 is configured to perform image recognition on the image to be edited in response to the image recognition input, so as to obtain first text information in the image to be edited and determine a display sequence corresponding to the first text information;
the display unit 2006 is configured to display edit boxes and display editable text in the edit boxes in the display order.
Optionally, the user input unit 2007 is used to receive text editing input for editable text, wherein the text editing input includes changing a text, deleting a text, or adding a text;
the processor 2010 is configured to edit the editable text according to the text editing input to obtain edited second text information; and updating the first character information in the image to be edited according to the second character information to obtain an edited third target image.
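The text-editing step above can be sketched as follows (an illustrative Python sketch; the `(kind, index, text)` encoding of a text editing input and the function name are assumptions, not part of the disclosure):

```python
def apply_text_edit(editable_texts, edit):
    """Apply one text editing input to the editable texts shown in the edit
    box (in the display order obtained from image recognition), yielding the
    second text information used to update the image to be edited.

    edit: (kind, index, text), where kind is "change", "delete", or "add".
    """
    kind, index, text = edit
    texts = list(editable_texts)
    if kind == "change":
        texts[index] = text        # change a text
    elif kind == "delete":
        del texts[index]           # delete a text
    elif kind == "add":
        texts.insert(index, text)  # add a text
    else:
        raise ValueError(f"unknown text editing input: {kind}")
    return texts

print(apply_text_edit(["hello", "world"], ("change", 1, "there")))  # ['hello', 'there']
```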
Optionally, the processor 2010 is configured to obtain, in the image to be edited, a first association relationship between the first text information and the corresponding first background information; generating a corresponding updated image according to the second text information and the first association relation, wherein the second background information in the updated image is matched with the first background information; and updating the first character information of the image to be edited through the updating image to obtain a third target image.
Optionally, the processor 2010 is configured to obtain a second association relationship between the image to be edited and the first text information; and generating a third target image according to the image to be edited, the second association relation and the second character information.
Optionally, the processor 2010 is configured to display at least one gesture image associated with the expression image according to the first input, wherein the size of the gesture image is smaller than or equal to the size of the expression image; and according to the second input, determining a target gesture image in the gesture image, editing the target gesture image and the expression image to obtain an edited fourth target image, wherein the fourth target image comprises at least part of the expression image and the gesture image.
Optionally, the processor 2010 is configured to determine the target gesture image according to the drag start point of the drag input; and when the drag end point of the drag input is the expression image, perform image combination editing on the target gesture image and the expression image to obtain the fourth target image.
According to the embodiment of the present application, after the image to be edited is obtained, the preset editing mode associated with the image to be edited is displayed, guiding the user to perform quick and convenient editing operations on the image to be edited according to the user's own needs. No third-party software needs to be invoked in the process, so the usability and efficiency of image editing are significantly improved, and the user experience is improved.
It should be understood that in the embodiment of the present application, the input unit 2004 may include a graphics processing unit (GPU) 5082 and a microphone 5084. The graphics processor 5082 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
The display unit 2006 may include a display panel 5122, and the display panel 5122 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 2007 includes a touch panel 5142 and other input devices 5144. The touch panel 5142, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 5144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 2009 may be used to store software programs as well as various data, including but not limited to applications and an operating system. The processor 2010 may integrate an application processor, which mainly handles the operating system, the user interface, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 2010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image editing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned embodiment of the image editing method, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image editing method, comprising:
receiving a first input under the condition of displaying an image to be edited;
responding to the first input, and displaying an editable object which is obtained according to the image to be edited by identification;
receiving a second input to the editable object;
in response to the second input, displaying a target image, the target image resulting from editing of the editable object based on the second input.
2. The method of claim 1, wherein the editable object comprises a first image portion and a second image portion, wherein the second input is an editing input, and wherein displaying the target image in response to the second input comprises:
in response to the second input, determining a target area on the image to be edited, wherein the width of the target area is the same as that of the image to be edited;
dividing the image to be edited, through the target area, into the first image portion located inside the target area and at least one second image portion located outside the target area;
editing the first image part and the second image part according to an editing mode corresponding to the editing input;
and displaying the edited target image.
3. The method of claim 2, wherein the editing modes comprise deleting the image and retaining the image, and wherein editing the first image portion and the second image portion according to the editing mode corresponding to the editing input comprises:
determining the second image portion as a first target image when the editing mode is deleting the image and the number of second image portions is one;
when the editing mode is deleting the image and there are a plurality of second image portions, splicing and editing the plurality of second image portions according to the position order of each second image portion in the image to be edited to obtain the edited first target image;
determining the first image portion as a second target image when the number of first image portions is one;
and when the editing mode is retaining the image and there are a plurality of first image portions, splicing and editing the plurality of first image portions according to the position order of each first image portion in the image to be edited to obtain the edited second target image.
4. The method of claim 1, wherein the image to be edited includes first text information, the first input includes an image recognition input, the editable object includes editable text corresponding to the first text information, and displaying the editable object in response to the first input comprises:
performing image recognition on the image to be edited in response to the image recognition input, so as to obtain the first text information in the image to be edited and determine a display order corresponding to the first text information;
and displaying an edit box, and displaying the editable text in the edit box according to the display order.
5. The method of claim 4, wherein the second input comprises a text editing input, and wherein displaying the target image in response to the second input comprises:
receiving the text editing input for the editable text, wherein the text editing input comprises changing a text, deleting a text, or adding a text;
editing the editable text according to the text editing input to obtain edited second text information;
and updating the first text information in the image to be edited according to the second text information to obtain an edited third target image.
6. The method according to claim 5, wherein updating the first text information in the image to be edited to obtain an edited third target image comprises:
acquiring, in the image to be edited, a first association relationship between the first text information and the corresponding first background information;
generating a corresponding updated image according to the second text information and the first association relationship, wherein second background information in the updated image matches the first background information;
and updating the first text information of the image to be edited through the updated image to obtain the third target image.
7. The method according to claim 5, wherein updating the first text information in the image to be edited to obtain an edited third target image comprises:
acquiring a second association relationship between the image to be edited and the first text information;
and generating the third target image according to the image to be edited, the second association relationship, and the second text information.
8. The method according to any one of claims 1 to 7, wherein the image to be edited is an emoticon, and the displaying an editable object in response to the first input includes:
displaying at least one gesture image associated with the expression image according to the first input, wherein the size of the gesture image is smaller than or equal to that of the expression image;
the displaying, in response to the second input, a target image, comprising:
according to the second input, determining a target gesture image in the gesture image, editing the target gesture image and the expression image to obtain an edited fourth target image, wherein the fourth target image comprises at least part of the expression image and the gesture image.
9. The method of claim 8, wherein the second input comprises a drag input, and wherein determining a target gesture image in the gesture image according to the second input, and editing the target gesture image and the expression image comprises:
determining the target gesture image according to a drag start point of the drag input;
and when the drag end point of the drag input is the expression image, performing image combination editing on the target gesture image and the expression image to obtain the fourth target image.
10. An image editing apparatus characterized by comprising:
a receiving unit configured to receive a first input in a case where an image to be edited is displayed;
the display unit is used for responding to the first input and displaying an editable object, and the editable object is obtained by recognition according to the image to be edited;
the receiving unit is further used for receiving a second input of the editable object;
the display unit is further used for responding to the second input and displaying a target image, wherein the target image is obtained by editing the image to be edited based on the second input.
11. An electronic device comprising a memory having stored thereon a program or instructions, and a processor implementing the steps of the method according to any one of claims 1 to 9 when executing the program or instructions.
12. A readable storage medium on which a program or instructions are stored, which program or instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 9.
CN202110409466.7A 2021-04-16 2021-04-16 Image editing method, editing device, electronic device and readable storage medium Active CN113093960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110409466.7A CN113093960B (en) 2021-04-16 2021-04-16 Image editing method, editing device, electronic device and readable storage medium


Publications (2)

Publication Number Publication Date
CN113093960A true CN113093960A (en) 2021-07-09
CN113093960B CN113093960B (en) 2022-08-02

Family

ID=76678105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110409466.7A Active CN113093960B (en) 2021-04-16 2021-04-16 Image editing method, editing device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN113093960B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691729A (en) * 2021-08-27 2021-11-23 维沃移动通信有限公司 Image processing method and device
CN115033169A (en) * 2022-05-20 2022-09-09 长沙朗源电子科技有限公司 Writing and erasing method and device for touch screen of electronic whiteboard and storage medium
WO2023060434A1 (en) * 2021-10-12 2023-04-20 中国科学院深圳先进技术研究院 Text-based image editing method, and electronic device
WO2023185785A1 (en) * 2022-03-28 2023-10-05 华为技术有限公司 Image processing method, model training method, and related apparatuses

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150146925A1 (en) * 2013-11-22 2015-05-28 Samsung Electronics Co., Ltd. Method for recognizing a specific object inside an image and electronic device thereof
CN106020647A (en) * 2016-05-23 2016-10-12 珠海市魅族科技有限公司 Picture content automatic extracting method and system
CN109460177A (en) * 2018-09-27 2019-03-12 维沃移动通信有限公司 A kind of image processing method and terminal device
CN109634494A (en) * 2018-11-12 2019-04-16 维沃移动通信有限公司 A kind of image processing method and terminal device
CN109859211A (en) * 2018-12-28 2019-06-07 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN111724455A (en) * 2020-06-15 2020-09-29 维沃移动通信有限公司 Image processing method and electronic device
CN112306347A (en) * 2020-10-30 2021-02-02 维沃移动通信有限公司 Image editing method, image editing device and electronic equipment


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691729A (en) * 2021-08-27 2021-11-23 维沃移动通信有限公司 Image processing method and device
CN113691729B (en) * 2021-08-27 2023-08-22 维沃移动通信有限公司 Image processing method and device
WO2023060434A1 (en) * 2021-10-12 2023-04-20 中国科学院深圳先进技术研究院 Text-based image editing method, and electronic device
WO2023185785A1 (en) * 2022-03-28 2023-10-05 华为技术有限公司 Image processing method, model training method, and related apparatuses
CN115033169A (en) * 2022-05-20 2022-09-09 长沙朗源电子科技有限公司 Writing and erasing method and device for touch screen of electronic whiteboard and storage medium

Also Published As

Publication number Publication date
CN113093960B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN113093960B (en) Image editing method, editing device, electronic device and readable storage medium
CN108279964B (en) Method and device for realizing covering layer rendering, intelligent equipment and storage medium
CN111612873A (en) GIF picture generation method and device and electronic equipment
CN113079316B (en) Image processing method, image processing device and electronic equipment
WO2023030306A1 (en) Method and apparatus for video editing, and electronic device
CN112817676A (en) Information processing method and electronic device
CN112162803A (en) Message display method and device and electronic equipment
CN114518822A (en) Application icon management method and device and electronic equipment
US8913076B1 (en) Method and apparatus to improve the usability of thumbnails
CN111857474B (en) Application program control method and device and electronic equipment
CN113313027A (en) Image processing method, image processing device, electronic equipment and storage medium
WO2023179539A1 (en) Video editing method and apparatus, and electronic device
CN111724455A (en) Image processing method and electronic device
JP2011192008A (en) Image processing system and image processing method
WO2023284640A1 (en) Picture processing method and electronic device
CN113783770B (en) Image sharing method, image sharing device and electronic equipment
CN116610243A (en) Display control method, display control device, electronic equipment and storage medium
WO2022228373A1 (en) Image management method and apparatus, electronic device, and readable storage medium
CN113362426B (en) Image editing method and image editing device
CN114995713A (en) Display control method and device, electronic equipment and readable storage medium
CN114518821A (en) Application icon management method and device and electronic equipment
CN112288835A (en) Image text extraction method and device and electronic equipment
CN113010072A (en) Searching method and device, electronic equipment and readable storage medium
CN111639474A (en) Document style reconstruction method and device and electronic equipment
CN112764632B (en) Image sharing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant