WO2023284640A1 - Picture processing method and electronic device - Google Patents

Picture processing method and electronic device

Info

Publication number
WO2023284640A1
WO2023284640A1 PCT/CN2022/104567 CN2022104567W WO2023284640A1 WO 2023284640 A1 WO2023284640 A1 WO 2023284640A1 CN 2022104567 W CN2022104567 W CN 2022104567W WO 2023284640 A1 WO2023284640 A1 WO 2023284640A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
information
added
picture
input
Prior art date
Application number
PCT/CN2022/104567
Other languages
English (en)
French (fr)
Other versions
WO2023284640A9 (zh)
Inventor
黄黎
冯明俐
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2023284640A1 publication Critical patent/WO2023284640A1/zh
Publication of WO2023284640A9 publication Critical patent/WO2023284640A9/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Definitions

  • the embodiments of the present application relate to the technical field of image processing, and in particular, to an image processing method and an electronic device.
  • As a type of multimedia information, pictures are widely used in various scenarios, for example pictures taken by a user with a mobile phone, screenshots of an interface captured while the user operates a computing device such as a desktop computer or a smartphone, or pictures received from and displayed by other devices.
  • As pictures become more widely used, the demand for editing them is gradually increasing, typically for editing the text information in a picture.
  • As users' communication needs keep changing, when users chat on instant messaging platforms the demand for rapid communication based on the text information in pictures is also growing; an interactive function for quickly and efficiently editing and reusing the text information in pictures has therefore become an urgent need for most users.
  • In existing solutions, when a picture is edited, an editing panel is usually displayed with corresponding editing options, such as adding a frame to mark the picture content, or adding a cursor at a position that receives the text entered by the user and displays it as added to the picture.
  • However, this picture processing method is very inefficient: the user needs to perform many operation steps, and the reusability of the text is extremely poor and needs to be improved.
  • The embodiments of the present application provide a picture processing method and an electronic device, which solve the problems of low picture editing efficiency and complicated editing operations, improve the reusability of text information, and significantly optimize the way pictures are edited.
  • In a first aspect, an embodiment of the present application provides an image processing method, including:
  • in response to a first input of adding information to a first target picture, displaying multimedia information to be added, where the multimedia information to be added includes text information; in response to a second input of selecting the multimedia information to be added, determining target text information in the multimedia information to be added; and adding the target text information to the first target picture.
  • In a second aspect, an embodiment of the present application provides an image processing device, including:
  • the adding response module is configured to display multimedia information to be added in response to the first input of adding information to the first target picture, and the multimedia information to be added includes text information;
  • a selection response module configured to determine target text information in the multimedia information to be added in response to a second input of selecting the multimedia information to be added;
  • a text adding module configured to add the target text information to the first target picture.
  • In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the image processing method described in the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or instruction is stored, where the program or instruction, when executed by a processor, implements the steps of the image processing method described in the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement the image processing method described in the first aspect.
  • In a sixth aspect, an embodiment of the present application provides a computer program product, where the computer program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the image processing method described in the first aspect.
  • In a seventh aspect, an embodiment of the present application provides a communication device configured to execute the image processing method described in the first aspect.
  • In the embodiments of the present application, the first input triggers the information-adding mode of the first target picture and the multimedia information to be added is displayed to the user; the second input determines the target text information in the multimedia information to be added that the user has selected, and the target text information is then added to the first target picture. In this way, text information from multimedia information can be added to the first target picture conveniently and quickly, which solves the problems of low picture editing efficiency and complicated editing operations, improves the reusability of text information, and significantly optimizes the way pictures are edited.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
  • FIG. 2 is a first schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 3 is a second schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 4 is a third schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 5 is a fourth schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 6 is a fifth schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 7 is a sixth schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 8 is a schematic flowchart of another image processing method provided by an embodiment of the present application;
  • FIG. 9 is a seventh schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 10 is an eighth schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 11 is a ninth schematic diagram of a mobile phone interface provided by an embodiment of the present application;
  • FIG. 12 is a schematic structural diagram of an image processing device provided by an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
  • FIG. 14 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. Referring to FIG. 1, the image processing method includes:
  • Step 110: In response to a first input of adding information to a first target picture, display multimedia information to be added, where the multimedia information to be added includes text information.
  • The first target picture refers to the picture that currently needs to be edited, such as a screenshot, a picture to be sent to another device, or a picture that needs to be edited and then saved for later browsing.
  • The first input refers to an information-adding operation performed to add information to the first target picture.
  • The first input may be a double-click operation, a zoom operation, a long-press operation, or the like performed on the first target picture; it may also be a click on an edit button displayed below the first target picture (or at any other position on the display interface) when the first target picture is opened; of course, it may also be any combination of keyboard or smart-device button operations. The first input is not limited here.
  • In response to the first input, the multimedia information to be added is displayed accordingly.
  • Optionally, the multimedia information to be added may be displayed below the first target picture, or displayed in the form of a pop-up window at the side of the display interface; the specific display position and display manner of the multimedia information to be added are not limited.
  • the multimedia information to be added is material data for adding information to the first target picture, which includes corresponding text information.
  • the multimedia information to be added may be a plurality of different pictures to be added, wherein each picture to be added contains text information, and the text information is information that can be added to the first target picture.
  • the text information in the picture to be added may be text information displayed in the picture, or text information stored in association with the picture.
  • the multimedia information to be added may also be a piece of voice to be added, and the voice to be added may be displayed in the form of an icon, and the corresponding icon name may be a saved name of the voice to be added. Different voices have their associated text information.
  • the text information associated with the voice to be added can be displayed below the voice, or displayed when the voice information is clicked.
  • The following uses a scenario as an example for illustration. Suppose a user wants to communicate with friends on an instant messaging platform using the text information in multiple pictures; to make reading easier for those friends, the user wants to add the text information from the multiple pictures into one picture. Because the text information in a picture is in picture format, it cannot be edited again; in the prior art, if the user wants to add the text information in one picture to another picture, the only way is to input the text manually and then add it. Here, through the first input provided by this embodiment, the user can quickly add the text information in another picture to the first target picture.
  • FIG. 2 is a first schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • As shown in FIG. 2, the first target picture 12 selected by the user is displayed in the mobile phone interface 11. The user clicks the information-adding control 13 in the upper right corner of the interface, which triggers the first input of adding information to the first target picture 12. In response to the first input, the mobile phone displays a picture preview area 14 at the bottom of the interface 11, in which a plurality of pictures 15 to be added are displayed side by side; the pictures 15 to be added in the picture preview area 14 can be slid left and right to view the pictures in the local gallery.
  • Step 120: In response to a second input of selecting the multimedia information to be added, determine target text information in the multimedia information to be added, and add the target text information to the first target picture.
  • the second input refers to a selection operation for selecting the multimedia information to be added, and the second input may be a click operation on the displayed multimedia information to be added.
  • The target text information is the text information that the user selects to add to the first target picture. After the multimedia information to be added is displayed, in response to the second input of selecting the multimedia information to be added, the multimedia information to be added that is selected by the second input is determined.
  • In an embodiment, taking the multimedia information to be added being a picture to be added as an example, the picture to be added contains text information, and after the picture to be added is selected, the target text information in the picture to be added is obtained.
  • the target text information may be text information displayed in the picture in an editable form after text recognition is performed on the picture, or text information integrated with the picture without text recognition.
  • For target text information that has not yet been recognized, the way to obtain it includes performing optical character recognition (OCR) on the picture to obtain the text information.
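  • As an illustration of this step, the sketch below shows one way such OCR extraction could be done; it is not the patent's implementation, and it assumes the open-source Tesseract engine is available through the pytesseract wrapper (the function name extract_target_text and the language setting are illustrative only).

```python
# Hedged sketch: obtaining candidate target text information from a picture to be
# added by running optical character recognition on it. Assumes Tesseract is
# installed and exposed via pytesseract; all names here are illustrative.
from PIL import Image
import pytesseract

def extract_target_text(picture_path: str, lang: str = "chi_sim+eng") -> list[str]:
    """Run OCR on a picture to be added and return its non-empty text lines.

    Each line is one candidate piece of target text information, e.g.
    "View", "Development Tool", "Help" in the example of FIG. 3 and FIG. 4.
    """
    image = Image.open(picture_path)
    raw = pytesseract.image_to_string(image, lang=lang)
    return [line.strip() for line in raw.splitlines() if line.strip()]

# Example usage (hypothetical file name):
# candidates = extract_target_text("picture_to_add.png")
```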
  • After the target text information is obtained, it is correspondingly added directly to the first target picture.
  • Exemplarily, the target text information in the multimedia information to be added may be added to the first target picture based on a detected movement of the multimedia information to be added; alternatively, when it is detected that an add button of the multimedia information to be added is clicked, the target text information may be added and displayed directly at a preset position of the first target picture, where the preset position may be the middle, upper side, left side or right side of the first target picture, etc.
  • Exemplarily, taking the multimedia information to be added being displayed on the mobile phone interface as an example, the user selects the multimedia information to be added on the mobile phone interface according to his or her own needs.
  • Assuming that the multimedia information to be added selected by the user is a picture to be added, with reference to FIG. 2, the user selects the corresponding picture 15 to be added in the picture preview area 14, and the text information in the selected picture 15 to be added is obtained; this text information is the target text information that the user wants to add to the first target picture 12.
  • In an embodiment, because the text in the picture to be added is in picture format, and text information in picture format is not convenient to edit again, text recognition is performed on the picture to be added through an optical character recognition algorithm, the text data corresponding to the text in the picture to be added is obtained, and this text data is used as the target text information of the picture to be added.
  • Exemplarily, the second input is a long-press operation on the picture to be added: when the selected picture to be added is long-pressed, a copy control pops up on the mobile phone interface, and clicking the copy control copies the text data corresponding to the selected picture to be added. Long-pressing the first target picture then pops up a paste control, and clicking the paste control pastes the text data into the first target picture.
  • The text information of multiple pictures to be added and/or multiple voices to be added may be added to the first target picture.
  • In an embodiment, whether it corresponds to a picture to be added or to a voice to be added, the text information is editable text data, and the user can manually edit the content and format of the text data in the mobile phone interface.
  • Exemplarily, the second input may be a fifth input performed on the multimedia information to be added. The fifth input refers to a moving operation of moving the multimedia information to be added, and through the fifth input the user copies out the target text information in the selected picture to be added. Accordingly, the specific steps S1201 to S1203 of responding to the fifth input include:
  • FIG. 3 is a second schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • S1201: Determine the selected multimedia information to be added according to the starting position of the fifth input. Taking a fifth input of moving a picture to be added displayed on the mobile phone interface as an example, as shown in FIG. 3, the user selects the second picture to be added from the right in the picture preview area 14 and drags it upwards. It can be understood that when the user drags the picture to be added, the user touches the mobile phone interface, so the picture to be added that the user selected can be determined according to the starting position of the fifth input.
  • S1202: Acquire the target text information in the selected picture to be added.
  • In an embodiment, if the selected picture to be added has associated target text information, the corresponding associated target text information is obtained; if the selected picture to be added does not have associated target text information, text recognition is performed on the selected picture to obtain the corresponding target text information. Here, the target text information associated with a picture to be added refers to text data that was obtained during an earlier text recognition of the picture and then saved in association with it, which makes it convenient to subsequently copy the associated text data directly and paste it into the first target picture, improving the speed of adding information. Further, if the selected picture to be added has no associated target text information, then after text recognition is performed on it to obtain the corresponding text data, the text data is stored in association with the corresponding picture to be added.
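  • A minimal sketch of this association idea follows; it is only an illustration under assumed names (a JSON file keyed by picture path), not the storage scheme used by the patent, but it shows how a recognized picture's text data can be reused without re-running OCR.

```python
# Hedged sketch: cache the text data recognized from a picture to be added so a
# later information-adding operation can reuse it directly. The file name and
# function names are assumptions made for this illustration.
import json
from pathlib import Path

ASSOCIATION_FILE = Path("picture_text_associations.json")  # assumed storage location

def get_associated_text(picture_path: str, ocr_fn) -> list[str]:
    """Return the text data associated with a picture, recognizing and saving it on first use."""
    table = json.loads(ASSOCIATION_FILE.read_text()) if ASSOCIATION_FILE.exists() else {}
    if picture_path in table:            # associated target text information already exists
        return table[picture_path]
    text_lines = ocr_fn(picture_path)    # e.g. the extract_target_text sketch shown earlier
    table[picture_path] = text_lines     # store the text data in association with the picture
    ASSOCIATION_FILE.write_text(json.dumps(table, ensure_ascii=False, indent=2))
    return text_lines
```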
  • FIG. 4 is a third schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • As shown in FIG. 4, after the dragging finger leaves the selected picture to be added in the picture preview area 14, the text data 16 "View", "Development Tool" and "Help" from the selected picture are displayed at the contact position of the finger, and the text data 16 moves as the finger is dragged.
  • FIG. 5 is a fourth schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • S1203: Add the target text information at the end position of the fifth input. As shown in FIG. 5, after the user drags the text data 16 into the first target picture 12, the finger leaves the screen of the mobile phone. The phone senses the finger leaving and determines the position at which it left; the fifth input represented by the dragging ends with this leaving action, the leaving position is taken as the end position of the fifth input, and the text data 16 is displayed at that end position.
  • In an embodiment, to ensure that the text information is displayed effectively in the first target picture, the text data is placed on the top layer, and the user can manually adjust the display position of the text data on the mobile phone interface.
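  • The drag interaction described for the fifth input can be pictured with the small sketch below; the geometry and data structures are assumptions for illustration (the patent does not specify them), but they show how the starting position selects a thumbnail in the preview area and the end position decides where the text lands.

```python
# Hedged sketch: map the fifth input's starting position to a picture to be added
# in the preview area, and its end position to where the target text is placed on
# the first target picture. All types and names here are illustrative.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def pick_thumbnail(start_pos: tuple[int, int], thumbnails: list[Rect]) -> int | None:
    """Return the index of the picture to be added that lies under the drag's starting position."""
    for i, rect in enumerate(thumbnails):
        if rect.contains(*start_pos):
            return i
    return None

def place_text(end_pos: tuple[int, int], text: str, overlay: list) -> None:
    """Record the text at the drag's end position, on the top layer above the target picture."""
    overlay.append({"text": text, "x": end_pos[0], "y": end_pos[1]})
```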
  • In an embodiment, if the selected multimedia information to be added contains at least two pieces of text information, each piece of text information is displayed in a list, and in response to a sixth input of selecting text information in the text list, the correspondingly selected target text information is obtained.
  • the sixth input refers to a text information selection operation for selecting text information in the text list.
  • Exemplarily, with reference to FIG. 3, the text information associated with the selected picture to be added includes three pieces of text data, namely "View", "Development Tool" and "Help". The user may want to add only one of these three to the first target picture 12; for this, in this embodiment the text data are displayed in a text list, and the text list is displayed on the mobile phone interface 11, which makes it convenient for the user to select the appropriate text data from the list.
  • FIG. 6 is a fifth schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • As shown in FIG. 6, after the finger leaves the phone screen, a text list 17 pops up on the mobile phone interface 11; each piece of text data is displayed in the text list 17, and a selection control 18 is arranged in the list.
  • When the user clicks a selection control 18, it turns green to indicate that the text data in the same row is selected.
  • After the user finishes selecting, the text list 17 can be closed by clicking anywhere in the mobile phone interface 11 other than the text list 17.
  • After the text list 17 is closed, the mobile phone responds to the sixth input of selecting text information in the text list 17, acquires the text data selected by the user, and displays the text data at the end position of the fifth input. Assuming the user selected the two pieces of text data "View" and "Development Tool", the "View", "Development Tool" and "Help" originally displayed at the end position are replaced with "View" and "Development Tool".
  • FIG. 7 is a sixth schematic diagram of a mobile phone interface provided by an embodiment of the present application. As shown in FIG. 7, the text data 19 "View" and "Development Tool" are displayed in the first target picture 12.
  • In summary, the image processing method provided by this embodiment triggers the information-adding mode of the first target picture through the first input, displays the multimedia information to be added to the user, determines through the second input the target text information in the multimedia information to be added that the user selected, and adds the acquired text information to the first target picture, thereby quickly adding text information to the first target picture.
  • In addition, displaying the pictures to be added in the picture preview area makes it convenient for the user to drag and select a suitable picture to be added directly, and the target text information in the selected picture to be added is dragged and copied out through the fifth input and added into the first target picture. The interaction mode combining the picture preview area with the fifth input helps the user quickly select a picture to be added and significantly optimizes the way pictures are edited, while the target text information saved in association with a picture to be added enables text information to be acquired and added quickly, which not only improves the efficiency of adding text information but also improves the reusability of text information.
  • FIG. 8 is a schematic flowchart of another image processing method provided by an embodiment of the present application. This embodiment is embodied on the basis of the above-mentioned embodiments. As shown in Figure 8, the image processing method provided in this embodiment also includes:
  • Step 210: In response to a third input of performing character recognition on a second target picture, recognize the text in the second target picture to obtain text data.
  • the second target picture refers to a picture that currently needs to be processed for character recognition.
  • the second target picture may be the first target picture that currently needs to be edited, or it may be a picture that is simply for character recognition.
  • the third input refers to a recognition trigger operation for triggering character recognition processing on the second target picture.
  • The third input may be a double-click operation, a zoom operation, a long-press operation, or the like performed on the second target picture; it may also be a click on a recognition button displayed below the first target picture (or at any other position on the display interface) when the first target picture is opened; of course, it may also be any combination of keyboard or smart-device button operations. The third input is not limited here.
  • the third input and the above-mentioned first input are not the same trigger operation, for example, when the first input is a long press operation, the third input cannot be a long press operation.
  • The following uses a scenario as an example for illustration. If the text information in the first target picture selected by the user is blurred, the user may zoom in or out on the first target picture to try to read the text with the naked eye; however, some blurred text becomes distorted after zooming, and the distorted text cannot fully express the corresponding text information. In this case, the user can use the third input provided by this embodiment to perform text recognition on such a picture with blurred text, and the blurred text in the original picture is replaced and displayed with the recognized clear text data to enhance the text display effect.
  • FIG. 9 is a seventh schematic diagram of a mobile phone interface provided by an embodiment of the present application. Taking the third input being a zoom operation on the first target picture as an example: as shown in FIG. 9, when the user zooms in on the first target picture 12 in the mobile phone interface 11, an optical character recognition program set in the background recognizes the text in the first target picture 12 and obtains text data such as "File", "Classic Menu" and "Start".
  • Step 220: Display the text data on the second target picture.
  • FIG. 10 is an eighth schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • When the user zooms in on the first target picture 12, the first target picture shown in FIG. 10 is obtained.
  • As shown in FIG. 10, the text in the first target picture is blurred, so the text data needs to replace and be displayed in place of the corresponding text in the first target picture.
  • Accordingly, the text data display step specifically includes S2201 and S2202:
  • S2201: Determine the pixel mean of the area where the original text containing the text data is located, and replace the pixel values of that area with the pixel mean.
  • The original text refers to the text displayed in picture format in the first target picture, and the original text is composed of multiple pixels. The area where the original text is located refers to the area corresponding to the pixel coordinates of the original text's pixels in the first target picture, and the surrounding area refers to the region composed of the pixels that surround the original text's pixels. When the original text in the first target picture is recognized by optical character recognition, the pixel coordinates of the original text corresponding to each recognized piece of text data are obtained; the pixels of the surrounding area are determined according to these coordinates, the pixel mean of the surrounding area is computed from them, and the pixel values of the original text's pixels are replaced with that pixel mean.
  • S2202: Display the text data in the area where the original text is located. In an embodiment, a text box is displayed floating over the area where the original text is located, and the text data is displayed in the text box.
  • FIG. 11 is a ninth schematic diagram of a mobile phone interface provided by an embodiment of the present application. As shown in FIG. 11, "File", "Classic Menu", "Start", "Insert" and "Design" are displayed in text boxes, with one piece of text data corresponding to one text box. According to the pixel coordinates of the original text "File", the text box displaying "File" is floated over the area whose pixel values have already been replaced, so that the blurred text in FIG. 10 is replaced with the clear text in FIG. 11.
  • The pixel size of the original text can be determined according to the pixel coordinates of the original text, the font size of the text in the text box can be determined according to that pixel size, and the text color in the text box can be determined according to the pixel mean of the original text.
  • Optionally, for the floating text box, the fill color of the text box can be determined according to the pixel mean of the surrounding area of the original text corresponding to the text data in the box, and the color of the text displayed in the box can be determined according to the pixel mean of the original text. In this embodiment, by obtaining the pixel mean of the original text's surrounding area and adjusting the background color of the displayed text data according to it, the display effect of the clear text approximates that of the original text while the clarity is obviously improved.
  • Moreover, each piece of text data is displayed in its own text box, so there is no need to consider the spacing between the original texts or to insert blank characters; each text box only needs to be displayed over the area where its original text is located.
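  • The erase-and-redraw idea of S2201 and S2202 can be sketched as below; it is a simplified illustration under assumed data layouts (an H x W x 3 array and axis-aligned text boxes), not the patent's code, and a real implementation would also have to render the text box itself.

```python
# Hedged sketch: erase the original text by filling its bounding box with the mean
# color of the surrounding pixels, then derive a font size, text color and text-box
# fill color for the clear replacement text. The data layout is an assumption.
import numpy as np

def replace_with_clear_text(img: np.ndarray, box: tuple[int, int, int, int], margin: int = 5):
    """img: H x W x 3 uint8 array; box: (x, y, w, h) bounding box of one piece of original text."""
    x, y, w, h = box
    # Pixel mean of the original text region: later used as the color of the replacement text.
    text_color = img[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)

    # Pixel mean of the surrounding area (a ring around the box): used to fill the text region.
    y0, y1 = max(0, y - margin), min(img.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(img.shape[1], x + w + margin)
    ring_sum = img[y0:y1, x0:x1].astype(np.float64).sum(axis=(0, 1)) \
        - img[y:y + h, x:x + w].astype(np.float64).sum(axis=(0, 1))
    ring_count = max(1, (y1 - y0) * (x1 - x0) - w * h)
    surround_mean = ring_sum / ring_count

    img[y:y + h, x:x + w] = surround_mean  # replace the original text's pixel values with the mean
    font_size = h                          # font size follows the pixel height of the original text
    return img, font_size, tuple(int(c) for c in text_color), tuple(int(c) for c in surround_mean)
```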
  • Step 230: In response to a fourth input of editing the text data, edit the text data.
  • the editing includes one or more of text size editing, text color editing, text type editing, text content editing and text data position editing.
  • Exemplarily, if the original text in the first target picture is too blurred, misrecognized text may appear; for this case, this embodiment provides an editable text box. The user can edit the displayed text through the text box: according to the fourth input of editing the text box, the font size, color, type and content of the text in the text box are adjusted accordingly, and the position of the text box in the first target picture is adjusted as well.
  • Step 240: Associate and save the edited text data with the second target picture, and store them as multimedia information to be added.
  • The edited text data is the text information of the first target picture. In an embodiment, if during a later picture-processing session a first input of adding information is performed on another picture, and the picture to be added that the user selects is the picture that underwent character recognition here, then the text data associated with that picture to be added can be obtained directly, without performing text recognition on it again, which effectively improves the efficiency of adding text information. It should be noted that if the user does not further edit the recognized text data, the recognized text data and the first target picture are stored in association directly.
  • In summary, the picture processing method provided by this embodiment recognizes the text in the picture to be edited, obtains the corresponding text data, and replaces the displayed text image in the picture with that text data. Because the text output by the text recognition algorithm is clear text, replacing the blurred text image with the corresponding clear text can effectively improve the clarity of the picture's text and prevent the blurred original text of the picture from affecting the user's reading efficiency. In addition, by associating and saving the first target picture with the corresponding text data, the efficiency of subsequent information adding is improved.
  • the image processing method provided in the embodiment of the present application may be executed by an image processing device, or a control module in the image processing device for executing the image processing method.
  • the picture processing device provided in the embodiment of the present application is described by taking the picture processing device executing the picture processing method as an example.
  • Fig. 12 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. As shown in Figure 12, the image processing device includes: an adding response module 301, a selection response module 302 and a text adding module 303.
  • the adding response module 301 is configured to display the multimedia information to be added in response to the first input of adding information to the first target picture, and the multimedia information to be added includes text information;
  • the selection response module 302 is configured to determine the target text information in the multimedia information to be added in response to the second input of selecting the multimedia information to be added;
  • the text adding module 303 is configured to add target text information to the first target picture.
  • the image processing device further includes: a character recognition module configured to, in response to a third input for character recognition on the second target picture, recognize the characters in the second target picture to obtain text data;
  • the text display module is configured to display text data on the second target picture.
  • The text display module includes: a replacement determination unit configured to determine the pixel mean of the area where the original text containing the text data is located, and replace the pixel values of that area with the pixel mean; and a text display unit configured to display the text data in the area where the original text is located.
  • the text display unit includes:
  • the text box display subunit is configured to display a text box in a floating area where the original text is located, and display text data in the text box.
  • The image processing device further includes: an editing module configured to edit the text data in response to a fourth input of editing the text data, where the editing includes one or more of text size editing, text color editing, text type editing, text content editing and text data position editing; and a saving module configured to associate and save the edited text data with the second target picture, and store them as the multimedia information to be added.
  • the second input includes a fifth input of moving the multimedia information to be added;
  • the selection response module includes: an information determination unit configured to determine the selected multimedia information to be added according to the starting position of the fifth input; and an information acquisition unit configured to acquire the target text information in the selected multimedia information to be added;
  • the text adding module includes: an information adding unit configured to add the target text information at the end position of the fifth input.
  • In summary, the image processing device provided by this embodiment triggers the information-adding mode of the first target picture through the first input, displays the multimedia information to be added to the user, determines through the second input the target text information in the multimedia information to be added that the user selected, and adds the target text information to the first target picture.
  • the image processing apparatus in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc.; this is not specifically limited in the embodiments of the present application.
  • the picture processing device in the embodiment of the present application may be a device with an operating system.
  • The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • the picture processing apparatus provided in the embodiment of the present application can realize various processes realized by the method embodiments in FIG. 1 to FIG. 11 , and details are not repeated here to avoid repetition.
  • Optionally, as shown in FIG. 13, an embodiment of the present application further provides an electronic device 40, including a processor 401, a memory 402, and a program or instruction stored in the memory 402 and executable on the processor 401. When the program or instruction is executed by the processor 401, the processes of the above image processing method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 14 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 40 includes, but is not limited to, components such as a radio frequency unit 403, a network module 404, an audio output unit 405, an input unit 406, a sensor 407, a display unit 408, a user input unit 409, an interface unit 410, a memory 402, and a processor 401.
  • Those skilled in the art can understand that the electronic device 40 may further include a power supply (such as a battery) for supplying power to the various components; the power supply may be logically connected to the processor 401 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The structure of the electronic device shown in FIG. 14 does not constitute a limitation on the electronic device: the electronic device may include more or fewer components than shown in the figure, combine certain components, or use a different arrangement of components, which is not repeated here.
  • Here, the processor 401 is configured to: display multimedia information to be added in response to a first input of adding information to a first target picture, where the multimedia information to be added includes text information; determine, in response to a second input of selecting the multimedia information to be added, target text information in the multimedia information to be added; and add the target text information to the first target picture.
  • In summary, the electronic device provided by this embodiment triggers the information-adding mode of the first target picture through the first input, displays the multimedia information to be added to the user, determines through the second input the target text information in the multimedia information to be added that the user selected, and adds the target text information to the first target picture.
  • the processor 401 is further configured to, in response to a third input of character recognition on the second target picture, recognize the text in the second target picture to obtain text data; and display the text data on the second target picture.
  • Optionally, the processor 401 is further configured to determine the pixel mean of the area where the original text containing the text data is located, replace the pixel values of that area with the pixel mean, and display the text data in the area where the original text is located.
  • the processor 401 is further configured to display a text box in a floating area where the original text is located, and display text data in the text box.
  • Optionally, the processor 401 is further configured to edit the text data in response to the fourth input of editing the text data, where the editing includes one or more of text size editing, text color editing, text type editing, text content editing and text data position editing, and to associate and save the edited text data with the second target picture, storing them as multimedia information to be added.
  • Optionally, the second input includes a fifth input of moving the multimedia information to be added, and the processor 401 is further configured to: determine the selected multimedia information to be added according to the starting position of the second input; acquire the target text information in the selected multimedia information to be added; and add the target text information at the end position of the second input.
  • In summary, the electronic device provided by this embodiment recognizes the text in the picture to be edited, obtains the corresponding text data, and replaces the displayed text image in the picture with that text data. Because the text output by the text recognition algorithm is clear text, replacing the blurred text image with the corresponding clear text can effectively improve the clarity of the picture's text and prevent the blurred original text of the picture from affecting the user's reading efficiency. In addition, by associating and saving the first target picture with the corresponding text data, the efficiency of subsequent information adding is improved.
  • It should be understood that, in this embodiment of the present application, the input unit 406 may include a graphics processing unit (GPU) 4061 and a microphone 4062, and the graphics processor 4061 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the display unit 408 may include a display panel 4081, and the display panel 4081 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 409 includes a touch panel 4091 and other input devices 4092 .
  • the touch panel 4091 is also called a touch screen.
  • the touch panel 4091 may include two parts, a touch detection device and a touch controller.
  • The other input devices 4092 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not repeated here.
  • Memory 402 may be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, and the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 401 .
  • An embodiment of the present application also provides a readable storage medium on which a program or instruction is stored. When the program or instruction is executed by a processor, the processes of the above image processing method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement the processes of the above image processing method embodiments with the same technical effects; to avoid repetition, details are not repeated here.
  • It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • An embodiment of the present application further provides a computer program product, where the computer program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the processes of the above image processing method embodiments with the same technical effects; to avoid repetition, details are not repeated here.
  • the embodiment of the present application also provides a communication device, which is configured to execute the various processes in the above image processing method embodiment, and can achieve the same technical effect. To avoid repetition, details are not repeated here.
  • It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that comprises that element.
  • In addition, it should be pointed out that the scope of the methods and devices in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted or combined. In addition, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a picture processing method and an electronic device, belonging to the technical field of picture processing. The method includes: in response to a first input of adding information to a first target picture, displaying multimedia information to be added, the multimedia information to be added including text information; in response to a second input of selecting the multimedia information to be added, determining target text information in the multimedia information to be added; and adding the target text information to the first target picture.

Description

图片处理方法和电子设备
相关申请的交叉引用
本申请主张在2021年7月15日在中国提交的中国专利申请No.202110799578.8的优先权,其全部内容通过引用包含于此。
技术领域
本申请实施例涉及图片处理技术领域,尤其涉及一种图片处理方法和电子设备。
背景技术
图片作为多媒体信息的一种,其被广泛应用于各个场景。如用户使用手机拍摄得到的图片,用户在使用计算设备如台式机或智能手机时的界面截图得到的图片,又或者是接收其它设备发送并进行显示的图片。随着图片应用的广泛,对图片进行编辑的需求也逐渐增加,典型的为对图片的文字信息进行编辑。如目前,随着用户人际交流需求的不断变化,用户在各种即时通讯平台进行聊天交流时,对基于图片中的文字信息进行快速交流的需求逐渐提升。因此快速高效编辑和复用图片文字信息的交互功能已成为多数用户的迫切需求。
现有的技术方案中,在对图片进行编辑时,通常是显示一编辑面板,在该编辑面板中显示相应的编辑选项,如添加框体以对图片内容进行标记,或者添加一光标,在光标位置接收用户录入的文字并显示添加至图片中。然而这种图片处理方式效率很低,用户需要执行很多的操作步骤,同时复用性极差需要改进。
发明内容
本申请实施例提供一种图片处理方法和电子设备,解决了图片编辑效率低,编辑流程操作复杂的问题,能够提升文字信息复用度,显著优化了图片编辑方式。
第一方面,本申请实施例提供了一种图片处理方法,包括:
响应于对第一目标图片进行信息添加的第一输入,显示待添加的多媒体信息,所述待添加的多媒体信息包括文字信息;
响应于对所述待添加的多媒体信息进行选择的第二输入,确定所述待添加的多媒体信息中的目标文字信息;
将所述目标文字信息添加至所述第一目标图片中。
第二方面,本申请实施例提供了一种图片处理装置,包括:
添加响应模块,被配置为响应于对第一目标图片进行信息添加的第一输入,显示待添加的多媒体信息,所述待添加的多媒体信息包括文字信息;
选择响应模块,被配置为响应于对所述待添加的多媒体信息进行选择的第二输入,确定所述待添加的多媒体信息中的目标文字信息;
文字添加模块,被配置为将所述目标文字信息添加至所述第一目标图片中。
第三方面,本申请实施例提供了一种电子设备,包括处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第一方面所述的图片处理方法的步骤。
第四方面,本申请实施例提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的图片处理方法的步骤。
第五方面,本申请实施例提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的图片处理方法。
第六方面,本申请实施例提供了一种计算机程序产品,所述计算机程序产品被存储在非易失的存储介质中,所述计算机程序产品被至少一个处理器执行以实现如第一方面所述的图片处理方法。
第七方面,本申请实施例提供了一种通信设备,被配置为执行如第一方面所述的图片处理方法。
在本申请实施例中,通过第一输入触发第一目标图片的信息添加模式,并向用户展示待添加的多媒体信息,通过第二输入确定用户选择的待添加的 多媒体信息中的目标文字信息,将目标文字信息添加至第一目标图片中。通过上述技术手段,可以方便快捷地在第一目标图片中添加多媒体信息中的文字信息,解决了图片编辑效率低,编辑流程操作复杂的问题,能够提升文字信息复用度,显著优化了图片编辑方式。
附图说明
图1是本申请一个实施例提供的一种图片处理方法的流程示意图;
图2是本申请实施例提供的手机界面的第一示意图;
图3是本申请实施例提供的手机界面的第二示意图;
图4是本申请实施例提供的手机界面的第三示意图;
图5是本申请实施例提供的手机界面的第四示意图;
图6是本申请实施例提供的手机界面的第五示意图;
图7是本申请实施例提供的手机界面的第六示意图;
图8是本申请一个实施例提供的另一种图片处理方法的流程示意图;
图9为本申请实施例提供的手机界面的第七示意图;
图10是本申请实施例提供的手机界面的第八示意图;
图11是本申请实施例提供的手机界面的第九示意图;
图12是本申请一个实施例提供的一种图片处理装置的结构示意图;
图13是本申请一个实施例提供的一种电子设备的结构示意图;
图14是本申请一个实施例提供的一种电子设备的硬件结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描 述的那些以外的顺序实施,且“第一”、“第二”等所区分的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或”的关系。
下面结合附图,通过具体的实施例及其应用场景对本申请实施例提供的图片处理方法进行详细地说明。
图1是本申请一个实施例提供的一种图片处理方法的流程示意图。参考图1,该图片处理方法包括:
步骤110、响应于对第一目标图片进行信息添加的第一输入,显示待添加的多媒体信息,待添加的多媒体信息包括文字信息。
其中,第一目标图片指当前需要进行编辑处理的图片。如屏幕截图图片、待发送给其他设备的图片或者需要编辑处理后保存以用于后续浏览的图片等。其中,第一输入是指对该第一目标图片添加信息进行的信息添加操作。第一输入包括对该第一目标图片进行的双击操作、缩放操作、长按操作等;还可以是当第一目标图片打开时,在该第一目标图片下方(或显示界面的任意其它位置)显示的一编辑按钮,通过对该编辑按钮的点击操作;当然还可以是任意的键盘或智能设备按钮组合的操作,此处对第一输入不做限定。
通过对第一输入进行响应,相应的进行待添加的多媒体信息的显示。可选的,可以是在第一目标图片下方进行待添加的多媒体信息的显示,或者在显示界面侧方以弹窗的形式显示待添加的多媒体信息,该待添加的多媒体信息具体的显示位置和显示方式不做限定。其中,该待添加的多媒体信息为对该第一目标图片进行信息添加的素材数据,其包括对应的文字信息。在一个实施例中,该待添加的多媒体信息可以是多张不同的待添加的图片,其中每张待添加的图片中均包含文字信息,该文字信息为可以添加至第一目标图片中的信息。其中,待添加的图片中的文字信息可以是图片中显示的文字信息,还可以是和图片关联存储的文字信息。该待添加的多媒体信息还可以是一段待添加的语音,待添加的语音可以以图标的形式显示,相应的图标命名可以是待添加的语音的保存名称。不同的语音有各自关联的文字信息,在一个实施例中,和待添加的语音关联的文字信息可以显示在该段语音的下方,或者 在点击该段语音信息时进行显示。
下面以一场景进行示例性说明。如用户想要在即时通讯平台上通过多张图片中的文字信息与通讯好友进行交流,为便于通讯好友阅读,用户将多张图片中的文字信息添加到一张图片中。由于图片中的文字信息为图片格式,其不能进行二次编辑,在现有技术中若用户想要将一图片中的文字信息添加到另一图片中,只能通过手动输入文字信息并进行添加。此时用户可通过本实施例提供的第一输入,以在第一目标图片中快速添加另一图片中的文字信息。首先用户通过手机界面选择第一目标图片,并在手机界面输入对第一目标图片进行的信息添加操作,以触发对第一目标图片进行信息添加的第一输入,手机响应第一输入,并在手机界面上显示至少一张待添加的图片。示例性的,图2是本申请实施例提供的手机界面的第一示意图。如图2所示,用户选择的第一目标图片12显示在手机界面11中,用户点击手机界面右上角的信息添加控件13即触发对第一目标图片12进行信息添加的第一输入,手机响应于第一输入,在手机界面11下方显示图片预览区域14,多个待添加的图片15并排显示在图片预览区域14,可左右滑动图片预览区域14中的待添加的图片15,查看本地图库中的图片。
步骤120、响应于对待添加的多媒体信息进行选择的第二输入,确定待添加的多媒体信息中的目标文字信息,将目标文字信息添加至第一目标图片中。
其中,第二输入是指对待添加的多媒体信息进行选择的选择操作,第二输入可以是对显示的待添加的多媒体信息进行的点击操作。目标文字信息为用户选择的添加至第一目标图片中的文字信息。在显示待添加的多媒体信息后,相应的响应于对待添加的多媒体信息进行选择的第二输入,确定该第二输入所选中的待添加的多媒体信息。在一个实施例中,以待添加的多媒体信息为待添加的图片为例,待添加的图片中包含有文字信息,相应的在待添加的图片被选择后以获取该待添加的图片中的目标文字信息。其中,目标文字信息可以是针对图片进行文字识别后以可编辑形式显示在图片中的文字信息,还可以是和图片为一体的未进行文字识别过的文字信息。针对未进行文字识别的目标文字信息,获取目标文字信息的方式包括对图片进行光学字符识别 后得到文字信息。
其中,在目标文字信息被获取后,相应的直接将目标文字信息添加至该第一目标图片中。示例性的,可以是根据检测到的对待添加的多媒体信息进行的移动,将待添加的多媒体信息中的目标文字信息添加至第一目标图片中;还可以是如检测到待添加的多媒体信息的添加按钮被点击,则将待添加的多媒体信息中的目标文字信息直接添加显示在该第一目标图片的预设位置,其中,该预设位置可以是第一目标图片的中间、上侧、左侧或右侧等。
示例性的,以待添加的多媒体信息显示在手机界面为例,用户根据自身需求在手机界面选择待添加的多媒体信息。假设用户选择的待添加的多媒体信息为待添加的图片,参考图2,用户在图片预览区域14内选择对应的待添加的图片15,根据用户选择的待添加的图片15,获取该待添加的图片15中的文字信息,该待添加的图片15中的文字信息即为用户想要添加至第一目标图片12中的目标文字信息。
在一个实施例中,由于待添加的图片中的文字为图片格式,图片格式的文字信息不便于二次编辑处理。因此通过光学字符识别算法对待添加的图片进行文字识别,获取到待添加的图片中文字对应的文本数据,将该文本数据作为待添加的图片中的目标文字信息。示例性的,第二输入为对待添加的图片进行长按的长按操作,即长按选择的待添加的图片,手机界面弹出复制控件,点击复制控件即复制选择的待添加的图片对应的文本数据。再长按第一目标图片,手机弹出粘贴控件,点击粘贴控件,即将文本数据粘贴至第一目标图片中。
其中,第一目标图片中可添加多张待添加的图片和/或多段待添加的语音的文字信息。在一个实施例中,无论是待添加的图片对应的文字信息还是待添加的语音对应的文字信息都是可编辑的文本数据,用户可在手机界面中手动编辑文本数据的内容和格式。
示例性的,以第二输入为对待添加的多媒体信息进行的第五输入为例。第五输入是指对待添加的多媒体信息进行移动的移动操作,用户通过第五输入将选择的待添加的图片中的目标文字信息复制出来。据此,响应于第五输入的具体步骤S1201到S1203包括:
S1201、根据第五输入的起始位置确定选择的待添加的多媒体信息。
以对手机界面中显示的待添加的图片进行移动的第五输入为例。图3是本申请实施例提供的手机界面的第二示意图。如图3所示,用户选中图片预览区域14中的从右往左排在第二的待添加的图片,将该待添加的图片往上拖动。可理解,用户拖动待添加的图片时,会触控手机界面,可根据第五输入的起始位置确定用户选择的待添加的图片。
S1202、获取选择的待添加的多媒体信息中的目标文字信息。
在确定用户选择的待添加的图片后,获取选择的待添加的图片中的目标文字信息。在一个实施例中,如果选择的待添加的图片存在关联的目标文字信息,则获取对应关联的目标文字信息,如果选择的待添加的图片不存在关联的目标文字信息,则对选择的待添加的图片进行文字识别得到对应的目标文字信息。其中,待添加的图片关联的目标文字信息是指在对待添加的图片进行文字识别时得到文本数据后,与待添加的图片关联保存的文本数据,这样便于后续直接复制待添加的图片关联的文本数据粘贴至第一目标图片中,提高信息添加的快捷性。进一步的,如果选择的待添加的图片不存在关联的目标文字信息,在对选择的待添加的图片进行文字识别得到对应的文本数据之后,则将该文本数据与对应的待添加的图片关联保存。
示例性的,图4是本申请实施例提供的手机界面的第三示意图。如图4所示,拖动的手指在离开图片预览区域14中选择的待添加的图片后,选择的待添加的图片中的文本数据16“视图”、“开发工具”和“帮助”显示在手指的接触位置处,文本数据16会随着手指的拖动而移动。
S1203、将目标文字信息添加至第五输入的结束位置。
示例性的,图5是本申请实施例提供的手机界面的第四示意图。如图5所示,用户将文本数据16拖动到第一目标图片12中后,手指离开手机屏幕。此时手机感应到用户手指离开的动作并确定离开时的位置,以离开动作表征的第五输入结束,将离开位置作为第五输入的结束位置,以将文本数据16显示在结束位置处。在一个实施例中,为保证文字信息能够有效显示在第一目标图片中,将文本数据设置在图层顶层,用户可在手机界面手动调整文本数据的显示位置。
在一个实施例中,如果选择的待添加的多媒体信息中的文字信息包括至少两个,则以列表形式显示每个文字信息,响应于对文本列表进行文字信息选择的第六输入,获取对应选择的目标文字信息。其中,第六输入是指对文本列表中的文字信息进行选择的文字信息选择操作。示例性的,参考图3,选择的待添加的图片关联的文字信息包含三个文本数据,其分别为“视图”、“开发工具”和“帮助”。对于这三个文本数据可能用户只想将其中一个添加至第一目标图片12中,对此本实施例将文本数据显示在文本列表中,并将文本列表显示在手机界面11上,便于用户通过文本列表选择合适的文本数据。示例性的,图6是本申请实施例提供的手机界面的第五示意图。如图6所示,在手指离开手机屏幕后,手机界面11上弹出文本列表17,每个文本数据并行显示在文本列表17中,文本列表17中设置有选择控件18,当用户点击选择控件18时,选择控件18变成绿色以表示同行的文本数据被选择。在用户完成对文本数据的选择后,任意点击手机界面11中除文本列表17以外的界面,即可关闭文本列表17。在关闭文本列表17后,手机响应于对文本列表17中的文字信息进行选择的第六输入,获取用户选择的文本数据,并将文本数据显示在第五输入的结束位置处。假设用户选择了“视图”和“开发工具”这两个文本数据,原先显示在结束位置处的“视图”、“开发工具”和“帮助”将替换成“视图”和“开发工具”。图7是本申请实施例提供的手机界面的第六示意图。如图7所示,第一目标图片12中显示文本数据19“视图”和“开发工具”。
综上,本实施例提供的图片处理方法,通过第一输入触发第一目标图片的信息添加模式,并向用户展示待添加的多媒体信息,通过第二输入确定用户选择的待添加的多媒体信息中的目标文字信息,将获取到的文字信息添加至第一目标图片中,以实现对第一目标图片的文字信息快速添加。除此之外,通过图片预览区域展示待添加的图片,便于用户直接拖动选取合适的待添加的图片,通过第五输入将选择的待添加的图片中的目标文字信息拖动复制出来,并添加在第一目标图片中。图片预览区域和第五输入结合的交互方式有利于用户快速选取待添加的图片,显著优化了图片编辑方式,而待添加的图片关联保存的目标文字信息有利于文字信息的快速获取和添加,不仅能够提 高文字信息添加效率,还够提升文字信息复用度。
图8是本申请一个实施例提供的另一种图片处理方法的流程示意图。本实施例是在上述实施例的基础上进行具体化。如图8所示,本实施例提供的图片处理方法还包括:
步骤210、响应于对第二目标图片进行文字识别的第三输入,对第二目标图片中的文字进行识别得到文本数据。
其中,第二目标图片是指当前需要进行文字识别处理的图片。第二目标图片可以是当前需要进行编辑处理的第一目标图片,也可以是单纯进行文字识别的图片。其中,第三输入是指触发对第二目标图片进行文字识别处理的识别触发操作。第三输入可以是对该第二目标图片进行的双击操作、缩放操作和长按操作等;还可以是当第一目标图片打开时,在该第一目标图片下方(或显示界面的任意其它位置)显示的一识别按钮,通过对该识别按钮的点击操作;当然还可以是任意的键盘或智能设备按钮组合的操作,此处第三输入不做限定。需要说明的是,第三输入和上述第一输入不为同一种触发操作,例如第一输入为长按操作时,第三输入不能为长按操作。
下面以一场景进行示例性说明。如用户选择的第一目标图片中的文字信息显示存在模糊的情况,用户对第一目标图片进行放大或缩小操作以通过肉眼识别图片中的文字信息,但有些模糊的文字在放大或缩小后失真,失真文字并不能完整表达对应的文字信息。此时,用户可通过本实施例提供的第三输入,以对这类存在模糊文字的图片进行文字识别,将识别得到的清晰文本数据替换显示原图片中的模糊文字,增强文字显示效果。首先用户在手机界面输入对手机界面中显示的第一目标图片进行的识别触发操作,以触发对第一目标图片进行文字识别的第三输入,手机响应第三输入并对第一目标图片进行文字识别。以开启第三输入为对第一目标图片进行缩放操作为例,用户对手机界面中的第一目标图片进行放大或缩小时,自动触发后台的文字识别程序对缩放后的第一目标图片进行光学字符识别,得到第一目标图片中的文本数据。示例性的,图9是本申请实施例提供的手机界面的第七示意图。如图9所示,当用户对手机界面11中的第一目标图片12进行放大操作时,通过后台设置的光学字符识别程序对第一目标图片中12的文字进行识别,得到 “文件”、“经典菜单”和“开始”等文本数据。
步骤220、在第二目标图片上显示文本数据。
示例性的,图10是本申请实施例提供的手机界面的第八示意图。当用户对第一目标图片12进行放大时,得到图10中的第一目标图片。如图10所示,第一目标图片中的文字显示模糊,需要将文本数据替换显示对应的第一目标图片中的文字。据此,文本数据显示步骤具体包括S2201到S2202:
S2201、确定包含文本数据的原文本所在区域的像素均值,将原文本所在区域的像素值替换为像素均值。
原文本是指第一目标图片中以图片格式进行显示的文字,原文本由多个像素点组成,原文本所在区域是指原文本的像素点在第一目标图片中像素坐标对应的区域,原文本所在区域是指围绕在原文本的像素点周围的像素点组成的周边区域。通过光学字符识别第一目标图片中的原文本时,会得到每个识别到的文本数据对应的原文本在第一目标图片中的像素坐标,根据原文本的像素坐标确定原文字周边区域的像素点。根据周边区域的像素点确定周边区域的像素均值,并将原文本的像素点的像素值替换为周边区域的像素均值。
S2203、在原文本所在区域显示文本数据。
在一个实施例中,在原文本所在区域悬浮显示文本框,并在文本框中显示文本数据。图11是本申请实施例提供的手机界面的第九示意图。如图11所示,将“文件”、“经典菜单”、“开始”、“插入”和“设计”显示在文本框中,一个文本数据对应一个文本框。根据“文件”的原文本的像素坐标,将显示有“文件”的文本框悬浮显示在已替换完像素值的原文本所在区域,以使得图10中模糊的文字替换成图11中的清晰的文字。其中,可根据原文本的像素坐标确定原文本的像素尺寸,并根据像素尺寸确定文本框中的文本字号,根据原文本的像素均值确定文本框中的文本颜色。
可选的,对于悬浮显示的文本框,可根据文本框中文本数据对应的原文本周边区域的像素均值,确定文本框的填充颜色,根据原文本的像素均值确定文本框中显示文本的颜色。
本实施例通过获取原文本周边区域的像素均值,根据像素均值调整文本数据显示的背景颜色,以使得清晰文本的显示效果逼近原文本的显示效果, 但清晰度明显提高。而且将每个文本数据显示对应显示在一个文本框中,无需考虑各原文本之间的间隔添加空字符,只需将文本框对应显示在原文本所在区域。
步骤230、响应于对文本数据编辑的第四输入,对文本数据进行编辑,编辑包括文本大小编辑、文本颜色编辑、文本类型编辑、文本内容编辑和文本数据位置编辑中的一种或多种。
示例性的,如果第一目标图片中原文本太过模糊可能会出现识别错误的文本,对此本实施例提供可编辑文本框。用户可通过文本框对其显示的文本进行编辑。具体的,根据对文本框进行编辑的第四输入,对应调整文本框中文本的字号、颜色、类型和内容,以及调整文本框在第一目标图片中的位置。
步骤240、对编辑后的文本数据与第二目标图片进行关联保存,存储为待添加的多媒体信息。
编辑后的文本数据为第一目标图片中的文字信息。在一个实施例中,若在下次的图片处理过程中对别的图片进行信息添加的第一输入,而用户选中的待添加的图片就是当前进行文字识别处理的图片,那么可以获取与待添加的图片关联的文本数据,而无需再对待添加的图片进行文字识别处理,有效提高文字信息的添加效率。需要说明的是,如果用户不对识别到的文本数据进行二次编辑,则直接将识别到的文本数据与第一目标图片进行关联保存。
综上,本实施例提供的图片处理方法,通过文字识别待编辑图片中的文字,得到待编辑图片中的文本文字,将文本文字替换显示待编辑图片中的文字图片,由于文字识别算法输出的文本文字是清晰文字,将清晰文字对应替代显示模糊的文字图片,可以有效提高图片文字的清晰度,避免图片原文本显示模糊影响用户的阅读效率。除此之外,通过关联保存第一目标图片与对应的文本数据,提高后续的信息添加效率。
需要说明的是,本申请实施例提供的图片处理方法,执行主体可以为图片处理装置,或者该图片处理装置中的用于执行图片处理方法的控制模块。本申请实施例中以图片处理装置执行图片处理方法为例,说明本申请实施例提供的图片处理装置。
图12是本申请一个实施例提供的一种图片处理装置的结构示意图。如图 12所示,该图片处理装置包括:添加响应模块301、选择响应模块302和文字添加模块303。
其中,添加响应模块301,被配置为响应于对第一目标图片进行信息添加的第一输入,显示待添加的多媒体信息,待添加的多媒体信息包括文字信息;
选择响应模块302,被配置为响应于对待添加的多媒体信息进行选择的第二输入,确定待添加的多媒体信息中的目标文字信息;
文字添加模块303,被配置为将目标文字信息添加至第一目标图片中。
在上述实施例的基础上,图片处理装置还包括:文字识别模块,被配置为响应于对第二目标图片进行文字识别的第三输入,对第二目标图片中的文字进行识别得到文本数据;文字显示模块,被配置为在第二目标图片上显示文本数据。
在上述实施例的基础上,文字显示模块包括:替换确定单元,被配置为确定包含文本数据的原文本所在区域的像素均值,将原文本所在区域的像素值替换为像素均值;文本显示单元,被配置为在原文本所在区域显示文本数据。
在上述实施例的基础上,文本显示单元包括:
文本框显示子单元,被配置为在原文本所在区域悬浮显示文本框,并在文本框中显示文本数据。
在上述实施例的基础上,图片处理装置还包括:编辑模块,被配置为响应于对文本数据编辑的第四输入,对文本数据进行编辑,编辑包括文本大小编辑、文本颜色编辑、文本类型编辑、文本内容编辑和文本数据位置编辑中的一种或多种;保存模块,被配置为对编辑后的文本数据与第二目标图片进行关联保存,存储为待添加的多媒体信息。
在上述实施例的基础上,第二输入包括对待添加的多媒体信息进行移动的第五输入;选择响应模块包括:信息确定单元,被配置为根据第五输入的起始位置确定选择的待添加的多媒体信息;信息获取单元,被配置为获取选择的待添加的多媒体信息中的目标文字信息;文字添加模块包括:信息添加单元,被配置为将目标文字信息添加至第五输入的结束位置。
综上,本实施例提供的图片处理装置,通过第一输入触发第一目标图片的信息添加模式,并向用户展示待添加的多媒体信息,通过第二输入确定用户选择的待添加的多媒体信息中的目标文字信息,将目标文字信息添加至第一目标图片中。通过上述技术手段,可以方便快捷地在第一目标图片中添加多媒体信息中的文字信息,解决了图片编辑效率低,编辑流程操作复杂的问题,能够提升文字信息复用度,显著优化了图片编辑方式。
本申请实施例中的图片处理装置可以是装置,也可以是终端中的部件、集成电路、或芯片。该装置可以是移动电子设备,也可以为非移动电子设备。示例性的,移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等,非移动电子设备可以为服务器、网络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例中的图片处理装置可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
本申请实施例提供的图片处理装置能够实现图1至图11的方法实施例实现的各个过程,为避免重复,这里不再赘述。
可选地,如图13所示,本申请实施例还提供一种电子设备40,包括处理器401,存储器402,存储在存储器402上并可在所述处理器401上运行的程序或指令,该程序或指令被处理器401执行时实现上述图片处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,本申请实施例中的电子设备包括上述所述的移动电子设备和非移动电子设备。
图14为实现本申请实施例的一种电子设备的硬件结构示意图。
该电子设备40包括但不限于:射频单元403、网络模块404、音频输出单元405、输入单元406、传感器407、显示单元408、用户输入单元409、接口单元410、存储器402、以及处理器401等部件。
本领域技术人员可以理解,电子设备40还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图14中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
其中,处理器401,用于响应于对第一目标图片进行信息添加的第一输入,显示待添加的多媒体信息,待添加的多媒体信息包括文字信息;响应于对待添加的多媒体信息进行选择的第二输入,确定待添加的多媒体信息中的目标文字信息;将目标文字信息添加至第一目标图片中。
综上,本实施例提供的电子设备,通过第一输入触发第一目标图片的信息添加模式,并向用户展示待添加的多媒体信息,通过第二输入确定用户选择的待添加的多媒体信息中的目标文字信息,将目标文字信息添加至第一目标图片中。通过上述技术手段,可以方便快捷地在第一目标图片中添加多媒体信息中的文字信息,解决了图片编辑效率低,编辑流程操作复杂的问题,能够提升文字信息复用度,显著优化了图片编辑方式。
可选地,处理器401,还用于响应于对第二目标图片进行文字识别的第三输入,对第二目标图片中的文字进行识别得到文本数据;在第二目标图片上显示文本数据。
可选地,处理器401,还用于确定包含文本数据的原文本所在区域的像素均值,将原文本所在区域的像素值替换为像素均值;在原文本所在区域显示文本数据。
可选地,处理器401,还用于在原文本所在区域悬浮显示文本框,并在文本框中显示文本数据。
可选地,处理器401,还用于响应于对文本数据编辑的第四输入,对文本数据进行编辑,编辑包括文本大小编辑、文本颜色编辑、文本类型编辑、文本内容编辑和文本数据位置编辑中的一种或多种;对编辑后的文本数据与第二目标图片进行关联保存,存储为待添加的多媒体信息。
可选地,第二输入包括对待添加的多媒体信息进行移动的第五输入,处理器401,还用于根据第二输入的起始位置确定选择的待添加的多媒体信息; 获取选择的待添加的多媒体信息中的目标文字信息;将目标文字信息添加至第二输入的结束位置。
综上,本实施例提供的电子设备,通过文字识别待编辑图片中的文字,得到待编辑图片中的文本文字,将文本文字替换显示待编辑图片中的文字图片,由于文字识别算法输出的文本文字是清晰文字,将清晰文字对应替代显示模糊的文字图片,可以有效提高图片文字的清晰度,避免图片原文本显示模糊影响用户的阅读效率。除此之外,通过关联保存第一目标图片与对应的文本数据,提高后续的信息添加效率。
应理解的是,本申请实施例中,输入单元406可以包括图形处理器(Graphics Processing Unit,GPU)4061和麦克风4062,图形处理器4061对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元408可包括显示面板4081,可以采用液晶显示器、有机发光二极管等形式来配置显示面板4081。用户输入单元409包括触控面板4091以及其他输入设备4092。触控面板4091,也称为触摸屏。触控面板4091可包括触摸检测装置和触摸控制器两个部分。其他输入设备409可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。存储器402可用于存储软件程序以及各种数据,包括但不限于应用程序和操作系统。处理器401可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器401中。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述图片处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的电子设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所 述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述图片处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
本申请实施例另提供了一种计算机程序产品,所述计算机程序产品存储在非易失的存储介质中,所述计算机程序产品被至少一个处理器执行以实现上述图片处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供了一种通信设备,被配置为执行如上述图片处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服 务器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (17)

  1. A picture processing method, comprising:
    in response to a first input of adding information to a first target picture, displaying multimedia information to be added, wherein the multimedia information to be added comprises text information;
    in response to a second input of selecting the multimedia information to be added, determining target text information in the multimedia information to be added; and
    adding the target text information to the first target picture.
  2. The method according to claim 1, wherein before the responding to the first input of adding information to the first target picture, the method further comprises:
    in response to a third input of performing text recognition on a second target picture, recognizing text in the second target picture to obtain text data; and
    displaying the text data on the second target picture.
  3. The method according to claim 2, wherein the displaying the text data on the second target picture comprises:
    determining a pixel mean of an area in which original text containing the text data is located, and replacing pixel values of the area in which the original text is located with the pixel mean; and
    displaying the text data in the area in which the original text is located.
  4. The method according to claim 3, wherein the displaying the text data in the area in which the original text is located comprises:
    displaying a text box floating over the area in which the original text is located, and displaying the text data in the text box.
  5. The method according to claim 4, wherein after the displaying the text data in the text box, the method further comprises:
    in response to a fourth input of editing the text data, editing the text data, wherein the editing comprises one or more of text size editing, text color editing, text type editing, text content editing and text data position editing; and
    associating and saving the edited text data with the second target picture, and storing them as the multimedia information to be added.
  6. The method according to claim 1, wherein the second input comprises a fifth input of moving the multimedia information to be added, and the determining target text information in the multimedia information to be added and adding the target text information to the first target picture comprises:
    determining the selected multimedia information to be added according to a starting position of the fifth input;
    acquiring the target text information in the selected multimedia information to be added; and
    adding the target text information at an end position of the fifth input.
  7. A picture processing device, comprising:
    an adding response module configured to display, in response to a first input of adding information to a first target picture, multimedia information to be added, wherein the multimedia information to be added comprises text information;
    a selection response module configured to determine, in response to a second input of selecting the multimedia information to be added, target text information in the multimedia information to be added; and
    a text adding module configured to add the target text information to the first target picture.
  8. The device according to claim 7, wherein the picture processing device comprises:
    a text recognition module configured to recognize, in response to a third input of performing text recognition on a second target picture, text in the second target picture to obtain text data; and
    a text display module configured to display the text data on the second target picture.
  9. The device according to claim 8, wherein the text display module comprises:
    a replacement determination unit configured to determine a pixel mean of an area in which original text containing the text data is located, and replace pixel values of the area in which the original text is located with the pixel mean; and
    a text display unit configured to display the text data in the area in which the original text is located.
  10. The device according to claim 9, wherein the text display unit comprises:
    a text box display subunit configured to display a text box floating over the area in which the original text is located, and display the text data in the text box.
  11. The device according to claim 10, wherein the picture text information processing device further comprises:
    an editing module configured to edit the text data in response to a fourth input of editing the text data, wherein the editing comprises one or more of text size editing, text color editing, text type editing, text content editing and text data position editing; and
    a saving module configured to associate and save the edited text data with the picture to be text-recognized, and store them as the multimedia information to be added.
  12. The device according to claim 7, wherein the second input comprises a fifth input of moving the multimedia information to be added, and the selection response module comprises:
    an information determination unit configured to determine the selected multimedia information to be added according to a starting position of the fifth input; and
    an information acquisition unit configured to acquire the target text information in the selected multimedia information to be added; and
    the text adding module comprises:
    an information adding unit configured to add the target text information at an end position of the fifth input.
  13. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the picture processing method according to any one of claims 1 to 6.
  14. A readable storage medium storing a program or instruction, wherein the program or instruction, when executed by a processor, implements the steps of the picture processing method according to any one of claims 1 to 6.
  15. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the picture processing method according to any one of claims 1 to 6.
  16. A computer program product, wherein the computer program product is stored in a non-volatile storage medium, and the computer program product is executed by at least one processor to implement the picture processing method according to any one of claims 1 to 6.
  17. A communication device, configured to execute the picture processing method according to any one of claims 1 to 6.
PCT/CN2022/104567 2021-07-15 2022-07-08 图片处理方法和电子设备 WO2023284640A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110799578.8A CN113436297A (zh) 2021-07-15 2021-07-15 图片处理方法和电子设备
CN202110799578.8 2021-07-15

Publications (2)

Publication Number Publication Date
WO2023284640A1 true WO2023284640A1 (zh) 2023-01-19
WO2023284640A9 WO2023284640A9 (zh) 2023-04-20

Family

ID=77760481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/104567 WO2023284640A1 (zh) 2021-07-15 2022-07-08 图片处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN113436297A (zh)
WO (1) WO2023284640A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436297A (zh) * 2021-07-15 2021-09-24 维沃移动通信有限公司 图片处理方法和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909548A (zh) * 2015-12-22 2017-06-30 北京奇虎科技有限公司 基于服务器的图片加载方法及装置
US9971854B1 (en) * 2017-06-29 2018-05-15 Best Apps, Llc Computer aided systems and methods for creating custom products
CN110889379A (zh) * 2019-11-29 2020-03-17 深圳先进技术研究院 表情包生成方法、装置及终端设备
CN111126301A (zh) * 2019-12-26 2020-05-08 腾讯科技(深圳)有限公司 一种图像处理方法、装置、计算机设备和存储介质
CN113436297A (zh) * 2021-07-15 2021-09-24 维沃移动通信有限公司 图片处理方法和电子设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858994A (zh) * 2019-04-26 2020-10-30 深圳市蓝灯鱼智能科技有限公司 文字检索方法和装置
CN111061933A (zh) * 2019-11-21 2020-04-24 深圳壹账通智能科技有限公司 图片样本库构建方法、装置、可读存储介质及终端设备


Also Published As

Publication number Publication date
CN113436297A (zh) 2021-09-24
WO2023284640A9 (zh) 2023-04-20

Similar Documents

Publication Publication Date Title
KR102367838B1 (ko) 동시에 열린 소프트웨어 애플리케이션들을 관리하기 위한 디바이스, 방법, 및 그래픽 사용자 인터페이스
US8786559B2 (en) Device, method, and graphical user interface for manipulating tables using multi-contact gestures
US8799775B2 (en) Device, method, and graphical user interface for displaying emphasis animations for an electronic document in a presentation mode
US10007426B2 (en) Device, method, and graphical user interface for performing character entry
KR102013331B1 (ko) 듀얼 카메라를 구비하는 휴대 단말기의 이미지 합성 장치 및 방법
EP3528140A1 (en) Picture processing method, device, electronic device and graphic user interface
CN107918563A (zh) 一种复制和粘贴的方法、数据处理装置和用户设备
JP2020516994A (ja) テキスト編集方法、装置及び電子機器
US20240231563A1 (en) Application icon display method and apparatus, electronic device and storage medium
WO2022242542A1 (zh) 应用图标的管理方法和电子设备
WO2020042468A1 (zh) 一种数据处理方法、装置和用于数据处理的装置
WO2022242586A1 (zh) 应用界面显示方法、装置和电子设备
WO2023005828A1 (zh) 消息显示方法、装置和电子设备
CN112269523B (zh) 对象编辑处理方法、装置及电子设备
WO2022095885A1 (zh) 应用程序切换处理方法、装置和电子设备
WO2024046204A1 (zh) 消息处理方法、装置、电子设备及存储介质
WO2023045923A1 (zh) 文字编辑方法、装置和电子设备
CN112672061A (zh) 视频拍摄方法、装置、电子设备及介质
WO2023284640A1 (zh) 图片处理方法和电子设备
WO2022068721A1 (zh) 截屏方法、装置及电子设备
CN113099033A (zh) 信息发送方法、信息发送装置和电子设备
WO2023241563A1 (zh) 数据处理方法和电子设备
WO2023131043A1 (zh) 信息处理方法、装置及电子设备
WO2022247787A1 (zh) 应用归类方法、装置及电子设备
WO2023045919A1 (zh) 文字编辑方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841276

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22841276

Country of ref document: EP

Kind code of ref document: A1