CN111126301B - Image processing method and device, computer equipment and storage medium

Info

Publication number: CN111126301B
Authority: CN (China)
Prior art keywords: text, image, target, page, original image
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201911362175.6A
Other languages: Chinese (zh)
Other versions: CN111126301A (en)
Inventors: 伍芷滢, 刘立强, 何丹, 蔡忆宁, 董浩
Current Assignee: Tencent Technology Shenzhen Co Ltd (listed assignees may be inaccurate)
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by: Tencent Technology Shenzhen Co Ltd
Priority to: CN201911362175.6A (CN111126301B), CN202210003009.2A (CN114332887A)
Publications: CN111126301A (application), CN111126301B (grant)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F9/44526 Plug-ins; Add-ons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an image processing method, an image processing apparatus, a computer device, and a storage medium. A chat session page of an instant messaging client is displayed, the chat session page including an original image sent by a chat session user. Based on an image text recognition operation for the original image, a recognition result page of the original image is displayed. The recognition result page includes a target image containing: the text recognized from the original image and the background content corresponding to the text, wherein the text is editable and the background content is the content of the original image except the text. When an editing operation for the text in the target image is detected, an editing result of the text is displayed. In this way, the original image sent by a chat session user in the chat session can be recognized as a target image with editable text, and the user can edit the text in the target image directly to obtain the desired editing result.

Description

Image processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, and a storage medium.
Background
An IM (instant messaging) application is software that enables online chatting and communication based on instant messaging technology. In addition, instant messaging applications provide an image recognition function for images sent by users in a chat session page: the function performs character recognition on an image sent by a user, making it convenient for the user to use the character recognition result corresponding to the image.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, a computer device, and a storage medium, which can recognize an original image sent by a user in a chat session as a target image with editable text, so that the user can edit the text recognition result of the original image within the target image.
The embodiment of the invention provides an image processing method, which comprises the following steps:
displaying a chat session page of an instant messaging client, wherein the chat session page comprises an original image sent by a chat session user;
displaying a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image including: the text is recognized from the original image, and the background content corresponding to the text is recognized, wherein the text is an editable text, and the background content is the content of the original image except the text;
when an editing operation for the text in the target image is detected, displaying an editing result of the text.
The present embodiment also provides an image processing apparatus including:
a session page display unit configured to display a chat session page of an instant messaging client, wherein the chat session page includes an original image sent by a chat session user;
a recognition result display unit configured to display a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image including: the text is recognized from the original image, and the background content corresponding to the text is recognized, wherein the text is an editable text, and the background content is the content of the original image except the text;
an editing result display unit configured to display an editing result of the text when an editing operation for the text in the target image is detected.
The present embodiment also provides a storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the image processing method as shown in the embodiment of the present invention.
The present embodiment also provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image processing method according to the embodiment of the present invention when executing the computer program.
The embodiment of the invention provides an image processing method, an image processing apparatus, a computer device, and a storage medium. A chat session page of an instant messaging client is displayed, the chat session page including an original image sent by a chat session user. Based on an image text recognition operation for the original image, a recognition result page of the original image is displayed. The recognition result page includes a target image containing: the text recognized from the original image and the background content corresponding to the text, wherein the text is editable and the background content is the content of the original image except the text. When an editing operation for the text in the target image is detected, an editing result of the text is displayed. In this way, the original image sent by a chat session user in the chat session can be recognized as a target image with editable text, and the user can edit the text in the target image directly, obtaining an editing experience similar to editing the text in the original image itself.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1a is a schematic view of a scene of an image processing method according to an embodiment of the present invention;
FIG. 1b is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2a is a schematic diagram illustrating a display of a recognition result page according to an embodiment of the present invention;
FIG. 2b is a schematic diagram illustrating a display of another recognition result page according to an embodiment of the present invention;
FIG. 2c is a schematic diagram illustrating a display of a recognition result page according to another embodiment of the present invention;
FIG. 2d is a schematic diagram illustrating a display of a shortcut operation for an image according to an embodiment of the present invention;
FIG. 2e is a schematic diagram illustrating a display of a shortcut operation for an image according to an embodiment of the present invention;
FIG. 2f is a schematic diagram illustrating a display of a shortcut operation for an image according to an embodiment of the present invention;
FIG. 2g is a schematic diagram illustrating a display of a shortcut operation for an image according to an embodiment of the present invention;
FIG. 2h is a schematic diagram illustrating a display of a shortcut operation for an image according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a text modification of a target image according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of partial text sharing of a target image according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of text sharing of a target image according to an embodiment of the present invention;
FIG. 3d is a schematic diagram of image sharing of a target image according to an embodiment of the present invention;
FIG. 3e is a schematic view of translation-based image sharing of a target image according to an embodiment of the present invention;
FIG. 3f is an alternative schematic diagram of a sharing setting page according to an embodiment of the present invention;
FIG. 3g is an alternative schematic diagram of a sharing setting page according to an embodiment of the present invention;
FIG. 4a is a schematic diagram of a display of a text extraction result page of a target image according to an embodiment of the present invention;
FIG. 4b is a schematic display diagram of a comparison page corresponding to the text extraction result page according to an embodiment of the present invention;
FIG. 5a is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 5b is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 5c is a schematic diagram of an alternative process for performing rough classification on an original image according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an alternative structure of the distributed system 800 applied to a blockchain system according to an embodiment of the present invention;
FIG. 9 is an alternative schematic diagram of a block structure according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention provide an image processing method and apparatus, a computer device, and a storage medium. Specifically, an image processing apparatus (referred to as a first image processing apparatus for distinction) is provided for a first computer device, where the first computer device may be a terminal such as a mobile phone, a tablet computer, or a notebook computer. A further image processing apparatus (referred to as a second image processing apparatus for distinction) is provided for a second computer device, where the second computer device may be a network-side device such as a server, and the server may be a single server, a server cluster composed of multiple servers, a physical server, or a virtual server.
For example, the first image processing apparatus may be integrated in a terminal, and the second image processing apparatus may be integrated in a server.
The image processing method is introduced below taking the first computer device being a terminal and the second computer device being a server as an example.
Referring to fig. 1a, an embodiment of the present invention provides an image processing system including a terminal 10 and a server 20; the terminal 10 and the server 20 are connected via a network, for example a wired or wireless network, and the first image processing apparatus is integrated in the terminal 10, for example in the form of a client.
The terminal 10 may be configured to display a chat session page of an instant messaging client, where the chat session page includes an original image sent by a chat session user; displaying a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image including: the text is recognized from the original image, and the background content corresponding to the text is recognized, wherein the text is an editable text, and the background content is the content of the original image except the text; when an editing operation for the text in the target image is detected, displaying an editing result of the text.
The target image corresponding to the original image may be generated by the server 20. When the terminal needs to obtain the target image, it may do so by sending an image recognition request carrying the original image to the server 20. The server 20 may be specifically configured to: receive the image recognition request sent by the terminal; obtain the original image based on the request; perform text recognition on the original image to obtain a text recognition result, the result including the text recognized from the original image and the text position of the recognized text in the original image; replace the text at the corresponding text position in the original image with the recognized text in the form of editable text to obtain the target image corresponding to the original image; and send the target image to the terminal 10.
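The server-side flow just described (receive the recognition request, recognize the text and its positions, then pair the background content with editable text layers) can be sketched as follows. This is a minimal illustration in Python; the OCR stub, field names, and data layout are assumptions for demonstration, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TextRegion:
    text: str   # recognized (editable) text
    x: int      # top-left x of the region in the original image
    y: int      # top-left y of the region
    w: int      # region width
    h: int      # region height

def run_ocr(image_bytes: bytes) -> List[TextRegion]:
    """Stand-in for the server's text recognition engine.

    A real deployment might call an OCR engine such as Tesseract; here a
    canned result is returned so the end-to-end flow can be demonstrated.
    """
    return [TextRegion("Hello", 10, 20, 60, 16),
            TextRegion("World", 10, 40, 60, 16)]

def build_target_image(image_bytes: bytes) -> dict:
    """Server-side steps: recognize text and its position in the original
    image, then pair the background content with editable text layers
    placed at the recognized positions."""
    regions = run_ocr(image_bytes)
    return {
        "background": image_bytes,  # original content except the text
        "editable_text": [{"text": r.text,
                           "bbox": (r.x, r.y, r.w, r.h)} for r in regions],
    }

target = build_target_image(b"<image payload>")  # hypothetical request body
```

The terminal would then render `background` as-is and overlay each `editable_text` entry as an editable control at its bounding box.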
The terminal 10 may display an identification result page including the target image after receiving the target image.
In one embodiment, after obtaining the text recognition result, the server may send the text recognition result to the terminal, and the terminal generates a target image of the original image based on the text recognition result and the original image.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Embodiments of the present invention will be described from the perspective of a first image processing apparatus, which may be particularly integrated in a terminal.
An image processing method provided by an embodiment of the present invention may be executed by a processor of a terminal, as shown in fig. 1b, a flow of the image processing method may be as follows:
101. displaying a chat session page of an instant messaging client, wherein the chat session page comprises an original image sent by a chat session user;
To facilitate understanding of the present embodiment, some technical terms appearing in it are first explained:
Instant messaging: a terminal service that allows two or more people to transmit text messages, files, voice, and video in real time over a network. Typical examples include instant messaging tools such as QQ, WeChat, and WhatsApp.
OCR: short for Optical Character Recognition, the process by which an electronic device uses character recognition methods to translate the character shapes in a picture into computer-encoded text.
In the embodiment of the present invention, the chat session page of the instant messaging client may be a single-chat session page, a group-chat session page, or a chat session page with an official account, which is not limited in this embodiment. The chat session user sending the original image may be the current user of the terminal, that is, the user currently logged in on the terminal, or may be another user in the chat session page who has a chat session with the current user, which is also not limited in this embodiment.
In this embodiment, the original image may be any type of image, such as an image in JPG format, an emoticon image, and the like; the content carried in the original image is not limited and may include tables, text, pictures, and so on. The source of the original image is also not limited; it may be an image obtained by screen capture, an image obtained by shooting, or the like.
For example, in one embodiment, the original image may be a screenshot image obtained by a user of the chat session through a screenshot operation on the page of the chat session, or the original image may be an image captured by a camera of the terminal during the chat session.
102. Displaying a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image containing: the text recognized from the original image and the background content corresponding to the text, wherein the text is an editable text and the background content is the content of the original image except the text.
It can be understood that, in this embodiment, the recognition result page of the original image is displayed only when text exists in the original image and is recognized from it; if no text is recognized from the original image (for example, no text exists in the original image, or text recognition on the original image fails), the recognition result page is not displayed.
In the present embodiment, even when text can be recognized from the original image, the recognized text is not necessarily completely equivalent to the original text in the image, since some text in the original image may be difficult to recognize. It is to be understood that the distribution of the editable text and the background content in the target image is similar to the distribution of the original text and the other content in the original image. In one embodiment, the target image may be understood as being obtained by replacing, on the basis of the original image, the original text with the text recognized from it; one difference between the original image and the target image is that the recognized text in the target image is editable, while the original text in the original image is not.
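The split between editable text and untouched background content can be illustrated with a small sketch. The dictionary structure, field names, and sample text below are assumptions for illustration only:

```python
# A target image modeled as background content plus editable text layers;
# the structure and field names are illustrative assumptions.
target_image = {
    "background": "<original content except the text>",
    "text_layers": [{"text": "Meeting at 3pm", "bbox": (12, 30, 140, 18)}],
}

def edit_text(target: dict, layer_idx: int, new_text: str) -> dict:
    """Editing operation on the target image: only the editable text layer
    changes; the background content (everything in the original image other
    than the text) is left untouched."""
    target["text_layers"][layer_idx]["text"] = new_text
    return target

edited = edit_text(target_image, 0, "Meeting at 4pm")
```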
In this embodiment, the image text recognition operation may be a specific touch operation, such as a long-press operation, a double-click operation, a sliding operation, and the like. Optionally, the image text recognition operation may also be a combination of a series of operations, which is not limited in this embodiment.
For example, referring to the display schematic of the recognition result page depicted in fig. 2a: in the chat session page shown in 201 of fig. 2a, a friend A of the current user of the terminal sends an image a (the above-mentioned original image) to the current user; in the page shown in 201, the original image a is shown in a thumbnail state. The user may perform an image text recognition operation on the original image, and based on that operation, the recognition result page shown in 202 may be displayed. The recognition result page includes text and an illustration, the text being the text recognized from the original image a, and the text is editable.
Optionally, in this embodiment, the recognition result page may also include text identifiers, each corresponding to the text of one text region in the target image. A text identifier may take the form of an underline, a color mark, a text box, or the like. Text regions may be divided in units of text lines or text columns (depending on the arrangement of the text in the original image). Referring to 202 in fig. 2a, each line of text corresponds to a text box containing that line, and the text in the text box may be edited as a whole, for example forwarded, copied, or modified.
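Dividing the recognized text into line regions, as described above, can be approximated by clustering recognized word boxes on their vertical position. The following sketch makes illustrative assumptions about the input shape (each word as `(text, x, y)`) and the tolerance value:

```python
def group_into_lines(words, line_tol=8):
    """Group recognized word boxes into line regions by their top edge.

    Each element of `words` is (text, x, y); each returned string is the
    text of one line region, i.e. one text box on the recognition page.
    The data shape and tolerance are illustrative assumptions.
    """
    lines = {}  # representative y -> [(x, text), ...]
    for text, x, y in sorted(words, key=lambda w: (w[2], w[1])):
        # find the nearest existing line; start a new one if too far away
        key = min(lines, key=lambda k: abs(k - y), default=None)
        if key is not None and abs(key - y) <= line_tol:
            lines[key].append((x, text))
        else:
            lines[y] = [(x, text)]
    # join each line's words left to right
    return [" ".join(t for _, t in sorted(ws)) for ws in lines.values()]

line_texts = group_into_lines([("Hello", 10, 20), ("World", 80, 22),
                               ("Bye", 10, 60)])
```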
Optionally, in this embodiment, the step "displaying the recognition result page of the original image based on the image text recognition operation on the original image" may include:
displaying an image text recognition control based on a control display operation for the original image;
and when the triggering operation aiming at the image text recognition control is detected, displaying a recognition result page of the original image.
In this embodiment, the control may be represented as an icon, an input box, a button, or the like.
In this embodiment, the control display operation may be a touch operation for the original image, such as a double-click operation, a long-press operation, and the like, and the control display operation may also be triggered in a voice manner.
Optionally, based on the control display operation for the original image, in addition to displaying the image text recognition control, other controls for the original image may be displayed, such as a forwarding control for forwarding the original image when triggered, an editing control for editing the original image when triggered, and so on. The present embodiment does not limit this.
Alternatively, there are a variety of ways to display the image text recognition control.
(1) Triggered by an operation on the original image in a full-screen display state;
optionally, the step "displaying an image text recognition control based on a control display operation for the original image" may include:
when the display operation aiming at the original image is detected, displaying an image amplification page of the original image, wherein the image amplification page comprises the original image in a full-screen display state;
and when the control display triggering operation aiming at the original image is detected on the image amplification page, displaying an image text recognition control.
In this embodiment, when the control display triggering operation for the original image is detected on the image magnification page, in addition to displaying the image text recognition control, other controls may be displayed, such as a collection control for adding the original image to the image collection.
When the control display triggering operation for the original image is detected on the image magnification page, displaying the image text recognition control may include: displaying a sub-page on the image magnification page, the sub-page including the image text recognition control.
For example, referring to the display schematic of the recognition result page shown in fig. 2b: in the chat session page shown in 201, a friend A of the current user of the terminal sends an image a (the above-mentioned original image) to the current user, and the original image a is shown in a thumbnail state. When a display operation for the original image a, such as a click operation, is detected in the page shown in 201, the image magnification page shown as reference numeral 203 in fig. 2b is displayed; this page includes the original image in a full-screen display state. When a control display triggering operation for the original image, such as a long-press operation, is detected in the image magnification page, an image text recognition control is displayed, namely the control named "extract characters in the figure" in the page shown in 204. When a trigger operation for that control, such as a click operation, is detected, the recognition result page shown in 202 is displayed.
The display operation for the original image may also be a double-click operation, a long-press operation, or the like, which is not limited in this embodiment. The image text recognition control may be displayed in the form of a small window or, as shown in 204, in a sub-page; it can be understood that controls for other functions may also be displayed in the sub-page, such as a control "friend" for sharing the original image with an associated user. In this embodiment, an associated user is a user in the instant messaging client address book of the current user.
In one embodiment, when a trigger operation for the image text recognition control is detected, displaying a recognition result page of the original image includes:
when the triggering operation aiming at the image text recognition control is detected, displaying a recognition waiting page of the original image, wherein the recognition waiting page comprises the original image and a recognition result loading icon;
and when the identification of the original image is successful, displaying an identification result page of the original image.
(2) Triggered by an operation on the original image in the chat session page;
optionally, the step "displaying an image text recognition control based on a control display operation for the original image" may include:
when the control display operation aiming at the original image is detected, displaying a function control list corresponding to the original image on the chat session page, wherein the function control list comprises an image text recognition control.
In this embodiment, the control display operation for the original image may be a long press, a circle drawing, and the like for the original image, and the function list may include other controls, such as a forwarding control for forwarding the original image, in addition to the image text recognition control.
For example, referring to the display schematic of the recognition result page shown in fig. 2c: in the chat session page shown in 201, a friend A of the current user of the terminal sends an image a to the current user. When a control display operation for the original image a, such as a long press or a double click, is detected on the page shown in 201, a function control list 2011 is displayed on the chat session page; the list includes an image text recognition control, such as a control named "text recognition". When a trigger operation, such as a click operation, for the "text recognition" control in the function control list 2011 is detected, the recognition waiting page of the original image shown in 205 is displayed, the page including the original image and a recognition result loading icon, such as an "extracting text" icon; when recognition of the original image succeeds, the recognition result page shown in 202 is displayed.
In one example, when a trigger operation such as a click operation for the "text recognition" control in the function control list 2011 is detected, the recognition waiting page of the original image shown in 205 may not be displayed, but when the recognition for the original image is successful, the recognition result page shown in 202 may be directly displayed.
In this embodiment, it is considered that for certain relatively special image contents, the operations users prefer have a certain commonality: for example, for an ID card photo the preferred operation is to extract the ID card number, and for a bank card photo the preferred operation is to extract the bank card number. Inspired by these situations, this embodiment provides a shortcut operation for the original image, which reduces the time a user spends operating on the image and helps the user quickly obtain the desired result.
Optionally, in this embodiment, the image magnification page further includes: and the shortcut operation control corresponds to the target content in the original image and is used for executing the operation indicated by the shortcut operation control on the target content when the shortcut operation control is triggered.
The target content may be set by a user, or may be set by a developer of the instant messaging client, which is not limited in this embodiment. The target content may include: various certificate cards such as bank cards, identity cards and driver licenses, code patterns such as two-dimensional codes and bar codes, or tickets with specific formats such as airline tickets, express tickets and tax receipts.
In an embodiment, the original images may be classified, and the shortcut operation control corresponding to the image type of the original image is determined based on the image type of the original image, and each shortcut operation control in this embodiment may be provided with corresponding target content.
Optionally, when the display operation for the original image is detected, displaying an image magnification page of the original image may include:
when the display operation aiming at the original image is detected, triggering the image type identification of the original image to acquire the image type of the original image;
and displaying an image amplification page of the original image, wherein the image amplification page comprises a shortcut operation control corresponding to the image type, and the shortcut operation control is used for executing the operation indicated by the shortcut operation control for the target content in the original image when being triggered.
For example, referring to fig. 2d, assume that the original image is a photograph of a Chinese XXX bank card. A shortcut operation control corresponding to the bank card photograph, such as a number extraction control named "extract number", is displayed in the image magnification page 203 of the original image. When a trigger operation for the number extraction control is detected, a number extraction result page of the original image is displayed, where the number extraction result page includes a number extraction result image, and the number extraction result image includes: the number as editable text, and, as background content, the content of the original image other than the number. A text box corresponding to the number is displayed around the number extracted in the number extraction result page. When a trigger operation, such as a click operation, for the text box is detected, a function control list for the text box is displayed, the function control list including function controls such as a copy control, a forwarding control and an editing control; when a control in the function control list is triggered, the corresponding operation is performed on the content in the selected text box. For example, when the copy control is clicked, the number in the text box, such as 6224 XXXXXXXXXXXXXXXX, is added to a copied-content set so that it can be used later.
For example, referring to fig. 2e, assuming that the original image includes text content, such as english text, the shortcut control displayed in the enlarged image page may be a translation control, such as the "text in translation" control in fig. 2 e.
For another example, referring to fig. 2f, assuming that the original image includes text content, the shortcut control displayed in the enlarged image page may be an image text recognition control, such as the control named "recognize characters in diagram" in fig. 2 f.
For another example, referring to fig. 2g, assuming that the original image includes a two-dimensional code, the shortcut operation control displayed in the enlarged image page may be a two-dimensional code recognition control, such as the control named "recognize two-dimensional code" in fig. 2 g.
For another example, referring to fig. 2h, assuming that a barcode is included in the original image, the shortcut operation control displayed in the image magnification page may be a barcode recognition control, such as the control named "recognize barcode" in fig. 2h.
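The examples of figs. 2d-2h amount to a lookup from image type to shortcut control. A minimal sketch, with hypothetical type names and using the control labels from the examples above:

```python
from typing import Optional

# Hypothetical mapping from a recognized image type to the shortcut
# operation control displayed on the image magnification page.
SHORTCUT_CONTROLS = {
    "bank_card": "extract number",
    "text": "recognize characters in diagram",
    "foreign_text": "translate text in image",
    "qr_code": "recognize two-dimensional code",
    "barcode": "recognize barcode",
}

def shortcut_control_for(image_type: str) -> Optional[str]:
    """Return the shortcut control label for this image type, or None
    if the image type has no associated shortcut operation."""
    return SHORTCUT_CONTROLS.get(image_type)
```

In this scheme, once the triggered image type recognition returns a type, the client only needs this one lookup to decide which shortcut control to draw on the page.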
103. When an editing operation for the text in the target image is detected, displaying an editing result of the text.
In this embodiment, the editing operation for the text in the target image may be any known type of text editing operation on text, such as modifying, copying, forwarding, or cutting.
Alternatively, the step "displaying an editing result of the text when the editing operation for the text in the target image is detected" may include:
when detecting a modification triggering operation aiming at a target text in the text, displaying a text input control;
determining a modified text corresponding to the target text based on the text input operation aiming at the text input control;
when detecting that the text input for the text input control is finished, displaying a modified target image, wherein the target text in the modified target image is replaced by the modified text.
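The replacement in the last step above can be sketched as follows, assuming (hypothetically) that the editable text layer of the target image is held as a list of blocks, each with an id, its text, and its text box position:

```python
def apply_modification(blocks, target_id, modified_text):
    """Replace the target text block's content with the modified text
    while keeping its position (and therefore the background content
    behind it) untouched."""
    return [
        {**b, "text": modified_text} if b["id"] == target_id else b
        for b in blocks
    ]
```

Because only the `text` field changes, redrawing the modified target image reuses the same background and text positions as before the edit.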
In this embodiment, the target text may be all editable texts in the target image, or may be obtained based on a text selection operation.
Optionally, the step "displaying a text input control when a modification trigger operation for a target text in the text is detected" includes: and determining the selected target text in the target image based on the selection operation aiming at the text in the target image, and displaying a text input control.
In one embodiment, a text box is displayed around each line of text, the selection operation for the target image may be a selection operation for the text box, and the text in the selected text box is the target text.
In one embodiment, the text input control comprises an input box and an input sub-control. The input box displays the selected target text, and the target text in the input box can be modified based on the text input operation on the input sub-control. When a text input ending operation for the text input control is detected, the text in the input box is used as the modified text of the target text, the modified text replaces the target text in the target image, and the replaced target image is displayed.
The input sub-control can be a control such as a keyboard.
Referring to fig. 3a, when a trigger operation for a text box in the page 301 of fig. 3a is detected, an editing function control list is displayed, which contains controls such as copy, forward, and edit. When a trigger operation for the "editing" control is detected, the image editing page shown in 302 is displayed; the text in the text box corresponding to the trigger operation is the target text. The image editing page 302 includes a text input control comprising an input box 3021 and an input sub-control 3022, where the target text "He ads coarse for the eyes and buttons" is displayed in the input box. The modified text corresponding to the target text is determined based on the text input operation for the input sub-control, and when a text input end operation for the text input control is detected, the text "He ads tone for the eyes and buttons in" in the input box is used as the modified text. The modified target image is then displayed (as shown in 304), with the original "He ads teal for the eyes and buttons. in" in the target image replaced by "He ads tone for the eyes and buttons. in".
For example, referring to fig. 3b, more than one text box may be selected. When a trigger operation for a text box in the page 301 of fig. 3a is detected, an editing function control list is displayed, and the text box corresponding to the trigger operation may be marked; for example, a grey text box represents the text box corresponding to the trigger operation, that is, the text box selected by the user. The editing function control list includes controls for copying, forwarding, editing, and the like. When a trigger operation for the "forwarding" control is detected, the forwarding destination selection page shown in 305 is displayed; a forwarding user selection page is displayed based on the selection operation on the forwarding destination selection page, and when a user selection operation on the forwarding user selection page is detected, the text in the text boxes selected by the user is forwarded to the user corresponding to the user selection operation. For example, the contents of the two grey text boxes are forwarded to friend B corresponding to the user selection operation (refer to the page identified by 307 in fig. 3b).
In this embodiment, the content in the target image can be shared; the sharing may be in plain-text form or in picture form, and plain-text sharing includes sharing all of the text or only part of it.
Optionally, in this embodiment, the method of this embodiment further includes:
when the sharing triggering operation aiming at the target image is detected, displaying a text sharing control and an image sharing control;
when the triggering operation aiming at the text sharing control is detected, sharing the text in the target image;
and when the triggering operation aiming at the image sharing control is detected, sharing the target image.
The shared text may be part of the text or all of it. Optionally, the recognition result page further includes a sharing trigger control, and the step "when the sharing trigger operation for the target image is detected, the text sharing control and the image sharing control are displayed" may include: when the trigger operation for the sharing trigger control is detected, displaying the text sharing control and the image sharing control.
For example, referring to fig. 3c, a sharing trigger control, such as a control named "forward", is included in the recognition result page shown in 301. When a trigger operation, such as a click operation, for the sharing trigger control is detected, the sharing selection page shown in 308 is displayed; the sharing selection page includes a text sharing control, such as a "text" control, and an image sharing control, such as a "picture" control. When a trigger operation for the "text" control is detected, the text content of the editable text in the target image is shared; for example, referring to 309 in fig. 3c, the text content in the target image is shared with friend D. It can be understood that text sharing is not limited to sharing with a friend: the text content can also be shared with a user group, to a friend circle, and so on.
For example, referring to fig. 3d, a sharing trigger control, such as a control named "forward", is included in the recognition result page shown in 301. When a trigger operation, such as a click operation, for the sharing trigger control is detected, the sharing selection page shown in 310 is displayed; the sharing selection page includes a text sharing control, such as a "text" control, and an image sharing control, such as a "picture" control. When a trigger operation for the "picture" control is detected, the target image itself is shared: based on the sharing object selection operation for the target image, the target image is shared with the selected sharing object, such as friend D. It is understood, however, that the sharing object is not limited to a user; it may also be a message integration page of the instant messaging client, such as a friend circle page.
Optionally, in this embodiment, the method of this embodiment further includes: when the text translation operation aiming at the target image is detected, displaying a translation result page corresponding to the target image, wherein the translation result page comprises a translation image corresponding to the target image, and the translation image comprises: the translation result corresponding to the text in the target image, and the background content corresponding to the text in the target image.
The text translation operation may be some special touch operations, such as long-time pressing, double-click, triple-click, and the like, and the text translation operation may also be implemented by a trigger operation on a control. Optionally, in an embodiment, a translation control is included in the recognition result page, for example, a translation control such as a control named "translate" is displayed in the recognition result page 301 of fig. 3 e.
The step of displaying a translation result page corresponding to the target image when the text translation operation for the target image is detected may include:
and when the triggering operation aiming at the translation control in the target image is detected, displaying a translation result page corresponding to the target image.
For example, referring to fig. 3e, when a trigger operation such as a click operation for the "translation" control in the recognition result page shown in 301 is detected, the translation result page shown in 311 is displayed. In the translation result page, the editable text in the target image is replaced by the corresponding translation result.
In this embodiment, a scheme for sharing a target image is further provided, and optionally, the method of this embodiment further includes:
when the image sharing operation aiming at the target image is detected, displaying a sharing setting page of the target image;
determining a target sharing style of a target image based on a sharing style selection operation for the sharing setting page;
determining an image to be shared based on the target sharing style and the target image;
and sharing the image to be shared.
The identifying result page may include a sharing trigger control, and the step "when the image sharing operation for the target image is detected, displaying a sharing setting page of the target image" may include:
when the triggering operation aiming at the sharing triggering control is detected, displaying a second text sharing control and a second image sharing control;
when the trigger operation for the second image sharing control is detected, the sharing setting page is displayed, in this embodiment, the sharing setting page may be used to select not only the sharing style of the target image but also the sharing object of the target image, and the process of selecting the sharing object may refer to the foregoing description, which is not repeated herein.
In one embodiment, the display of the sharing setting page may be triggered for an operation of the translation results page. Optionally, a translation control and other functional controls, such as a forwarding control, may also be included on the translation result page.
Optionally, "displaying a sharing setting page of the target image when the image sharing operation for the target image is detected" may include:
when the triggering operation aiming at the sharing triggering control on the translation result page is detected, displaying a second text sharing control and a second image sharing control;
when the trigger operation for the second image sharing control is detected, the sharing setting page is displayed, in this embodiment, the sharing setting page may be used to select not only the sharing style of the target image but also the sharing object of the target image, and the process of selecting the sharing object may refer to the foregoing description, which is not repeated herein.
For example, referring again to fig. 3e, when a trigger operation, such as a click operation, for the "translate" control in the recognition result page shown in 301 is detected, the translation result page shown in 311 is displayed, in which the editable text in the target image is replaced with the corresponding translation result. A sharing trigger control, such as a control named "forward", is displayed in the translation result page. When a trigger operation for the "forward" control is detected, a second text sharing control, such as a control named "text", and a second image sharing control, such as a control named "picture", are displayed; the second text sharing control and the second image sharing control may be displayed on the translation result page or on the recognition result page (refer to fig. 3e), which is not limited in this embodiment. When the trigger operation for the "picture" control is detected, the sharing setting page shown in 313 is displayed; a target sharing style of the target image is determined based on the sharing style selection operation on the sharing setting page, the image to be shared is determined based on the target sharing style and the target image, and the image to be shared is then shared.
The sharing styles in this embodiment include three types: sharing the recognition result of the original image, sharing the translation result, and sharing the translation comparison result.
Optionally, if the target sharing style is sharing the recognition result, determining the image to be shared based on the target sharing style and the target image includes: determining the target image as the image to be shared.
For example, in the sharing setting page 313 shown in fig. 3e, if the selected target sharing style is "recognition result", the image to be shared is the target image.
Optionally, the sharing setting page includes preview images of the images to be shared in each sharing style. Determining a target sharing style of a target image based on a sharing style selection operation for the sharing setting page may include: and determining a target sharing style of the image to be shared based on the selection operation aiming at the preview image in the sharing setting page.
Optionally, if the target sharing style is a sharing translation result, determining the image to be shared based on the target sharing style and the target image includes: and determining the translation image corresponding to the target image as an image to be shared.
In this embodiment, if no text translation operation for the target image has been detected before the image to be shared is determined based on the target sharing style and the target image, the target image may be text-translated to obtain a translation image of the target image. Optionally, determining the image to be shared based on the target sharing style and the target image includes:
and acquiring a translation image of the target image, and determining the translation image corresponding to the target image as an image to be shared.
For example, in the sharing setting page shown in fig. 3f, if the selected target sharing style is a "translation result", the image to be shared is a translation image of the target image.
Optionally, if the target sharing style is a sharing translation comparison result, determining the image to be shared based on the target sharing style and the target image includes:
and acquiring a translation contrast image of the target image, wherein the translation contrast image comprises the content in the target image and the content in the translation image of the target image.
In this example, the translation contrast image may be obtained by stitching the target image and the translation image of the target image. The splicing can be completed by the terminal, or the terminal sends a splicing instruction to the server, and the server completes the splicing of the target image and the translation image.
For example, in the sharing setting page shown in fig. 3g, if the selected target sharing style is "translation contrast", the image to be shared is a translation contrast image.
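The splicing that produces the translation contrast image can be sketched as a simple vertical concatenation. This is a minimal stand-in only: images are modeled as row-lists of pixels, whereas the terminal or server would use a real imaging library.

```python
def stitch_vertically(target_img, translation_img):
    """Splice the target image and its translation image top-to-bottom
    to form the translation contrast image.  Images are modeled as
    row-lists of pixel values; both must have the same width."""
    if len(target_img[0]) != len(translation_img[0]):
        raise ValueError("images must have the same width to be spliced")
    return target_img + translation_img
```

Whether the terminal performs this splicing itself or sends a splicing instruction to the server, the resulting contrast image simply stacks the two inputs.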
In this embodiment, an editable text may be further extracted from the target image to be displayed and edited, and optionally, the image processing method of this embodiment may further include:
when a text extraction operation for the target image in the recognition result page is detected, displaying a text extraction result page of the target image, wherein the text extraction result page comprises editable text in the target image.
That is, the text in the text extraction result page is derived from the text recognized from the original image.
The text extraction operation may be a specific touch operation, such as a double-click touch operation, a long-press touch operation, and the like, and in addition, the text extraction operation may also be implemented by a trigger operation on a control.
For example, referring to fig. 4a, a text extraction control, such as a control named "extraction portion", is included in the recognition result page 401, and when a trigger operation for the control is detected, the text extraction result page 402 of the target image is displayed.
The text in the recognition result page 401 is editable, and if the user selects a part of text boxes in the recognition result page 401, the selected text in the text boxes is the extracted text corresponding to the text extraction control. Optionally, the step "displaying the text extraction result page of the target image when the text extraction operation for the target image in the recognition result page is detected" may include:
determining selected texts in the target image based on selection operation aiming at the texts in the target image;
and when the triggering operation aiming at the text extraction control in the recognition result page is detected, displaying the text extraction result page of the target image, wherein the text extraction result page comprises the selected text.
Thereby, partial extraction of text in the target image can be achieved.
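The partial extraction above reduces to filtering the editable text blocks by the user's selection. A minimal sketch, assuming (hypothetically) that each text box carries an id and its text:

```python
def extract_selected_texts(blocks, selected_ids):
    """Return only the editable texts whose text boxes the user selected,
    preserving their order in the target image."""
    return [b["text"] for b in blocks if b["id"] in selected_ids]
```

With an empty selection this yields nothing; selecting every box reproduces the full-text extraction case.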
Optionally, in this embodiment, after the step "displaying the text extraction result page of the target image", the method may further include:
when contrast display operation aiming at the text extraction result page is detected, displaying a contrast page, wherein the contrast page comprises a first display area and a second display area, the first display area is used for displaying the target image, and the second display area is used for displaying the text extraction result of the target image.
Optionally, the contrast display operation is a specific touch operation, and may also be implemented by an operation on a control.
Optionally, the text extraction result page further includes a contrast display control, and when a contrast display operation for the text extraction result page is detected, displaying the contrast page may include: and when the triggering operation for the contrast display control is detected, displaying the contrast page.
For example, referring to fig. 4b and 402, the text extraction result page includes a contrast display control, such as a control named "contrast control", and when a trigger operation for the "contrast control" is detected, the contrast page 403 is displayed, where the contrast page 403 includes two display areas, a first display area 4031 and a second display area 4032, the first display area is used for displaying the target image, and the second display area is used for displaying the text extraction result of the target image.
In one embodiment, the first display area may display the original image rather than the target image, so that the original image and the text extraction result of the target image are displayed side by side for comparison; this provides an original-image comparison function for the text extraction result, making it easy to check whether the text recognized from the original image contains errors.
Optionally, when the first display area displays the target image, the method of this embodiment further includes:
when a text selection operation aiming at the target image in the first display area is detected, determining a selected text corresponding to the text selection operation in the target image;
and adjusting the text extraction result displayed in the second display area based on the selected text, wherein after adjustment, the text extraction result displayed in the second display area comprises the text extraction result corresponding to the selected text.
For example, referring to fig. 4b, in the comparison page shown in 403, when a text selection operation for the line of text "He puts a big snowball on top, He ads a" in the first display area is detected, that line of text is used as the selected text, and the text extraction result displayed in the second display area is adjusted based on the selected text. The adjusted second display area is shown in 404; compared with 403, the display position of "He puts a big snowball on top, He ads a" in the second display area of 404 is more prominent.
In this embodiment, the text extraction result in the second display area is editable, and when an input trigger operation is detected in the second display area, a second text input control is displayed there. The input trigger operation may be a click operation, and a cursor, as indicated by a in fig. 4b, may be displayed at the position clicked by the user to prompt the user of the text input position. When the text input control is displayed, this embodiment may increase the area of the second display region, for example by raising the upper boundary line of the second display region.
Optionally, when the first display area displays the original image, the method of this embodiment further includes:
when a text selection operation aiming at the original image in the first display area is detected, determining a selected text corresponding to the text selection operation in the original image;
and adjusting the text extraction result displayed in the second display area based on the selected text, wherein after adjustment, the text extraction result displayed in the second display area comprises the text extraction result corresponding to the selected text.
In this embodiment, the text in the original image may have position information, for example, the text in the original image may also be identified by a text box or the like, and the position information of the text box is used as the position information of the text identified by the text box. The position information of the text box may be determined based on the position information of the corresponding text box in the target image.
Optionally, in this embodiment, the step "displaying the recognition result page of the original image based on the image text recognition operation on the original image" may include:
triggering to acquire a text recognition result of the original image based on an image text recognition operation aiming at the original image, wherein the text recognition result comprises a text recognized from the original image and a text position of the text in the original image;
replacing the original text on the corresponding text position in the original image with the recognized text in the form of editable text to obtain a target image corresponding to the original image;
and displaying an identification result page of the original image, wherein the identification result page comprises the target image.
The text recognition result and the target image may both be generated independently by the terminal; alternatively, the text recognition result may be obtained by the server by recognizing the original image while the target image is generated by the terminal based on the original image and the text recognition result; or both the text recognition result and the target image may be generated by the server. This embodiment does not limit which of these is used.
Optionally, in this embodiment, the original image may be recognized by an OCR technology to obtain a text recognition result.
Optionally, the step of replacing the recognized text with the text at the corresponding text position in the original image in the form of an editable text to obtain the target image corresponding to the original image includes:
analyzing the recognized text to obtain at least one text block based on the text position of the text in the text recognition result;
sequencing the text blocks, and typesetting the texts in the text blocks;
and replacing the corresponding text content in the original image by the typeset text block to obtain the target image.
Based on the position information of the text in the text recognition result, the text in the original image may be removed and the removed region filled with the background content near it, yielding a background image; the typeset text blocks may then be drawn into the background image in the form of editable text to obtain the target image.
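The text-removal step can be sketched as blanking out each recognized text box in the image. This is a simplified stand-in: images are row-lists of pixels and a single fill value substitutes for sampling the background content near the removed text (a real implementation might use inpainting):

```python
def blank_text_regions(original, text_boxes, background_fill):
    """Blank out each recognized text box in the original image and fill
    it with a background value, producing the background image into which
    editable text blocks are later drawn.  text_boxes are (x0, y0, x1, y1)
    with half-open pixel ranges."""
    img = [row[:] for row in original]  # leave the original image untouched
    for (x0, y0, x1, y1) in text_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                img[y][x] = background_fill
    return img
```

Drawing the typeset text blocks onto this background at their text positions then yields the target image with editable text over preserved background content.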
By adopting the image processing method of this embodiment, the chat session page of the instant messaging client can be displayed, the chat session page including an original image sent by a chat session user; a recognition result page of the original image is displayed based on an image text recognition operation for the original image, the recognition result page including a target image that comprises text recognized from the original image and background content corresponding to that text, where the text is editable text and the background content is the content of the original image other than the text; and when an editing operation for the text in the target image is detected, the editing result of the text is displayed. In this way, the original image sent by a chat session user in the chat session can be recognized into a target image with editable text, and the user can directly edit the text in the target image, obtaining an editing experience similar to editing text in the original image itself.
The method described in the above examples is further illustrated in detail below by way of example.
In this embodiment, an example will be described in which the first image processing apparatus is specifically integrated into a terminal and the second image processing apparatus is specifically integrated into a server.
As shown in fig. 5a, an image processing method specifically includes the following steps:
501. and the terminal displays a chat session page of the instant messaging client, wherein the chat session page comprises an original image sent by a chat session user.
502. The terminal sends an image recognition request to the server based on the image text recognition operation for the original image, where the image recognition request may carry the original image;
referring to the optional timing diagram of the image processing method shown in fig. 5b, the user may send the original image to the server by long-pressing the original image in the chat session page of the instant messaging client, so as to trigger the identification of the original image.
In this embodiment, the server may be composed of many components, as described with reference to fig. 5b, including but not limited to: the device comprises a cloud recognition background component, an OCR recognition service component, a cloud recognition typesetting component, a drawing component and a picture generation component. These components may be integrated into one server or may be integrated into different servers, which is not limited in this embodiment.
Optionally, the terminal may send the image recognition request to a cloud recognition background component of the server through a big data channel.
The cloud recognition background integrates a plurality of classification and recognition services and supports different recognition types. Classification here means classifying the picture into a recognition type, such as a number recognition type, a text recognition type, or a code pattern recognition type.
For the image text recognition operation on the original image, the terminal side can be considered to have actively selected the text recognition type; alternatively, the terminal can write the text recognition type directly into the image recognition request when sending it. After receiving the image recognition request, the cloud recognition background can start the cloud OCR service to extract the characters in the original image.
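A minimal sketch of such a request body, with hypothetical field names (the patent does not specify the wire format), showing the recognition type written directly into the image recognition request:

```python
import json

def build_image_recognition_request(image_ref, recognition_type="text"):
    """Hypothetical request body: the terminal writes the recognition type
    (e.g. number / text / code pattern) into the image recognition request
    before sending it to the cloud recognition background."""
    return json.dumps({"image": image_ref, "recognition_type": recognition_type})
```

On receipt, the cloud recognition background would dispatch on `recognition_type` — here, a `"text"` value triggers the OCR service.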
503. The server receives the image recognition request sent by the terminal and acquires the original image based on the request;
504. The server performs text recognition on the original image to obtain a text recognition result, where the text recognition result includes the text recognized from the original image and the text position of that text in the original image;
after receiving the image recognition request, the cloud recognition background of the server finds that the recognition type of the original image is a text recognition type, invokes an OCR service component, performs OCR recognition on the original image, and receives a recognition result of the OCR service component.
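The text recognition result of step 504 pairs each recognized line with its position in the original image. A minimal Python sketch of such a data model (the class and field names here are illustrative, not the patent's actual schema) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class TextBox:
    """One line of text extracted by OCR, with the coordinates of the
    four corners of its bounding box in the original image."""
    text: str
    corners: list  # [(x, y), ...], four corners, clockwise from top-left

@dataclass
class RecognitionResult:
    """A text recognition result: every recognized line plus its
    position, as the OCR service component might return it."""
    boxes: list = field(default_factory=list)

    def all_text(self) -> str:
        return "\n".join(b.text for b in self.boxes)

result = RecognitionResult(boxes=[
    TextBox("Hello", [(10, 10), (60, 10), (60, 30), (10, 30)]),
    TextBox("world", [(10, 40), (60, 40), (60, 60), (10, 60)]),
])
print(result.all_text())
```

The corner coordinates are exactly what the typesetting step below consumes as input.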
505. The server replaces the text at the corresponding text position in the original image with the recognized text in the form of editable text, to obtain a target image corresponding to the original image;
The server can determine, based on the recognized text position, the original content at that position in the original image, and replace that original content with the recognized text in the form of editable text to obtain the target image.
However, this direct replacement may leave the replaced text poorly laid out and hard to read.
Optionally, the step that the server replaces the text in the original image at the corresponding text position with the recognized text in the form of an editable text to obtain the target image corresponding to the original image may include:
the server analyzes the recognized text to obtain at least one text block based on the position information of the text in the text recognition result;
the server sequences the text blocks and typesets the texts in the text blocks;
and the server replaces the corresponding text content in the original image with the typeset text block to obtain the target image.
The text recognition result can be an OCR recognition result, and after the cloud recognition background of the server receives the OCR recognition result, the cloud recognition typesetting component can be called to typeset the OCR recognition result.
For example, the cloud recognition background of the server calls the cloud recognition typesetting component to analyze the text in the OCR recognition result to obtain at least one text block based on the text position of the text in the OCR recognition result, sort the text blocks, and typeset the text in the text blocks.
The cloud recognition typesetting component can first judge, through a classification algorithm, whether the original image contains a preset document. If it does not, simple typesetting is applied; for example, if only a small amount of text, such as a single line, is recognized in the original image, the original image is considered not to contain a preset document and is typeset simply.
If the original image contains a preset document, for example if a large amount of text is recognized in it, the cloud recognition typesetting component can perform typesetting with a layout analysis algorithm.
In this embodiment, the layout analysis algorithm adopted by the cloud recognition typesetting component may be an optimized Docstrum algorithm. The algorithm takes the text positions extracted by OCR (for example, the coordinates of the four corners of each text box) as input, which mitigates drawbacks of the conventional Docstrum algorithm such as long running time and hard-to-control thresholds, and finally merges the OCR text boxes into text blocks.
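The patent does not disclose the optimized Docstrum algorithm in detail, but as a hedged illustration only, a greatly simplified merge of OCR text boxes into blocks, using vertical whitespace as the grouping cue, could look like:

```python
def merge_into_blocks(boxes, line_gap=1.5):
    """Greedy, simplified stand-in for a Docstrum-style merge: sort
    OCR text boxes top-to-bottom and start a new block whenever the
    vertical gap to the previous box exceeds `line_gap` times that
    box's height. Each box is (text, x, y, w, h)."""
    boxes = sorted(boxes, key=lambda b: b[2])  # sort by top y
    blocks, current = [], []
    for box in boxes:
        if current:
            prev = current[-1]
            gap = box[2] - (prev[2] + prev[4])  # vertical whitespace
            if gap > line_gap * prev[4]:
                blocks.append(current)          # large gap: new block
                current = []
        current.append(box)
    if current:
        blocks.append(current)
    return blocks

boxes = [("Title", 10, 10, 100, 12),
         ("Body line 1", 10, 60, 100, 12),
         ("Body line 2", 10, 76, 100, 12)]
print(len(merge_into_blocks(boxes)))  # 2: the title is split from the body
```

A real Docstrum-style method would instead cluster on nearest-neighbour angles and distances; this sketch only shows the input/output shape of the merging step.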
The cloud recognition typesetting component can determine text lines based on the text positions extracted by OCR, and then divide the text lines into at least one text block based on the centroids of the lines. After the text blocks are divided, the component can sort them, for example by recursively cutting in the vertical and horizontal directions to construct a binary tree and determining the order of the text blocks from that tree, so that the order matches the user's reading order and reading logic.
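The recursive vertical-and-horizontal cutting described above resembles a classic XY-cut. The sketch below is an illustration under that assumption, not the patent's actual implementation:

```python
def xy_cut(blocks, axis=0, tried=False):
    """Recursively order layout blocks (label, x, y, w, h) by cutting
    at whitespace gaps: first along `axis` (0 = cuts between columns,
    1 = cuts between rows), then along the other axis. The recursion
    tree corresponds to the binary tree described in the text; an
    in-order traversal yields the reading order."""
    if len(blocks) <= 1:
        return list(blocks)
    lo, hi = (1, 3) if axis == 0 else (2, 4)   # (position, extent) indices
    blocks = sorted(blocks, key=lambda b: b[lo])
    end = blocks[0][lo] + blocks[0][hi]
    for i in range(1, len(blocks)):
        if blocks[i][lo] > end:                 # whitespace gap: cut here
            return (xy_cut(blocks[:i], 1 - axis) +
                    xy_cut(blocks[i:], 1 - axis))
        end = max(end, blocks[i][lo] + blocks[i][hi])
    if not tried:
        return xy_cut(blocks, 1 - axis, tried=True)  # no gap: try other axis
    return blocks                               # no cut on either axis

# Two columns, each with two stacked blocks: read column by column.
blocks = [("A", 0, 0, 40, 10), ("B", 0, 20, 40, 10),
          ("C", 60, 0, 40, 10), ("D", 60, 20, 40, 10)]
print([b[0] for b in xy_cut(blocks)])  # ['A', 'B', 'C', 'D']
```

Here the first cut separates the two columns, and each column is then ordered top to bottom, matching the reading logic the component aims for.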
Then, the cloud identification typesetting component can typeset the texts in the text blocks, so that the texts in the text blocks conform to the reading logic of the user.
The cloud recognition typesetting component can also segment the original image to obtain the position information of content other than the text, for example the position of an illustration in the original image. Based on the text positions within each block, the ordering of the blocks, and the in-block typesetting, the server can obtain the position information of the sorted text blocks, and then draw the text blocks into the original image based on that information and the positions of background content such as illustrations, to obtain the target image.
In one embodiment, the server obtains the position information of the sorted text blocks and the information of the original image, such as the position information of the background content of the original image, and then may send the position information of the text blocks and the information of the original image to the terminal.
After receiving the text blocks, the terminal can send the text blocks, their position information, and the information of the original image to the drawing component of the server. The drawing component draws the text blocks into the original image based on this information to obtain the target image, in which the drawn text is editable. Optionally, the drawing component may also draw a text box for the text while drawing it, with each line of text corresponding to one text box. The text box can respond to touch operations of the user, for example, taking the text in the text box clicked by the user as the selected text and performing editing operations such as copying and forwarding on it.
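The per-line text boxes that respond to touch operations can be modelled with plain bookkeeping: one rectangle per drawn line, plus a hit test that maps a touch point back to its line. The sizes below are illustrative assumptions, not the drawing component's real geometry:

```python
def layout_text_boxes(lines, origin, line_height=20, char_width=8):
    """Sketch of the per-line text boxes the drawing component keeps:
    one rectangle (x0, y0, x1, y1, text) per drawn line, so a later
    touch can be mapped back to the editable line it landed on."""
    x, y = origin
    return [(x, y + i * line_height,
             x + char_width * len(line), y + (i + 1) * line_height, line)
            for i, line in enumerate(lines)]

def hit_test(boxes, point):
    """Return the text of the box containing `point`, or None."""
    px, py = point
    for x0, y0, x1, y1, text in boxes:
        if x0 <= px < x1 and y0 <= py < y1:
            return text
    return None

boxes = layout_text_boxes(["Hello", "world"], (10, 10))
print(hit_test(boxes, (15, 35)))  # the point falls inside the second line
```

A clicked line found this way becomes the "selected text" on which copy or forward operations run.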
In one embodiment, the process of rendering the target image may be performed by a terminal.
506. And the server sends the target image to the terminal.
507. And the terminal receives the target image and displays an identification result page, wherein the identification result page comprises the target image.
508. When a terminal detects a text translation operation aiming at a target image, displaying a translation result page corresponding to the target image, wherein the translation result page comprises a translation image corresponding to the target image, and the translation image comprises: the translation result corresponding to the text in the target image, and the background content corresponding to the text in the target image.
The translation of the editable text of the target image and the generation of the translation image can be executed by the terminal or the server.
The translation result page may include a sharing control.
The method of this embodiment may further include:
when the terminal detects a sharing operation aiming at the sharing control in the translation result page, displaying a sharing setting page of the target image;
the terminal determines a target sharing style of a target image based on a sharing style selection operation aiming at the sharing setting page;
the terminal determines an image to be shared based on the target sharing style and the target image;
and the terminal shares the image to be shared.
For example, referring to fig. 5b, when the user clicks to share an image, the terminal may send an image sharing request to the picture generating component, and trigger the picture generating component to generate an image to be shared.
When the target sharing style is a sharing translation result, determining that the image to be shared is a translation image corresponding to the target image. The terminal can request a translation image from the picture generation component.
When the target sharing style is sharing a translation comparison result, the image to be shared is a translation comparison image, which includes the content of the target image and the content of the translation image of the target image.
Optionally, the terminal may send an image sharing request for translating and comparing the image to a picture generating component of the server, and trigger the picture generating component to synthesize the target image and the translated image, for example, perform left-right stitching to obtain a translated and compared image.
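The left-right stitching reduces to computing a combined canvas size and two paste offsets; an imaging library such as Pillow would then create the canvas and paste each image at its offset. The helper below is a sketch under that assumption (the function name is illustrative):

```python
def plan_side_by_side(size_a, size_b):
    """Plan a left-right stitch of the target image and its translation
    image: given the two pixel sizes (w, h), return the combined canvas
    size and the paste offset of each image."""
    (wa, ha), (wb, hb) = size_a, size_b
    canvas = (wa + wb, max(ha, hb))  # side by side, tallest image wins
    return canvas, (0, 0), (wa, 0)   # left image at origin, right after it

canvas, off_a, off_b = plan_side_by_side((640, 480), (640, 500))
print(canvas, off_b)  # (1280, 500) (640, 0)
```

Top-bottom stitching would be the symmetric variant, summing heights instead of widths.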
In this embodiment, a shortcut operation is further provided for the image; a flowchart for implementing it is shown in fig. 5c. When the terminal detects that the user views the image, for example through a display operation such as a click on the original image, the terminal sends the original image to the server and triggers the server to start the cloud recognition background, which calls a rough classification service under the cloud recognition service through a big data channel. The cloud recognition background configures different recognition types for images in different scenes, for example a number recognition type for an ID card photo, a code pattern recognition type for an image containing a two-dimensional code, and so on.
After the rough classification service identifies the recognition type corresponding to the picture, the recognition type is returned to the client, and the client displays the image shortcut control corresponding to that type.
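The mapping from the rough-classification result to the shortcut control the client shows can be sketched as a simple lookup; the type and control names below are hypothetical, not identifiers from the patent:

```python
# Hypothetical mapping from coarse recognition type to the shortcut
# control the client displays; names are illustrative only.
SHORTCUT_CONTROLS = {
    "number": "copy-number",        # e.g. ID card photo
    "text": "extract-text",         # e.g. document photo
    "code_pattern": "scan-code",    # e.g. two-dimensional code
}

def shortcut_for(recognition_type, default="none"):
    """Return the shortcut control for a recognition type, falling
    back to `default` when the type is unknown."""
    return SHORTCUT_CONTROLS.get(recognition_type, default)

print(shortcut_for("code_pattern"))  # scan-code
```

New recognition types can then be supported by extending the table, without touching the client's dispatch logic.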
Therefore, this embodiment can provide the user with a target image carrying editable text, giving an experience similar to directly modifying the text in the original image. Moreover, the text processing for the target image can be local: the user can freely select the text to be processed and perform operations such as translation, word selection, and copying, which helps improve the user's image processing experience.
In order to better implement the above method, correspondingly, an image processing device is also provided, wherein the image processing device can be integrated in the terminal, or integrated in the server, or integrated in the terminal and the server.
For example, as shown in fig. 6, the image processing apparatus may include:
A session page display unit 601, configured to display a chat session page of an instant messaging client, where the chat session page includes an original image sent by a chat session user;
a recognition result display unit 602 configured to display a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image including: the text is recognized from the original image, and the background content corresponding to the text is recognized, wherein the text is an editable text, and the background content is the content of the original image except the text;
an editing result display unit 603 configured to display an editing result of the text when an editing operation for the text in the target image is detected.
Optionally, the recognition result display unit is configured to display an image text recognition control based on a control display operation for the original image; and when the triggering operation aiming at the image text recognition control is detected, displaying a recognition result page of the original image.
Optionally, the identification result display unit is configured to, when a display operation for the original image is detected, display an image magnification page of the original image, where the image magnification page includes the original image in a full-screen display state; and when the control display triggering operation aiming at the original image is detected on the image amplification page, displaying an image text recognition control.
Optionally, the image magnification page further includes: and the shortcut operation control corresponds to the target content in the original image and is used for executing the operation indicated by the shortcut operation control on the target content when the shortcut operation control is triggered.
Optionally, the editing result display unit is configured to display a text input control when a modification triggering operation for a target text in the text is detected; determining a modified text corresponding to the target text based on the text input operation aiming at the text input control; when detecting that the text input for the text input control is finished, displaying a modified target image, wherein the target text in the modified target image is replaced by the modified text.
Optionally, the apparatus further comprises:
the first sharing triggering unit is used for displaying a text sharing control and an image sharing control when the sharing triggering operation aiming at the target image is detected;
the first sharing unit is used for sharing the text in the target image when the triggering operation aiming at the text sharing control is detected;
and the second sharing unit is used for sharing the target image when the triggering operation aiming at the image sharing control is detected.
Optionally, the apparatus further comprises: a translation result display unit, configured to display, when a text translation operation for the target image is detected, a translation result page corresponding to the target image, where the translation result page includes a translation image corresponding to the target image, where the translation image includes: the translation result corresponding to the text in the target image, and the background content corresponding to the text in the target image.
Optionally, the apparatus further comprises:
the second sharing triggering unit is used for displaying a sharing setting page of the target image when the image sharing operation aiming at the target image is detected;
the sharing setting unit is used for determining a target sharing style of a target image based on the sharing style selection operation aiming at the sharing setting page;
a determination unit configured to determine an image to be shared based on the target sharing pattern and the target image;
and the third sharing unit is used for sharing the image to be shared.
Optionally, if the target sharing pattern is a sharing identification result, the determining unit is configured to determine the target image as an image to be shared;
optionally, if the target sharing style is a sharing translation result, the determining unit is configured to determine a translation image corresponding to the target image as an image to be shared;
optionally, if the target sharing style is a sharing translation comparison result, the determining unit is configured to obtain a translation comparison image of the target image, where the translation comparison image includes content in the target image and content in a translation image of the target image.
Optionally, the apparatus further comprises: an extracting unit, configured to display a text extraction result page of the target image when a text extraction operation for the target image in the recognition result page is detected, where the text extraction result page includes editable text in the target image.
Optionally, the apparatus further includes a comparison display unit, configured to display a comparison page when a comparison display operation for the text extraction result page is detected after the extraction unit displays the text extraction result page of the target image, where the comparison page includes a first display area and a second display area, the first display area is used to display the target image, and the second display area is used to display the text extraction result of the target image.
Optionally, the apparatus further comprises: a text selection unit, configured to, when a text selection operation for the target image in the first display area is detected, determine a selected text corresponding to the text selection operation in the target image;
and the positioning unit is used for adjusting the text extraction result displayed in the second display area based on the selected text, wherein after adjustment, the text extraction result displayed in the second display area comprises the text extraction result corresponding to the selected text.
Optionally, the identification result display unit includes:
the triggering subunit is used for triggering and acquiring a text recognition result of the original image based on an image text recognition operation for the original image, wherein the text recognition result contains a text recognized from the original image and a text position of the text in the original image;
the replacing subunit is used for replacing the text on the corresponding text position in the original image with the recognized text in the form of editable text to obtain a target image corresponding to the original image;
and the display subunit is used for displaying an identification result page of the original image, wherein the identification result page comprises the target image.
In addition, an embodiment of the present invention further provides a computer device, where the computer device may be a terminal or a server, as shown in fig. 7, which shows a schematic structural diagram of the computer device according to the embodiment of the present invention, and specifically:
the computer device may include components such as a processor 701 of one or more processing cores, memory 702 of one or more computer-readable storage media, a power supply 703, and an input unit 704. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 7 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 701 is a control center of the computer apparatus, connects various parts of the entire computer apparatus using various interfaces and lines, and performs various functions of the computer apparatus and processes data by running or executing software programs and/or modules stored in the memory 702 and calling data stored in the memory 702, thereby monitoring the computer apparatus as a whole. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by running the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the computer device, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 702 may also include a memory controller to provide the processor 701 with access to the memory 702.
The computer device further includes a power supply 703 for supplying power to the various components, and preferably, the power supply 703 is logically connected to the processor 701 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The power supply 703 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 704, the input unit 704 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 701 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and the processor 701 runs the application program stored in the memory 702, thereby implementing various functions as follows:
displaying a chat session page of an instant messaging client, wherein the chat session page comprises an original image sent by a chat session user;
displaying a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image including: the text is recognized from the original image, and the background content corresponding to the text is recognized, wherein the text is an editable text, and the background content is the content of the original image except the text;
when an editing operation for the text in the target image is detected, displaying an editing result of the text.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention further provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the image processing method provided by the embodiment of the present invention.
The system related to the embodiment of the invention can be a distributed system formed by connecting a client and a plurality of nodes (computer equipment in any form in an access network, such as servers and terminals) through a network communication form.
Taking a blockchain system as an example of a distributed system, referring to fig. 8, fig. 8 is an optional structural schematic diagram of a blockchain system to which the distributed system 800 provided by the embodiment of the present invention is applied. The system is formed by a plurality of nodes 801 (computing devices in any form in an access network, such as servers and user terminals) and a client 802; a peer-to-peer (P2P, Peer To Peer) network is formed between the nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine such as a server or a terminal can join and become a node; a node comprises a hardware layer, a middle layer, an operating system layer, and an application layer. The original image, the target image, the translation image of the target image, and the like can be stored in the shared ledger of the blockchain system.
Referring to the functions of each node in the blockchain system shown in fig. 8, the functions involved include:
1) routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) The application is deployed in a blockchain to implement specific services according to actual service requirements. It records data related to the implemented functions to form record data, carries a digital signature in the record data to indicate the source of the task data, and sends the record data to other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
For example, the services implemented by the application include:
2.1) Wallet, for providing the function of electronic money transactions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as a response confirming that the transaction is valid). Of course, the wallet also supports querying the electronic money remaining at an electronic money address.
2.2) Shared ledger, for providing functions such as storage, query, and modification of account data. The record data of operations on the account data is sent to other nodes in the blockchain system; after the other nodes verify its validity, the record data is stored in a temporary block as a response acknowledging that the account data is valid, and a confirmation can be sent to the node that initiated the operation.
2.3) Smart contracts: computerized agreements that can enforce the terms of a contract, implemented as code deployed on the shared ledger and executed when certain conditions are met, for completing automated transactions according to actual business requirements, such as querying the logistics status of goods purchased by a buyer, or transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for trading, and may also execute contracts that process received information.
3) The blockchain comprises a series of blocks that are connected to one another in the chronological order of their generation. Once added to the blockchain, a new block cannot be removed, and the record data submitted by the nodes in the blockchain system is recorded in the blocks.
In this embodiment, the content browsed by the current user and/or an associated user, and/or the record data of that content (such as description information and link information of the content), may be stored in the shared ledger of the blockchain through a node, and a computer device (e.g., a terminal or a server) may obtain the content browsed by the current user and/or the associated user based on the data stored in the shared ledger.
Referring to fig. 9, fig. 9 is an optional schematic diagram of a block structure according to an embodiment of the present invention. Each block includes the hash value of the transaction records stored in the block (the hash value of the block) and the hash value of the previous block, and the blocks are connected by these hash values to form a blockchain. A block may also include information such as a timestamp of the time of block generation. A blockchain is essentially a decentralized database, a string of data blocks associated by cryptography; each data block contains related information used to verify the validity (anti-counterfeiting) of its information and to generate the next block.
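The hash-linked structure of fig. 9 can be sketched in a few lines; this is a toy illustration of the chaining, not a real blockchain implementation:

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Sketch of the block structure in fig. 9: each block stores the
    hash of its own record data, the hash of the previous block, and a
    timestamp; chaining the hashes links the blocks into a chain."""
    block = {
        "records": records,
        "prev_hash": prev_hash,
        "timestamp": time.time(),
    }
    block["hash"] = hashlib.sha256(
        json.dumps(records, sort_keys=True).encode() + prev_hash.encode()
    ).hexdigest()
    return block

genesis = make_block(["genesis"], "0" * 64)
block1 = make_block(["tx: store target image"], genesis["hash"])
print(block1["prev_hash"] == genesis["hash"])  # True: blocks are linked
```

Because each block's hash covers its records and the previous hash, altering any stored record would break every subsequent link, which is what makes removal or tampering detectable.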
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in the image processing method provided in the embodiment of the present invention, the beneficial effects that can be achieved by the image processing method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a computer device, and a storage medium according to embodiments of the present invention, and specific examples have been applied herein to illustrate the principles and implementations of the present invention, and the above descriptions of the embodiments are only used to help understanding the method and the core ideas of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (15)

1. An image processing method, comprising:
displaying a chat session page of an instant messaging client, wherein the chat session page comprises an original image sent by a chat session user;
displaying a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page including a target image including: the text is recognized from the original image, and the background content corresponding to the text is recognized, wherein the text is an editable text, and the background content is the content of the original image except the text;
determining a selected target text in a target image based on a selection operation aiming at the text in the target image, and displaying a text input control, wherein the text input control comprises an input box and an input sub-control, and the selected target text is displayed in the input box;
modifying the target text in the input box based on a text input operation for the input sub-control;
when detecting that the text input for the text input control is finished, taking the text in the input box as the modified text of the target text, and replacing the target text in the target image with the modified text to obtain a modified target image;
and displaying the modified target image.
2. The method according to claim 1, wherein the displaying a recognition result page of the original image based on the image text recognition operation for the original image comprises:
displaying an image text recognition control based on a control display operation for the original image;
and when the triggering operation aiming at the image text recognition control is detected, displaying a recognition result page of the original image.
3. The method of claim 2, wherein displaying an image text recognition control based on a control display operation for the original image comprises:
when the display operation aiming at the original image is detected, displaying an image amplification page of the original image, wherein the image amplification page comprises the original image in a full-screen display state;
and when the control display triggering operation aiming at the original image is detected on the image amplification page, displaying an image text recognition control.
4. The image processing method according to claim 3, wherein the image enlarging page further comprises: and the shortcut operation control corresponds to the target content in the original image and is used for executing the operation indicated by the shortcut operation control on the target content when the shortcut operation control is triggered.
5. The image processing method according to claim 1, further comprising:
when a sharing triggering operation for the target image is detected, displaying a text sharing control and an image sharing control;
when a triggering operation for the text sharing control is detected, sharing the text in the target image;
and when a triggering operation for the image sharing control is detected, sharing the target image.
6. The image processing method according to claim 1, further comprising:
when a text translation operation for the target image is detected, displaying a translation result page corresponding to the target image, wherein the translation result page comprises a translation image corresponding to the target image, and the translation image comprises: a translation result corresponding to the text in the target image, and the background content corresponding to the text in the target image.
7. The image processing method according to claim 6, further comprising:
when an image sharing operation for the target image is detected, displaying a sharing setting page of the target image;
determining a target sharing style of the target image based on a sharing style selection operation for the sharing setting page;
determining an image to be shared based on the target sharing style and the target image;
and sharing the image to be shared.
8. The image processing method according to claim 7, wherein if the target sharing style is sharing a recognition result, determining the image to be shared based on the target sharing style and the target image comprises:
determining the target image as the image to be shared;
if the target sharing style is sharing a translation result, determining the image to be shared based on the target sharing style and the target image comprises:
determining the translation image corresponding to the target image as the image to be shared;
and if the target sharing style is sharing a translation comparison result, determining the image to be shared based on the target sharing style and the target image comprises:
acquiring a translation contrast image of the target image, wherein the translation contrast image comprises the content in the target image and the content in the translation image of the target image.
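The three-way dispatch on the target sharing style in claims 7 and 8 can be sketched as follows. Images are modelled here as simple lists of content items, and the style names are illustrative stand-ins for the patent's recognition-result, translation-result, and translation-comparison styles.

```python
def image_to_share(sharing_style, target_image, translation_image):
    """Selects the image to be shared according to the target sharing style.

    target_image / translation_image are modelled as lists of content items;
    the style strings are illustrative, not from the patent text.
    """
    if sharing_style == "recognition_result":
        # Share the recognition result: the target image itself.
        return target_image
    if sharing_style == "translation_result":
        # Share the translation result: the translation image.
        return translation_image
    if sharing_style == "translation_comparison":
        # The translation contrast image contains the content of both the
        # target image and its translation image.
        return target_image + translation_image
    raise ValueError(f"unknown sharing style: {sharing_style!r}")
```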
9. The image processing method according to claim 1, further comprising:
when a text extraction operation for the target image in the recognition result page is detected, displaying a text extraction result page of the target image, wherein the text extraction result page comprises the editable text in the target image.
10. The image processing method according to claim 9, wherein after displaying the text extraction result page of the target image, the method further comprises:
when a contrast display operation for the text extraction result page is detected, displaying a contrast page, wherein the contrast page comprises a first display area and a second display area, the first display area being used for displaying the target image, and the second display area being used for displaying the text extraction result of the target image.
11. The image processing method according to claim 10, further comprising:
when a text selection operation for the target image in the first display area is detected, determining a selected text corresponding to the text selection operation in the target image;
and adjusting the text extraction result displayed in the second display area based on the selected text, wherein, after adjustment, the text extraction result displayed in the second display area comprises the text extraction result corresponding to the selected text.
12. The image processing method according to claim 1, wherein displaying the recognition result page of the original image based on the image text recognition operation for the original image comprises:
triggering acquisition of a text recognition result of the original image based on the image text recognition operation for the original image, wherein the text recognition result comprises the text recognized from the original image and a text position of the text in the original image;
replacing the original text at the corresponding text position in the original image with the recognized text in the form of editable text, to obtain the target image corresponding to the original image;
and displaying the recognition result page of the original image, wherein the recognition result page comprises the target image.
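The replacement step in claim 12 can be sketched as a small function: each recognized text is written back at its original position in editable form, while the rest of the image (the background content) is left untouched. The image is modelled here as a dict mapping positions to content; all type and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass


@dataclass
class RecognizedText:
    text: str        # text recognized from the original image
    position: tuple  # text position in the original image, e.g. (x, y, w, h)


def build_target_image(original_image, recognition_result):
    """Builds the target image from the original image and the OCR result.

    original_image: dict mapping a position box to its raster content.
    recognition_result: list of RecognizedText items.
    Each recognized position is replaced by editable text; all other
    positions (background content) carry over unchanged.
    """
    target_image = dict(original_image)  # copy; background content survives
    for item in recognition_result:
        target_image[item.position] = {"text": item.text, "editable": True}
    return target_image
```

The design choice mirrors the claim: the recognition result carries both text and position, so replacement is a simple keyed overwrite that preserves everything outside the text regions.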
13. An image processing apparatus characterized by comprising:
a session page display unit, configured to display a chat session page of an instant messaging client, wherein the chat session page comprises an original image sent by a chat session user;
a recognition result display unit, configured to display a recognition result page of the original image based on an image text recognition operation for the original image, the recognition result page comprising a target image, and the target image comprising: text recognized from the original image, and background content corresponding to the recognized text, wherein the text is editable text, and the background content is the content of the original image other than the text;
an editing result display unit, configured to determine a selected target text in the target image based on a selection operation for the text in the target image, and display a text input control, wherein the text input control comprises an input box and an input sub-control, and the selected target text is displayed in the input box;
modify the target text in the input box based on a text input operation for the input sub-control;
when it is detected that text input for the text input control is finished, take the text in the input box as modified text of the target text, and replace the target text in the target image with the modified text to obtain a modified target image;
and display the modified target image.
14. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-12.
15. A storage medium having stored thereon instructions that can be loaded by a processor to perform the steps of the method according to any one of claims 1-12.
CN201911362175.6A 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium Active CN111126301B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911362175.6A CN111126301B (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium
CN202210003009.2A CN114332887A (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911362175.6A CN111126301B (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210003009.2A Division CN114332887A (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111126301A CN111126301A (en) 2020-05-08
CN111126301B true CN111126301B (en) 2022-01-11

Family

ID=70502687

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210003009.2A Pending CN114332887A (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium
CN201911362175.6A Active CN111126301B (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210003009.2A Pending CN114332887A (en) 2019-12-26 2019-12-26 Image processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (2) CN114332887A (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669312A (en) * 2020-05-15 2020-09-15 上海盛付通电子支付服务有限公司 Message interaction method, electronic device and medium
CN111638838A (en) * 2020-05-19 2020-09-08 维沃移动通信有限公司 Text editing method and device and electronic equipment
CN111753108B (en) * 2020-06-28 2023-08-25 平安科技(深圳)有限公司 Presentation generation method, device, equipment and medium
CN113761257A (en) * 2020-09-08 2021-12-07 北京沃东天骏信息技术有限公司 Picture analysis method and device
CN112169326A (en) * 2020-10-19 2021-01-05 网易(杭州)网络有限公司 Picture processing method and device, electronic equipment and storage medium
CN112947923A (en) * 2021-02-25 2021-06-11 维沃移动通信有限公司 Object editing method and device and electronic equipment
CN113300938B (en) * 2021-04-02 2023-02-24 维沃移动通信有限公司 Message sending method and device and electronic equipment
CN115527135A (en) * 2021-06-24 2022-12-27 Oppo广东移动通信有限公司 Content identification method and device and electronic equipment
CN115567473A (en) * 2021-06-30 2023-01-03 北京有竹居网络技术有限公司 Data processing method, device, server, client, medium and product
CN113436297A (en) * 2021-07-15 2021-09-24 维沃移动通信有限公司 Picture processing method and electronic equipment
CN113778303A (en) * 2021-08-23 2021-12-10 深圳价值在线信息科技股份有限公司 Character extraction method and device and computer readable storage medium
CN115857737A (en) * 2021-09-24 2023-03-28 荣耀终端有限公司 Information recommendation method and electronic equipment
CN115016710B (en) * 2021-11-12 2023-06-16 荣耀终端有限公司 Application program recommendation method
CN115081404B (en) * 2022-08-22 2022-11-15 佳瑛科技有限公司 Block chain-based shared document editing management method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463103A (en) * 2014-11-10 2015-03-25 小米科技有限责任公司 Image processing method and device
CN104636740A (en) * 2013-11-08 2015-05-20 株式会社理光 Image processing system and image processing method
CN105739832A (en) * 2016-03-10 2016-07-06 联想(北京)有限公司 Information processing method and electronic equipment
CN106909270A (en) * 2016-07-20 2017-06-30 阿里巴巴集团控股有限公司 Chat data input method, device and communicating terminal
WO2018125003A1 (en) * 2016-12-30 2018-07-05 Turkcell Teknoloji̇ Araştirma Ve Geli̇şti̇rme Anoni̇m Şi̇rketi̇ A translation system
CN109002759A (en) * 2018-06-07 2018-12-14 Oppo广东移动通信有限公司 text recognition method, device, mobile terminal and storage medium
CN109993075A (en) * 2019-03-14 2019-07-09 深圳市六度人和科技有限公司 Chat application session content storage method, system and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103135977A (en) * 2011-12-02 2013-06-05 腾讯科技(深圳)有限公司 Information-inputting method in browser and device using the same
US10496276B2 (en) * 2013-09-24 2019-12-03 Microsoft Technology Licensing, Llc Quick tasks for on-screen keyboards
CN105786295A (en) * 2014-12-19 2016-07-20 阿里巴巴集团控股有限公司 Character input method and device
US20160202865A1 (en) * 2015-01-08 2016-07-14 Apple Inc. Coordination of static backgrounds and rubberbanding
CN108182184B (en) * 2017-12-27 2021-11-02 北京百度网讯科技有限公司 Picture character translation method, application and computer equipment

Also Published As

Publication number Publication date
CN111126301A (en) 2020-05-08
CN114332887A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN111126301B (en) Image processing method and device, computer equipment and storage medium
CN109918345B (en) Document processing method, device, terminal and storage medium
JP7102170B2 (en) Image processing device and control method and program of image processing device
US20150277686A1 (en) Systems and Methods for the Real-Time Modification of Videos and Images Within a Social Network Format
WO2017125024A1 (en) Resource sharing method, terminal and storage medium
JP4897520B2 (en) Information distribution system
US20140195921A1 (en) Methods and systems for background uploading of media files for improved user experience in production of media-based products
JP7407928B2 (en) File comments, comment viewing methods, devices, computer equipment and computer programs
CN111144320A (en) Image processing method and device, computer equipment and storage medium
CN111324535B (en) Control abnormity detection method and device and computer equipment
US20160210347A1 (en) Classification and storage of documents
CN101908218A (en) Editing equipment and method for arranging
CN112749606A (en) Text positioning method and device
US11567635B2 (en) Online collaborative document processing method and device
CN113158619B (en) Document processing method and device, computer readable storage medium and computer equipment
US11887390B2 (en) Information processing apparatus, information processing system, information processing method, and non-transitory recording medium
CN106104531A (en) Automatically numerical map is embedded in software application
CN113591657B (en) OCR layout recognition method and device, electronic equipment and medium
JP4430490B2 (en) Data entry device, control method therefor, and program
CN113268232B (en) Page skin generation method and device and computer readable storage medium
US11206336B2 (en) Information processing apparatus, method, and non-transitory computer readable medium
JP7069631B2 (en) Information processing equipment and information processing programs
CN113434679A (en) Image-text content publishing method and device
JP2023137077A (en) Information processing device, information processing system, information processing method, and program
CN105701527A (en) Template identification method and template identification device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant