CN110930410A - Image processing method, server and terminal equipment - Google Patents

Image processing method, server and terminal equipment

Info

Publication number
CN110930410A
CN110930410A (application CN201911033623.8A)
Authority
CN
China
Prior art keywords
server
target image
target
image
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911033623.8A
Other languages
Chinese (zh)
Other versions
CN110930410B (en)
Inventor
Zhang Ke (张可)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911033623.8A priority Critical patent/CN110930410B/en
Publication of CN110930410A publication Critical patent/CN110930410A/en
Priority to PCT/CN2020/123343 priority patent/WO2021083058A1/en
Application granted granted Critical
Publication of CN110930410B publication Critical patent/CN110930410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention provides an image processing method, a server and a terminal device, relating to the field of terminal technology, which can solve the problem that a user's privacy information is leaked when the user triggers the terminal device to perform a screen-recognition operation. The scheme comprises the following steps: receiving a target image sent by the terminal device, where the target image is an image captured by the terminal device from its screen; dividing the target image into M sub-images according to a first preset rule, where M is an integer greater than 1; and sending the M sub-images to a second server. The scheme is applied to screen-recognition scenarios based on a screen-recognition function.

Description

Image processing method, server and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to an image processing method, a server and terminal equipment.
Background
As terminal devices become increasingly intelligent, they support more and more functions.
Currently, a user may trigger a terminal device to start a screen-recognition function in order to recognize text in an image displayed in the screen area of the terminal device. Specifically, when the user triggers the terminal device to start the screen-recognition function, the terminal device may capture the image currently displayed in the screen area and send it to a vendor server of the terminal device. After receiving the image, the vendor server may forward it to a third-party server (for example, a server of the developer of the screen-recognition function). The third-party server may recognize the image by optical character recognition (OCR) to obtain the text information in the image and return that text information to the vendor server, which then sends it to the terminal device. After receiving the text information, the terminal device may perform entity recognition on it and display the result, so that the user can view the recognition result on the terminal device.
However, in the above process, the image captured by the terminal device is recognized by the third-party server, so if that image contains the user's privacy information, the privacy information may be leaked.
Disclosure of Invention
The embodiment of the invention provides an image processing method, a server and terminal equipment, and aims to solve the problem that privacy information of a user is leaked when the user triggers the terminal equipment to perform screen identification operation.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method applied to a first server. The method includes: receiving a target image sent by a terminal device, dividing the target image into M sub-images according to a first preset rule, and then sending the M sub-images to a second server. The target image is an image captured by the terminal device from its screen, and M is an integer greater than 1.
In a second aspect, an embodiment of the present invention provides an image processing method applied to a terminal device. The method includes: sending a target image to a first server, and sending first indication information to the first server. The target image is an image captured by the terminal device from its screen; the first indication information indicates whether the target image includes the user's privacy information and is used by the first server to determine whether to segment the target image.
In a third aspect, an embodiment of the present invention provides a server, which may include a receiving module, a processing module, and a sending module. The receiving module is configured to receive a target image sent by a terminal device, where the target image is an image captured by the terminal device from its screen via the screen-recognition function; the processing module is configured to divide the target image received by the receiving module into M sub-images according to a first preset rule, where M is an integer greater than 1; and the sending module is configured to send the M sub-images divided by the processing module to a second server.
In a fourth aspect, an embodiment of the present invention provides a terminal device, where the terminal device may include a sending module. The sending module is used for sending the target image to the first server and sending first indication information to the first server; the target image is an image acquired by the terminal device through a screen, the first indication information is used for indicating whether the target image comprises privacy information of a user, and the first indication information is used for the first server to determine whether to segment the target image.
In a fifth aspect, an embodiment of the present invention provides a server including a processor, a memory, and a computer program stored on the memory and runnable on the processor; when executed by the processor, the computer program implements the steps of the image processing method of the first aspect.
In a sixth aspect, an embodiment of the present invention provides a terminal device including a processor, a memory, and a computer program stored on the memory and runnable on the processor; when executed by the processor, the computer program implements the steps of the image processing method of the second aspect.
In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the image processing method of the first aspect or the second aspect.
In the embodiment of the present invention, the first server may receive a target image sent by the terminal device (the target image being an image captured by the terminal device from its screen), divide the target image into M sub-images (M being an integer greater than 1) according to a first preset rule, and then send the M sub-images to a second server. Because the first server divides the target image, acquired by the terminal device through screen recognition, into a plurality of sub-images before sending them to the second server, the second server cannot obtain the complete target image: it can only recognize the sub-images separately, obtaining separate pieces of text information rather than the complete text information in the target image. This prevents the user's privacy information in the target image from being leaked and ensures the security of the user's privacy information.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 3 is one of schematic interface diagrams of an application of an image processing method according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 5 is a third schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 6 is a second schematic interface diagram of an application of an image processing method according to an embodiment of the present invention;
fig. 7 is a third schematic interface diagram of an application of an image processing method according to an embodiment of the present invention;
FIG. 8 is a fourth schematic diagram illustrating an image processing method according to an embodiment of the present invention;
FIG. 9 is a fourth schematic interface diagram of an application of the image processing method according to the embodiment of the present invention;
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 12 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 13 is a hardware diagram of a server according to an embodiment of the present invention;
fig. 14 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments derived by a person of ordinary skill in the art from the embodiments herein without creative effort shall fall within the protection scope of the present application.
The term "and/or" herein describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first indication information and the second indication information are used to distinguish different indication information, and are not used to describe a specific order of the indication information.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to serve as examples, illustrations or descriptions. Any embodiment or design described as "exemplary" or "for example" is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
The embodiments of the invention provide an image processing method, a server and a terminal device. A first server may receive a target image sent by the terminal device (the target image being an image captured by the terminal device from its screen), divide the target image into M (an integer greater than 1) sub-images according to a first preset rule, and then send the M sub-images to a second server. Because the second server never receives the complete target image, it can only recognize the sub-images separately and obtain separate pieces of text information, not the complete text information in the target image. This prevents the user's privacy information in the target image from being leaked and ensures the security of the user's privacy information.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The terminal device in the embodiment of the invention may be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile terminal may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like; the embodiment of the present invention is not particularly limited.
The execution subject of the image processing method provided by the embodiment of the present invention may be the terminal device, or a functional module and/or functional entity in the terminal device capable of implementing the method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited in this respect. The following takes a terminal device as an example to illustrate the image processing method provided by the embodiment of the present invention.
In the embodiment of the present invention, when a user triggers a terminal device to execute the screen-recognition function (for example, through an input such as pressing the screen of the terminal device with two fingers), the terminal device may capture the image currently displayed on its screen (i.e., the image acquired by the terminal device through screen recognition) and send it to a first server. After receiving the image, the first server may divide it into a plurality of sub-images according to a certain rule and send those sub-images to a second server. After receiving them, the second server may recognize the sub-images to obtain a plurality of pieces of text information and send them back to the first server. After synthesizing them into target text information (i.e., the text information in the image captured by the terminal device through screen recognition), the first server sends the target text information to the terminal device, so that the terminal device can display it and show the user the result of executing the screen-recognition function.
In this process, the first server divides the image acquired by the terminal device through screen recognition into a plurality of sub-images before sending them to the second server. The second server therefore cannot obtain the complete image: it can only recognize the sub-images separately to obtain separate pieces of text information, and cannot obtain the complete text information in the image. This prevents the user's privacy information in the image from being leaked and ensures the security of the user's privacy information.
The following describes an exemplary image processing method according to an embodiment of the present invention with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides an image processing method, which may include S201 to S205 described below.
S201, the terminal device sends the target image to the first server.
The target image can be an image acquired by the terminal device through the screen.
Specifically, in the embodiment of the present invention, when a user needs to obtain the text information in an image displayed on the screen of a terminal device, the user may trigger the terminal device to execute the screen-recognition function through an input (for example, pressing the screen of the terminal device with two fingers). The terminal device may then capture the image currently displayed on its screen through the screen-recognition function (which may also be described as the terminal device taking a screenshot of the currently displayed image), i.e., the target image, and send the captured target image to the first server.
Optionally, in this embodiment of the present invention, the first server may be a server of a manufacturer of the terminal device.
S202, the first server receives the target image.
S203, the first server divides the target image into M sub-images according to a first preset rule.
Wherein M may be an integer greater than 1. I.e. the first server may split the target image into a plurality of sub-images.
Optionally, in the embodiment of the present invention, the first preset rule may be any possible segmentation method such as "left-right segmentation", "up-down segmentation", and "mesh segmentation", and may specifically be determined according to an actual use requirement, which is not limited in the embodiment of the present invention.
The above S203 is exemplarily described with reference to fig. 3.
Illustratively, (a) in fig. 3 is a schematic diagram of a target image (shown as 31 in (a) of fig. 3). Assuming M is 2, the first server may divide the image into a first sub-image (shown as 32 in (b) of fig. 3) and a second sub-image (shown as 33 in (b) of fig. 3) according to the preset "up-down segmentation" rule (i.e., a first preset rule). Alternatively, the first server may divide the image into a third sub-image (shown as 34 in (c) of fig. 3) and a fourth sub-image (shown as 35 in (c) of fig. 3) according to the preset "left-right segmentation" rule (i.e., a first preset rule).
Optionally, in the embodiment of the present invention, after the first server divides the target image into M sub-images, the first server may sequentially set an Identifier (ID) for each sub-image in the M sub-images according to the dividing order. The ID set by the first server for each sub-image may be unique, that is, each ID may uniquely indicate one sub-image.
In the embodiment of the present invention, after the M sub-images are processed (for example, after the second server described below recognizes the M sub-images to obtain M pieces of text information), the corresponding IDs do not change.
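As a minimal sketch of the segmentation in S203 and the ID assignment described above (the rule names, the dictionary layout and the pixel-grid representation are illustrative assumptions, not part of the patent), the first preset rule could be applied as follows:

```python
def split_image(pixels, rule, m=2):
    """Split a 2-D pixel grid into m sub-images by a preset rule.

    `pixels` is a list of rows; `rule` is "up-down" or "left-right"
    (illustrative names for the first preset rule; "mesh" splitting
    is omitted here). Each sub-image gets a sequential two-digit ID
    in split order, mirroring the unique IDs the first server sets.
    """
    h, w = len(pixels), len(pixels[0])
    subs = []
    for i in range(m):
        if rule == "up-down":
            rows = pixels[i * h // m:(i + 1) * h // m]
        elif rule == "left-right":
            rows = [row[i * w // m:(i + 1) * w // m] for row in pixels]
        else:
            raise ValueError(f"unknown rule: {rule}")
        subs.append({"id": f"{i + 1:02d}", "pixels": rows})
    return subs
```

Because each ID uniquely identifies one sub-image and is unchanged by downstream processing, the first server can later restore the original order regardless of how the results come back.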
S204, the first server sends the M sub-images to the second server.
S205, the second server receives the M sub-images.
Optionally, in this embodiment of the present invention, the second server may be a third-party server. For example, the second server may be a server of a developer who develops the screen recognition function.
Optionally, in this embodiment of the present invention, when the first server sends the M sub-images to the second server, it may send the sub-images in a random order, or it may send them in sequence according to their IDs.
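The two sending orders described above can be sketched as follows (the function name and sub-image structure are assumptions for illustration):

```python
import random

def order_for_sending(sub_images, randomize=True):
    """Return the M sub-images in the order the first server sends them.

    randomize=True models the random sending order: the second server
    then cannot infer the original layout from the arrival order. With
    randomize=False the sub-images are sent in ID sequence instead.
    """
    batch = list(sub_images)  # copy, never mutate the caller's list
    if randomize:
        random.shuffle(batch)
    else:
        batch.sort(key=lambda s: s["id"])
    return batch
```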
In the embodiment of the invention, the first server can divide the target image acquired by the terminal device through screen recognition into a plurality of sub-images before sending them to the second server. The second server therefore cannot obtain the complete target image: it can only recognize the sub-images separately to obtain separate pieces of text information, not the complete text information in the target image. This prevents the user's privacy information in the target image from being leaked and ensures the security of the user's privacy information.
Optionally, in this embodiment of the present invention, before the first server divides the target image into M sub-images according to a first preset rule, the first server may further receive first indication information sent by the terminal device, and when the first indication information indicates that the target image includes the privacy information of the user, the first server may divide the target image into M sub-images according to the first preset rule.
For example, in conjunction with fig. 2, as shown in fig. 4, before S203, the image processing method according to the embodiment of the present invention may further include S206-S207 described below. Specifically, S203 may be implemented as S203a described below.
S206, the terminal device sends the first indication information to the first server.
S207, the first server receives the first indication information.
The first indication information may be used to indicate whether the target image includes the privacy information of the user. The first indication information may also be used for the first server to determine whether to segment the target image, that is, after the first server receives the first indication information, the first server may determine whether to segment the target image according to the first indication information.
Specifically, in a case where the first indication information indicates that the privacy information of the user is included in the target image, in order to prevent the privacy information of the user in the target image from being leaked, the first server may determine to segment the target image; in a case where the first indication information indicates that the privacy information of the user is not included in the target image, in order to avoid the first server performing an unnecessary division operation, the first server may determine not to divide the target image.
S203a, in case that the first indication information indicates that the target image includes the privacy information of the user, the first server divides the target image into M sub-images according to a first preset rule.
In the embodiment of the present invention, when the first indication information indicates that the target image includes the privacy information of the user, the first server may divide the target image into M sub-images according to a first preset rule, and send the M sub-images obtained by the division to the second server.
In the embodiment of the invention, in the case that the first indication information indicates that the privacy information of the user is not included in the target image, the first server may directly send the target image to the second server.
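The branch in S203a and the paragraph above can be sketched as follows (the callback-based structure, names and parameters are assumptions for illustration, not the patent's method):

```python
def handle_target_image(image, includes_privacy, split_fn, send_fn, m=2):
    """First-server handling of a received target image.

    `includes_privacy` models the first indication information sent by
    the terminal device: if the target image includes the user's
    privacy information, split it into m sub-images before forwarding
    each one to the second server; otherwise forward the whole image
    directly, avoiding an unnecessary segmentation operation.
    """
    if includes_privacy:
        for sub in split_fn(image, m):
            send_fn(sub)
    else:
        send_fn(image)
```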
It should be noted that the execution order of S201-S202 and S206-S207 is not limited in the embodiments of the present invention. That is, S201-S202 may be executed before S206-S207, S206-S207 may be executed before S201-S202, or the two may be executed simultaneously.
In the embodiment of the present invention, because the terminal device sends the first server first indication information indicating whether the target image includes the user's privacy information, the first server divides the target image into a plurality of sub-images only when the first indication information indicates that the target image includes the user's privacy information, and does not divide it otherwise. This both prevents the user's privacy information from being leaked and avoids unnecessary segmentation operations by the first server.
Optionally, in the embodiment of the present invention, after the second server receives the M sub-images sent by the first server, the second server may recognize the M sub-images to obtain M pieces of text information and send them to the first server. The first server may then synthesize the M pieces of text information into target text information according to a preset rule and send the target text information to the terminal device, so that the terminal device can display it and show the user the screen-recognition result.
Illustratively, in conjunction with fig. 3, as shown in fig. 5, after S205 described above, the image processing method provided by the embodiment of the present invention may further include S208-S2013 described below.
S208, the second server identifies the M sub-images to obtain M text messages.
In the embodiment of the present invention, after the second server receives the M sub-images sent by the first server, the second server may recognize each of the M sub-images separately to obtain the text information in each sub-image, thereby obtaining M pieces of text information.
Optionally, in this embodiment of the present invention, the second server may recognize each sub-image through an Optical Character Recognition (OCR) technique, so as to obtain text information in each sub-image. OCR techniques may include image processing techniques and pattern recognition techniques, among others. Specifically, the second server may process each sub-image by using an image processing technique, and then the second server recognizes each processed sub-image by using a pattern recognition technique, so as to obtain text information in each sub-image.
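A minimal sketch of the per-sub-image recognition in S208 (the OCR engine is injected as a plain callable so the sketch stays self-contained; in practice it would be an OCR engine such as Tesseract, and the function name and data layout are assumptions):

```python
def recognize_sub_images(sub_images, ocr):
    """Run OCR on each of the M sub-images independently.

    `ocr` is any callable mapping a sub-image's pixels to text. The ID
    assigned at split time travels with each result unchanged, since
    processing must not alter the IDs. Note the second server only
    ever sees isolated sub-images, never the complete target image.
    """
    return [{"id": s["id"], "text": ocr(s["pixels"])} for s in sub_images]
```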
S209, the second server sends the M pieces of text information to the first server.
S2010, the first server receives the M pieces of text information.
S2011, the first server synthesizes the M pieces of text information into target text information according to a second preset rule corresponding to the first preset rule.
The target text information may be text information in a target image. It can be understood that, in the embodiment of the present invention, the first server synthesizes the M pieces of text information into the target text information according to the second preset rule, which is the text information in the target image.
In the embodiment of the present invention, after the first server receives the M text messages, the first server may synthesize, according to a second preset rule, the M text messages into the target text message according to an ID sequence corresponding to each text message in the M text messages. And the second preset rule adopted when the first server synthesizes the text information corresponds to the first preset rule adopted when the first server divides the target image. For example, assuming that the first preset rule is a left-right segmentation preset rule, if the first server segments the target image into M sub-images according to the left-right segmentation preset rule, the first server may further synthesize the M text messages into the target text message according to a left-right synthesis preset rule (i.e., a second preset rule) corresponding to the left-right segmentation preset rule; or, assuming that the first preset rule is a preset "vertical segmentation" rule, if the first server segments the target image into M sub-images according to the preset "vertical segmentation" rule, the first server may further synthesize the M pieces of text information into the target text information according to a preset "vertical synthesis" rule (i.e., a second preset rule) corresponding to the preset "vertical segmentation" rule. In this way, it can be ensured that the synthesized target text information is completely consistent with the text information in the target image (e.g., the content, the form, etc. are completely consistent).
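As a sketch of how the second preset rule might mirror the first, the following Python function joins the recognized pieces in ID order, with the join mode chosen by the segmentation rule. The rule names and the choice of separators are illustrative assumptions; the patent only requires that the synthesis rule correspond to the segmentation rule.

```python
def synthesize(texts: dict, rule: str) -> str:
    """Synthesize M pieces of text information into the target text information.

    `texts` maps each sub-image ID (e.g. "01", "02") to the text recognized
    in that sub-image.  IDs were assigned in segmentation order, so sorting
    by ID restores that order.  The join mirrors the segmentation rule:
    a "left-right" split cuts a line of text in two, so pieces are joined
    directly; a "vertical" split cuts between rows, so pieces are joined
    with newlines.  (The separator choice is an illustrative assumption.)
    """
    ordered = [texts[key] for key in sorted(texts)]
    separator = "" if rule == "left-right" else "\n"
    return separator.join(ordered)
```

For example, `synthesize({"02": "World", "01": "Hello "}, "left-right")` reorders the pieces by ID before joining, so the result reads as the original line of text.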
S207-S2011 described above are exemplarily explained below with reference to fig. 6.
Illustratively, in conjunction with fig. 3, assume that M is 2 and that the first server segments the target image 31 into the first sub-image 34 (ID: 01) and the second sub-image 35 (ID: 02) according to the preset "left-right segmentation" rule. After the first server sends the first sub-image 34 and the second sub-image 35 to the second server, as shown in fig. 6, the second server may identify the first sub-image 34 to obtain the first text information (ID: 01, shown as 61 in (a) in fig. 6) and identify the second sub-image 35 to obtain the second text information (ID: 02, shown as 62 in (a) in fig. 6). Then, the second server may send the first text information 61 and the second text information 62 to the first server. After receiving them, the first server may synthesize the target text information (shown as 63 in (b) in fig. 6) according to the preset "left-right synthesis" rule (i.e., the second preset rule) and according to the sequence of the IDs of the first text information 61 and the second text information 62. It can be understood that the target text information 63 is the text information in the target image 31 shown in (a) in fig. 3.
It should be noted that, in the embodiment of the present invention, the ID of a sub-image is the same as the ID of the text information obtained by identifying that sub-image; that is, the ID of a sub-image is the same as the ID of the text information in that sub-image. Illustratively, as shown in fig. 3 and fig. 6, the ID of the first sub-image 34 is the same as the ID of the first text information 61, and the ID of the second sub-image 35 is the same as the ID of the second text information 62. Therefore, the target image can be segmented into M sub-images in a certain sequence, and the M pieces of text information obtained by identifying the M sub-images can then be synthesized into the target text information in the same sequence, thereby ensuring that the synthesized target text information is completely consistent with the text information in the target image.
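The order-preserving property just described, segmenting in a certain sequence and synthesizing in the same sequence, can be checked with a small round-trip sketch. Modeling an image as a grid of pixel values (an assumption for illustration only), a left-right split tags each strip with a position-encoding ID, and merging in sorted-ID order restores the original:

```python
def split_left_right(img, m):
    """Divide a pixel grid into m vertical strips, tagging each strip with
    a two-digit ID that records its left-to-right position."""
    width = len(img[0])
    bounds = [round(i * width / m) for i in range(m + 1)]
    return {f"{i + 1:02d}": [row[bounds[i]:bounds[i + 1]] for row in img]
            for i in range(m)}

def merge_left_right(parts):
    """Reassemble the strips in ID order; because splitting and merging use
    the same sequence, the round trip reproduces the original image."""
    ordered = [parts[key] for key in sorted(parts)]
    rows = len(ordered[0])
    return [sum((strip[r] for strip in ordered), []) for r in range(rows)]
```

For a 2x4 grid split with m = 2, strip "01" holds the left half and strip "02" the right half, and `merge_left_right(split_left_right(img, 2))` equals the original grid.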
S2012, the first server sends the target text information to the terminal device.
S2013, the terminal device receives the target text information.
Optionally, in an embodiment of the present invention, in a possible implementation manner, after the terminal device receives the target text information sent by the first server, the terminal device may display the target text information, so that the user can view the target text information through the terminal device and operate on it.
Illustratively, in conjunction with fig. 6 described above, as shown in fig. 7, after the terminal device receives the target text information sent by the first server, the terminal device may display the target text information in the form of text identifiers (a plurality of text identifiers are displayed as indicated by 71 in fig. 7; the text information indicated by these text identifiers is the same as the target text information indicated by 63 in (b) in fig. 6).
Optionally, in this embodiment of the present invention, in the possible implementation manner described above, when the terminal device displays the target text information, at least one operation control may also be displayed (as shown by 72 in fig. 7). Through an input on certain content in the target text information and on a certain operation control among the at least one operation control, the user can trigger the terminal device to execute the action corresponding to that content and that operation control.
Illustratively, as shown in fig. 7, the terminal device displays a plurality of text identifiers (shown as 71 in fig. 7, which may be used to indicate the target text information) and a plurality of operation controls (shown as 72 in fig. 7). By inputting on a certain text identifier among the text identifiers shown in 71 and a certain operation control among the operation controls shown in 72, the user can trigger the terminal device to execute the action corresponding to the text information indicated by that text identifier and that operation control. For example, after the user inputs on the text identifier "Chinese XX Bank" and the operation control "search", that is, after the terminal device receives the user's input, the terminal device may, in response to the input, call a search application installed in the terminal device and search for information related to "Chinese XX Bank" using "Chinese XX Bank" as the search keyword; after finding the information, the terminal device may display it so that the user can view it.
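The pairing of a selected text identifier with an operation control can be modeled as a small dispatch table. The control names and returned action strings below are hypothetical placeholders; on a real device each action would launch the corresponding application (e.g., the search application in the example above).

```python
def handle_selection(text: str, control: str) -> str:
    """Dispatch a (selected text, operation control) pair to an action.
    The returned strings are placeholders for launching device applications."""
    actions = {
        "search": lambda t: f"search-app: query={t!r}",   # e.g. call the search app
        "copy": lambda t: f"clipboard: {t!r}",            # e.g. copy to clipboard
    }
    if control not in actions:
        raise ValueError(f"unsupported operation control: {control}")
    return actions[control](text)
```

A dispatch table keeps the mapping from operation controls to actions in one place, so adding a new control (e.g. "translate") does not change the input-handling code.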
In another possible implementation manner, after the terminal device receives the target text information sent by the first server, the terminal device may extract certain specific nouns from the target text information and perform entity recognition on those specific nouns; the terminal device may then display the result of the entity recognition for the user to view.
In the embodiment of the present invention, a specific noun may be a person's name, a place name, an institution name, a proper noun, or another noun. Other nouns may include the names of everyday articles related to food, clothing, housing, travel, and the like, for example "chopsticks" or "backpack".
For example, assuming that the target text information is "I like a down jacket", after the terminal device receives "I like a down jacket" sent by the first server, the terminal device may extract the noun "down jacket" from it and then perform entity recognition on "down jacket". For example, the terminal device may call a shopping application installed in the terminal device to search for product links related to "down jacket"; after finding such links, the terminal device may display them so that the user can view them.
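A minimal sketch of the extract-then-recognize step, assuming a dictionary-based matcher; the lexicon entries are hypothetical, and a production system would more likely use a trained named-entity recognizer, which the patent does not specify.

```python
# Hypothetical lexicon mapping specific nouns to entity categories.
LEXICON = {
    "down jacket": "commodity",
    "Chinese XX Bank": "institution",
}

def extract_specific_nouns(text: str):
    """Scan the recognized target text information for known specific nouns
    and return each match with its entity category."""
    return [(noun, category) for noun, category in LEXICON.items() if noun in text]
```

Each returned category would then decide which application handles the noun, e.g. a shopping application for a commodity such as "down jacket".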
In the embodiment of the invention, in the process in which the user triggers the terminal device to execute the screen recognition function, the first server segments the target image into M sub-images according to the preset rule and then sends the M sub-images to the second server for identification. Thus, after the first server receives the M pieces of text information that the second server obtained by identifying the M sub-images, the first server can synthesize the M pieces of text information into complete target text information according to the corresponding preset rule, and after the first server sends the target text information to the terminal device, the terminal device can obtain the complete text information in the target image. In this way, it can be ensured that the terminal device executes the screen recognition function normally, and the privacy information of the user can be prevented from being leaked during execution of the screen recognition function.
Furthermore, by executing the screen recognition function, the terminal device can display the target text information to the user so that the user can operate on the target text information, which solves the problem in the conventional technology that a user cannot directly operate on the text information in an image. Therefore, user experience can be improved, and human-computer interaction performance is improved.
Optionally, in this embodiment of the present invention, before the terminal device sends the first indication information to the first server, the terminal device may first display prompt information to prompt the user to determine whether the target image includes the privacy information of the user. The terminal device may then generate the first indication information according to the user's input on the prompt information.
For example, in conjunction with fig. 4, as shown in fig. 8, before S206, the image processing method provided in the embodiment of the present invention may further include S2014-S2016 described below.
S2014, the terminal device displays the target prompt information.
The target prompt information may be used to prompt the user to determine whether the target image includes the privacy information of the user.
Optionally, in this embodiment of the present invention, the target prompt information may include first prompt content and a first prompt option. The first prompting content may be used to prompt the user to determine whether the user's private information is included in the target image. The first prompt option may include a first option and a second option; the first option may be used to determine that the target image includes the user's private information, that is, the user's input of the first option may be used to determine that the target image includes the user's private information; the second option may be used to determine that the user's private information is not included in the target image, i.e. user input of the second option may be used to determine that the user's private information is not included in the target image.
Illustratively, as shown in fig. 9, the target prompt information may be "whether privacy information is included" (as shown at 91 in fig. 9), the first option may be a "yes" option (as shown at 92 in fig. 9), and the second option may be a "no" option (as shown at 93 in fig. 9).
Optionally, in the embodiment of the present invention, the above-mentioned S2014 may be specifically implemented by the following S2014 a.
S2014a, the terminal device displays the target prompt information in a case of receiving the target content.
The target content may be the first input of the user, or may be the second indication information sent by the first server. The first input may be used to trigger the terminal device to execute the screen recognition function, and the second indication information may be used to indicate that the first server has received the target image.
Specifically, in the embodiment of the present invention, in one possible implementation manner, in the case that the target content is the first input, after the user triggers the terminal device to execute the screen recognition function (that is, the user performs the first input, for example, by pressing the screen of the terminal device with two fingers), the terminal device may collect the image currently displayed on its screen (that is, the target image) and display the target prompt information to prompt the user to determine whether the target image includes the privacy information of the user. In another possible implementation manner, in the case that the target content is the second indication information, after the user triggers the terminal device to execute the screen recognition function, the terminal device may collect the image currently displayed on its screen (that is, the target image) and send the target image to the first server. After the first server receives the target image, the first server may send the terminal device the second indication information indicating that the first server has received the target image; after the terminal device receives the second indication information, the terminal device may display the target prompt information to prompt the user to determine whether the target image includes the privacy information of the user.
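The two implementation manners can be contrasted in a small sketch: on the user's first input the device captures the screen and prompts immediately, while on the second indication information it only prompts, the target image having already been captured and sent. The event names and the decision structure are illustrative assumptions, not the patent's interface.

```python
from dataclasses import dataclass

@dataclass
class PromptDecision:
    capture_screen: bool   # collect the image currently displayed on the screen
    show_prompt: bool      # display the target prompt information

def on_target_content(content_type: str) -> PromptDecision:
    """Decide what the terminal device does when target content arrives."""
    if content_type == "first_input":          # user triggered screen recognition
        return PromptDecision(capture_screen=True, show_prompt=True)
    if content_type == "second_indication":    # first server acknowledged receipt
        return PromptDecision(capture_screen=False, show_prompt=True)
    return PromptDecision(capture_screen=False, show_prompt=False)
```

In both branches the prompt is shown; they differ only in whether the screen capture happens locally before or has already been sent to the first server.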
S2015, the terminal device receives a target input from the user on the target prompt information.
Optionally, in this embodiment of the present invention, the target input may be a first input or a second input, where the first input is used to determine that the target image includes the privacy information of the user, and the second input is used to determine that the target image does not include the privacy information of the user.
Illustratively, the first input may be a user input of the "yes" option shown at 92 in FIG. 9 and the second input may be a user input of the "no" option shown at 93 in FIG. 9.
It should be noted that the embodiment of the present invention does not limit the input form of the target input, which may be determined according to actual use requirements.
S2016, the terminal device generates the first indication information according to the target input.
In the embodiment of the invention, after the terminal device receives the first input of the user on the target prompt information, the first indication information generated by the terminal device according to the first input may be used to indicate that the target image includes the privacy information of the user. After the terminal device receives the second input of the user on the target prompt information, the first indication information generated by the terminal device according to the second input may be used to indicate that the target image does not include the privacy information of the user.
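S2016 reduces to a small mapping from the target input to the first indication information; the dictionary shape of the indication and the "yes"/"no" input values are assumptions for illustration.

```python
def generate_first_indication(target_input: str) -> dict:
    """Generate the first indication information from the user's choice on
    the target prompt information ("yes" = first input, "no" = second input)."""
    if target_input == "yes":
        return {"contains_privacy_info": True}    # first server will segment the image
    if target_input == "no":
        return {"contains_privacy_info": False}   # first server skips segmentation
    raise ValueError(f"unrecognized target input: {target_input!r}")
```

The first server reads this flag to decide between segmenting the target image and processing it whole, as described in the surrounding text.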
In the embodiment of the present invention, since the terminal device may generate different first indication information according to different inputs of the user on the target prompt information, after the terminal device sends the first indication information to the first server, the first server may perform different operations according to the indication of the first indication information; that is, the first server may accurately determine whether to segment the target image. Therefore, the privacy information of the user can be prevented from being leaked, and the first server can be prevented from performing unnecessary segmentation operations.
In the embodiment of the present invention, the image processing methods shown in the above method drawings are each exemplarily described with reference to one drawing in the embodiments of the present invention. In specific implementation, the image processing methods shown in the above method drawings may also be implemented in combination with any other combinable drawings illustrated in the above embodiments, and details are not described herein again.
As shown in fig. 10, an embodiment of the present invention provides a server 400, and the server 400 may include a receiving module 401, a processing module 402, and a sending module 403. The receiving module 401 may be configured to receive a target image sent by a terminal device, where the target image is an image acquired by the terminal device through a screen; a processing module 402, configured to divide the target image received by the receiving module 401 into M sub-images according to a first preset rule, where M is an integer greater than 1; the sending module 403 may be configured to send the M sub-images divided by the processing module 402 to the second server.
Optionally, in this embodiment of the present invention, the receiving module 401 is further configured to receive first indication information sent by the terminal device before the processing module 402 divides the target image into M sub-images according to a first preset rule, where the first indication information is used to indicate whether the target image includes privacy information of a user; the processing module 402 may be specifically configured to, when the first indication information received by the receiving module 401 indicates that the target image includes the privacy information of the user, divide the target image into M sub-images according to a first preset rule.
Optionally, in this embodiment of the present invention, the receiving module 401 may be further configured to receive M pieces of text information sent by the second server after the sending module 403 sends the M sub-images to the second server, where the M pieces of text information are obtained by the second server identifying the M sub-images; the processing module 402 may be further configured to synthesize, according to a second preset rule corresponding to the first preset rule, the M pieces of text information received by the receiving module 401 into target text information, where the target text information is the text information in the target image; the sending module 403 may be further configured to send the target text information synthesized by the processing module 402 to the terminal device.
The embodiment of the invention provides a server, which can receive a target image sent by a terminal device, segment the target image into M sub-images according to a first preset rule, and then send the M sub-images to a second server. The target image is an image acquired by the terminal device through the screen, and M is an integer greater than 1. According to this scheme, the first server can segment the target image acquired by the terminal device through the screen during screen recognition into a plurality of sub-images and send them to the second server, so that the second server cannot obtain the complete target image; the second server can only identify the plurality of sub-images respectively to obtain a plurality of pieces of text information and cannot obtain the complete text information in the target image. Further, the privacy information of the user in the target image can be prevented from being leaked, and the security of the privacy information of the user is ensured.
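The privacy property claimed above can be made concrete with an orchestration sketch: the first server splits the image, hands only individual pieces to the second server's recognizer, and merges the returned texts, so no single remote call ever exposes the whole target image. The `split`, `ocr_remote`, and `merge` callables are caller-supplied stand-ins, not APIs from the patent.

```python
def first_server_flow(target_image, split, ocr_remote, merge):
    """First-server orchestration: segment, recognize remotely piece by
    piece, then synthesize.  `ocr_remote` models the second server and
    only ever sees one sub-image at a time."""
    parts = split(target_image)                               # first preset rule
    texts = {pid: ocr_remote(piece) for pid, piece in parts.items()}
    return merge(texts)                                       # second preset rule
```

Because `ocr_remote` is invoked once per sub-image, auditing this loop is enough to confirm that the complete target image never crosses to the second server.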
As shown in fig. 11, an embodiment of the present invention provides a terminal device 500, where the terminal device 500 may include a sending module 501. The sending module 501 may be configured to send a target image to a first server, and send first indication information to the first server, where the target image may be an image acquired by a terminal device through a screen, the first indication information is used to indicate whether the target image includes privacy information of a user, and the first indication information is used by the first server to determine whether to segment the target image.
Optionally, with reference to fig. 11, as shown in fig. 12, in this embodiment of the present invention, the terminal device may further include a display module 502, a receiving module 503, and a processing module 504. The display module 502 may be configured to display target prompt information before the sending module 501 sends the first indication information to the first server, where the target prompt information is used to prompt the user to determine whether the target image includes the privacy information of the user; a receiving module 503, which may be used to receive a target input on the target prompt message displayed by the display module 502; the processing module 504 may be configured to generate the first indication information according to the target input received by the receiving module 503.
Optionally, in this embodiment of the present invention, the display module 502 may be specifically configured to display the target prompt information when the receiving module 503 receives the target content; the target content is a first input of a user or second indication information sent by a first server, the first input is used for triggering the terminal device to execute a screen identification function, and the second indication information is used for indicating the first server to receive a target image.
The embodiment of the invention provides a terminal device, which can send a target image to a first server and send first indication information to the first server. The target image is an image acquired by the terminal device through the screen, the first indication information is used to indicate whether the target image includes the privacy information of the user, and the first indication information is used by the first server to determine whether to segment the target image. With this arrangement, since the terminal device can send the first server the first indication information indicating whether the privacy information of the user is included in the target image, the first server segments the target image into a plurality of sub-images only in the case that the first indication information indicates that the target image includes the privacy information of the user, and does not segment the target image in the case that the first indication information indicates that it does not. Therefore, the privacy information of the user can be prevented from being leaked, and the first server can be prevented from performing unnecessary segmentation operations.
Fig. 13 is a hardware schematic diagram of a server according to an embodiment of the present invention. As shown in fig. 13, the server 600 may include: one or more processors 601 (processors beyond the first are illustrated by a dashed box in fig. 13), a memory 602, a communication interface 603, and a bus 604. The one or more processors 601, the memory 602, and the communication interface 603 are connected to one another via the bus 604 and communicate with one another.
The processor 601 may be configured to control the communication interface 603 to receive the target image sent by the terminal device through the bus 604, divide the target image into M sub-images according to a first preset rule, and control the communication interface 603 to send the M sub-images to the second server through the bus 604. The target image is an image acquired by the terminal equipment through the screen, and M is an integer greater than 1.
The embodiment of the invention provides a server, which can receive a target image sent by a terminal device, segment the target image into M sub-images according to a first preset rule, and then send the M sub-images to a second server. The target image is an image acquired by the terminal device through the screen, and M is an integer greater than 1. According to this scheme, the first server can segment the target image acquired by the terminal device through the screen during screen recognition into a plurality of sub-images and send them to the second server, so that the second server cannot obtain the complete target image; the second server can only identify the plurality of sub-images respectively to obtain a plurality of pieces of text information and cannot obtain the complete text information in the target image. Further, the privacy information of the user in the target image can be prevented from being leaked, and the security of the privacy information of the user is ensured.
It is understood that, in the embodiment of the present invention, the processor 601 may be the processing module 402 in the schematic structural diagram (for example, fig. 10) of the server in the embodiment; the communication interface 603 may be the receiving module 401 and the sending module 403 in the schematic structural diagram (for example, fig. 10) of the server in the above embodiment.
The bus 604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 604 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 13, but this is not intended to represent only one bus or type of bus. In addition, the server 600 may further include some other functional modules (for example, hard disks) not shown in fig. 13, and the embodiments of the present invention are not described herein again.
Fig. 14 is a hardware schematic diagram of a terminal device for implementing various embodiments of the present invention, and as shown in fig. 14, the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 14 is not intended to be limiting, and that terminal devices may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The radio frequency unit 101 may be configured to send a target image to a first server, and send first indication information to the first server, where the target image is an image acquired by a terminal device through a screen, the first indication information is used to indicate whether the target image includes privacy information of a user, and the first indication information is used by the first server to determine whether to segment the target image.
The embodiment of the invention provides a terminal device, which can send a target image to a first server and send first indication information to the first server. The target image is an image acquired by the terminal device through the screen, the first indication information is used to indicate whether the target image includes the privacy information of the user, and the first indication information is used by the first server to determine whether to segment the target image. With this arrangement, since the terminal device can send the first server the first indication information indicating whether the privacy information of the user is included in the target image, the first server segments the target image into a plurality of sub-images only in the case that the first indication information indicates that the target image includes the privacy information of the user, and does not segment the target image in the case that the first indication information indicates that it does not. Therefore, the privacy information of the user can be prevented from being leaked, and the first server can be prevented from performing unnecessary segmentation operations.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 14, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device; it connects the various parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a server, which includes the processor 110 shown in fig. 14, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the image processing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
Optionally, an embodiment of the present invention further provides a terminal device, which includes the processor 110 shown in fig. 14, the memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the image processing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method applied to a first server, the method comprising:
receiving a target image sent by a terminal device, wherein the target image is an image captured by the terminal device through a screen-recognition operation;
according to a first preset rule, dividing the target image into M sub-images, wherein M is an integer larger than 1;
and sending the M sub-images to a second server.
2. The method according to claim 1, wherein before the target image is divided into M sub-images according to the first preset rule, the method further comprises:
receiving first indication information sent by the terminal device, wherein the first indication information is used for indicating whether privacy information of a user is included in the target image;
the dividing the target image into M sub-images according to a first preset rule includes:
and under the condition that the first indication information indicates that the target image comprises the privacy information of the user, dividing the target image into the M sub-images according to the first preset rule.
3. The method of claim 1 or 2, wherein after sending the M sub-images to the second server, the method further comprises:
receiving M pieces of text information sent by the second server, wherein the M pieces of text information are obtained by identifying the M sub-images by the second server;
combining the M pieces of text information into target text information according to a second preset rule corresponding to the first preset rule, wherein the target text information is the text information in the target image;
and sending the target text information to the terminal device.
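The server-side flow of claims 1 to 3 (split the image into M sub-images, have the second server recognize each piece, then reassemble the text) can be sketched as follows. This is an illustrative sketch only: the claims leave the "first preset rule" and "second preset rule" unspecified, so a horizontal-band split and top-to-bottom concatenation are assumed here, and all function names are hypothetical.

```python
def split_target_image(rows: list[list[int]], m: int) -> list[list[list[int]]]:
    # Assumed "first preset rule": cut the image into m horizontal bands of
    # near-equal height, so no single band handed to the second server
    # exposes the whole content.
    h = len(rows)
    base, extra = divmod(h, m)
    bands, start = [], 0
    for i in range(m):
        end = start + base + (1 if i < extra else 0)
        bands.append(rows[start:end])
        start = end
    return bands

def merge_text_info(pieces: list[str]) -> str:
    # Assumed "second preset rule": the inverse of the split order, i.e.
    # concatenate the per-band recognition results top to bottom.
    return "\n".join(pieces)

# Toy 6-row "image" (each row is a list of pixel values) split into M = 3.
image = [[r, r] for r in range(6)]
bands = split_target_image(image, 3)
```

Because the second server only ever receives individual bands, no single downstream recognizer sees the complete image, which matches the privacy rationale suggested by claim 2.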
4. An image processing method applied to a terminal device, the method comprising:
sending a target image to a first server, wherein the target image is an image captured by the terminal device through a screen-recognition operation;
and sending first indication information to the first server, wherein the first indication information is used for indicating whether privacy information of a user is included in the target image, and the first indication information is used by the first server to determine whether to segment the target image.
5. The method according to claim 4, wherein before sending the first indication information to the first server, the method further comprises:
displaying target prompt information, wherein the target prompt information is used for prompting a user to determine whether privacy information of the user is included in the target image;
receiving a target input performed by the user on the target prompt information;
and generating the first indication information according to the target input.
6. The method of claim 5, wherein displaying the target prompt comprises:
under the condition that target content is received, displaying the target prompt information;
the target content is a first input of the user or second indication information sent by the first server, wherein the first input is used for triggering the terminal device to execute a screen-recognition function, and the second indication information is used for indicating that the first server has received the target image.
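The terminal-side behavior of claims 4 to 6, together with the conditional segmentation of claim 2, reduces to a small decision flow: the user's answer to the target prompt becomes the first indication information, and the first server segments the target image only when that indication says privacy information is present. A minimal sketch, with hypothetical type and function names:

```python
from dataclasses import dataclass

@dataclass
class FirstIndication:
    # First indication information (claim 4): whether the target image
    # includes the user's privacy information.
    has_privacy_info: bool

def build_first_indication(user_confirmed_privacy: bool) -> FirstIndication:
    # Claim 5: the indication is generated from the user's target input on
    # the target prompt, not inferred automatically by the terminal device.
    return FirstIndication(has_privacy_info=user_confirmed_privacy)

def server_should_segment(ind: FirstIndication) -> bool:
    # Claim 2: the first server divides the image into M sub-images only
    # when the indication says privacy information is present.
    return ind.has_privacy_info

# Example: the user confirms the prompt, so the server would segment.
ind = build_first_indication(user_confirmed_privacy=True)
```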
7. A server, characterized in that the server comprises a receiving module, a processing module and a sending module;
the receiving module is used for receiving a target image sent by a terminal device, wherein the target image is an image captured by the terminal device through a screen-recognition operation;
the processing module is used for dividing the target image received by the receiving module into M sub-images according to a first preset rule, wherein M is an integer greater than 1;
and the sending module is used for sending the M sub-images segmented by the processing module to a second server.
8. A terminal device, characterized in that the terminal device comprises a sending module;
the sending module is used for sending a target image to a first server and sending first indication information to the first server; the target image is an image captured by the terminal device through a screen-recognition operation, the first indication information is used for indicating whether the target image comprises privacy information of a user, and the first indication information is used by the first server to determine whether to segment the target image.
9. A server, characterized by comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 3.
10. Terminal device, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the image processing method according to any one of claims 4 to 6.
CN201911033623.8A (priority 2019-10-28, filed 2019-10-28) Image processing method, server and terminal equipment; Active; granted as CN110930410B

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201911033623.8A | 2019-10-28 | 2019-10-28 | Image processing method, server and terminal equipment
PCT/CN2020/123343 | 2019-10-28 | 2020-10-23 | Image processing method, server, and terminal device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201911033623.8A | 2019-10-28 | 2019-10-28 | Image processing method, server and terminal equipment

Publications (2)

Publication Number | Publication Date
CN110930410A | 2020-03-27
CN110930410B | 2023-06-23

Family ID: 69849618

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911033623.8A (Active; granted as CN110930410B) | Image processing method, server and terminal equipment | 2019-10-28 | 2019-10-28

Country Status (2)

Country Link
CN (1) CN110930410B (en)
WO (1) WO2021083058A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966316A (en) * 2020-08-25 2020-11-20 西安万像电子科技有限公司 Image data display method and device and image data display system
WO2021083058A1 (en) * 2019-10-28 2021-05-06 维沃移动通信有限公司 Image processing method, server, and terminal device
CN113688658A (en) * 2020-05-18 2021-11-23 华为技术有限公司 Object identification method, device, equipment and medium
CN114826734A (en) * 2022-04-25 2022-07-29 维沃移动通信有限公司 Character recognition method and device and electronic equipment
CN113688658B (en) * 2020-05-18 2024-06-28 华为云计算技术有限公司 Object identification method, device, equipment and medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113996058B (en) * 2021-11-01 2023-07-25 腾讯科技(深圳)有限公司 Information processing method, apparatus, electronic device, and computer-readable storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN105046133A (en) * 2015-07-21 2015-11-11 深圳市元征科技股份有限公司 Image display method and vehicle-mounted terminal
JP2016095592A (en) * 2014-11-12 2016-05-26 株式会社エンタシス Data entry system
CN106295398A (en) * 2016-07-29 2017-01-04 维沃移动通信有限公司 The guard method of privacy information and mobile terminal thereof
US20170185808A1 (en) * 2015-12-24 2017-06-29 Samsung Electronics Co., Ltd. Privacy protection method in a terminal device and the terminal device
CN107667382A (en) * 2015-06-26 2018-02-06 莱克斯真株式会社 Vehicle number code recognition device and its method
CN109803110A (en) * 2019-01-29 2019-05-24 维沃移动通信有限公司 A kind of image processing method, terminal device and server
CN110278327A (en) * 2019-06-10 2019-09-24 维沃移动通信有限公司 Data processing method and mobile terminal
WO2019201146A1 (en) * 2018-04-20 2019-10-24 维沃移动通信有限公司 Expression image display method and terminal device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8625113B2 (en) * 2010-09-24 2014-01-07 Ricoh Company Ltd System and method for distributed optical character recognition processing
CN109064373B (en) * 2018-07-17 2022-09-20 大连理工大学 Privacy protection method based on outsourcing image data entry
CN110930410B (en) * 2019-10-28 2023-06-23 维沃移动通信有限公司 Image processing method, server and terminal equipment


Non-Patent Citations (2)

Title
CHAO SHEN ET AL.: "Adaptive Human–Machine Interactive Behavior Analysis With Wrist-Worn Devices for Password Inference", IEEE Transactions on Neural Networks and Learning Systems *
GUI Yan; ZENG Guang; TANG Wen: "Fast robust image segmentation method based on bilateral grid and confidence color model", Journal of Computer-Aided Design & Computer Graphics, no. 07


Also Published As

Publication number Publication date
CN110930410B (en) 2023-06-23
WO2021083058A1 (en) 2021-05-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant