CN110930410B - Image processing method, server and terminal equipment - Google Patents


Info

Publication number
CN110930410B
Authority
CN
China
Prior art keywords
server
target
target image
information
image
Prior art date
Legal status
Active
Application number
CN201911033623.8A
Other languages
Chinese (zh)
Other versions
CN110930410A (en)
Inventor
张可
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911033623.8A
Publication of CN110930410A
Priority to PCT/CN2020/123343 (WO2021083058A1)
Application granted
Publication of CN110930410B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide an image processing method, a server and a terminal device, relating to the technical field of terminals, which can solve the problem of a user's private information being leaked when the user triggers the terminal device to perform a screen-recognition operation. The scheme includes the following steps: receiving a target image sent by a terminal device, where the target image is an image captured by the terminal device through screen recognition; dividing the target image into M sub-images according to a first preset rule, where M is an integer greater than 1; and sending the M sub-images to a second server. The scheme is applied to scenarios based on the screen-recognition function.

Description

Image processing method, server and terminal equipment
Technical Field
Embodiments of the present invention relate to the technical field of terminals, and in particular to an image processing method, a server and a terminal device.
Background
As terminal devices become increasingly intelligent, they support more and more functions.
Currently, a user can trigger a terminal device to start a screen-recognition function in order to recognize characters in an image displayed in the screen area of the terminal device. Specifically, when the user triggers the terminal device to start the screen-recognition function, the terminal device may capture the image displayed in the current screen area and send it to a vendor server of the terminal device. After receiving the image, the vendor server may forward it to a third-party server (for example, a server of the developer who developed the screen-recognition function). The third-party server may recognize the image using optical character recognition (OCR) technology to obtain the text information in the image, and then return the text information to the vendor server, which sends it on to the terminal device. After receiving the text information, the terminal device may perform entity recognition on it and display the entity-recognition result, so that the user can view the result on the terminal device.
However, in the above process, because the image captured by the terminal device has to be recognized by the third-party server, the user's private information may be leaked when the captured image contains such information.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, a server and a terminal device, to solve the problem of a user's private information being leaked when the user triggers the terminal device to perform a screen-recognition operation.
In order to solve the above technical problems, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method applied to a first server. The method includes: receiving a target image sent by a terminal device, dividing the target image into M sub-images according to a first preset rule, and then sending the M sub-images to a second server. The target image is an image captured by the terminal device through screen recognition, and M is an integer greater than 1.
In a second aspect, an embodiment of the present invention provides an image processing method applied to a terminal device. The method includes: sending a target image to a first server, and sending first indication information to the first server. The target image is an image captured by the terminal device through screen recognition; the first indication information indicates whether the target image includes the user's private information and is used by the first server to determine whether to segment the target image.
In a third aspect, an embodiment of the present invention provides a server, which may include a receiving module, a processing module, and a sending module. The receiving module is configured to receive a target image sent by a terminal device, where the target image is an image captured by the terminal device through screen recognition; the processing module is configured to divide the target image received by the receiving module into M sub-images according to a first preset rule, where M is an integer greater than 1; and the sending module is configured to send the M sub-images obtained by the processing module to a second server.
In a fourth aspect, an embodiment of the present invention provides a terminal device, which may include a sending module. The sending module is configured to send a target image to a first server and to send first indication information to the first server. The target image is an image captured by the terminal device through screen recognition; the first indication information indicates whether the target image includes the user's private information and is used by the first server to determine whether to segment the target image.
In a fifth aspect, an embodiment of the present invention provides a server comprising a processor, a memory and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method of the first aspect.
In a sixth aspect, an embodiment of the present invention provides a terminal device comprising a processor, a memory and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method of the second aspect.
In a seventh aspect, embodiments of the present invention provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing methods of the first and second aspects.
In the embodiments of the present invention, the first server can receive a target image sent by a terminal device (the target image being an image captured by the terminal device through screen recognition), divide the target image into M sub-images (M being an integer greater than 1) according to a first preset rule, and then send the M sub-images to a second server. With this scheme, the first server divides the image captured through screen recognition into multiple sub-images before sending them to the second server, so the second server never obtains the complete target image. The second server can therefore only recognize the individual sub-images to obtain separate pieces of text information, and cannot obtain the complete text information of the target image. This prevents the user's private information in the target image from being leaked and safeguards the security of the user's private information.
Drawings
Fig. 1 is a schematic architecture diagram of an android operating system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interface to which an image processing method according to an embodiment of the present invention is applied;
FIG. 4 is a second schematic diagram of an image processing method according to an embodiment of the present invention;
FIG. 5 is a third schematic diagram of an image processing method according to an embodiment of the present invention;
FIG. 6 is a second diagram of an interface for an image processing method according to an embodiment of the present invention;
FIG. 7 is a third exemplary diagram of an interface for applying an image processing method according to an embodiment of the present invention;
FIG. 8 is a fourth schematic diagram of an image processing method according to an embodiment of the present invention;
FIG. 9 is a fourth schematic diagram of an interface to which an image processing method according to an embodiment of the present invention is applied;
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 12 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 13 is a hardware schematic of a server according to an embodiment of the present invention;
Fig. 14 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. Evidently, the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present application.
The term "and/or" herein describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The symbol "/" herein indicates an "or" relationship between the associated objects; for example, "A/B" means A or B.
The terms "first" and "second" and the like herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first indication information and the second indication information are used to distinguish between different indication information, and are not used to describe a particular order of indication information.
In embodiments of the invention, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as being preferred over, or more advantageous than, other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise indicated, "a plurality of" means two or more; for example, a plurality of elements means two or more elements.
Embodiments of the present invention provide an image processing method, a server and a terminal device. A first server can receive a target image sent by a terminal device (an image captured by the terminal device through screen recognition), divide it into M sub-images (M being an integer greater than 1) according to a first preset rule, and then send the M sub-images to a second server. Because the second server never obtains the complete target image, it can only recognize the individual sub-images to obtain separate pieces of text information and cannot recover the complete text information of the target image, which prevents the user's private information in the target image from being leaked and safeguards the security of the user's private information.
The terminal device in the embodiments of the invention may be a terminal device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The software environment to which the image processing method provided by the embodiment of the invention is applied is described below by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, respectively: an application program layer, an application program framework layer, a system runtime layer and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third party application programs) in the android operating system.
The application framework layer is the framework for applications; developers can develop applications based on the application framework layer while following its development principles.
The system runtime layer includes libraries (also referred to as system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of the android operating system, and belongs to the bottommost layer of the software hierarchy of the android operating system. The kernel layer provides core system services and a driver related to hardware for the android operating system based on a Linux kernel.
Taking the android operating system as an example, in the embodiments of the present invention, a developer may develop a software program implementing the image processing method provided herein based on the system architecture of the android operating system shown in fig. 1, so that the image processing method can run on that operating system. That is, the processor or the terminal device can implement the image processing method provided by the embodiments of the present invention by running the software program in the android operating system.
The terminal device in the embodiment of the invention can be a mobile terminal or a non-mobile terminal. By way of example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), and the like, and the non-mobile terminal may be a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, or the like, and the embodiments of the present invention are not limited in particular.
The execution body of the image processing method provided by the embodiments of the present invention may be the terminal device, or a functional module and/or functional entity in the terminal device capable of implementing the method; this may be determined according to actual use requirements and is not limited by the embodiments of the present invention. The image processing method is described below by way of example with the terminal device as the execution body.
In the embodiments of the present invention, when the user triggers the terminal device to execute the screen-recognition function (for example, by pressing the screen of the terminal device with a single input such as a two-finger press), the terminal device can capture the image currently displayed on its screen (i.e., the image acquired through screen recognition) and send it to the first server. After receiving the image, the first server can divide it into multiple sub-images according to a certain rule and send the sub-images to the second server. After receiving the sub-images, the second server can recognize them to obtain multiple pieces of text information and send those to the first server. The first server then synthesizes the pieces into target text information (i.e., the text information in the captured image) and sends it to the terminal device, so that the terminal device can display the target text information to the user as the result of the screen-recognition function.
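The flow above can be sketched end to end as follows. This is a minimal illustration rather than the patented implementation: the image is modeled as a list of text rows, the second server's OCR step is a trivial stub, and all function names (`split_image`, `recognize_stub`, `synthesize`) are hypothetical.

```python
# Minimal sketch of the screen-recognition flow: the first server splits the
# captured image, the second server recognizes each piece, and the first
# server reassembles the text. The "image" here is just a list of text rows
# standing in for pixel data; recognition is a stub.

def split_image(rows, m):
    """First server: divide the image rows into m sub-images (top to bottom)."""
    size = -(-len(rows) // m)  # ceiling division
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def recognize_stub(sub_image):
    """Second server: stand-in for OCR; returns the 'text' in one sub-image."""
    return "\n".join(sub_image)

def synthesize(texts):
    """First server: recombine per-sub-image text in segmentation order."""
    return "\n".join(texts)

screen = ["name: Alice", "phone: 123456", "note: hello"]
subs = split_image(screen, 2)          # the second server never sees `screen` whole
texts = [recognize_stub(s) for s in subs]
result = synthesize(texts)
print(result == "\n".join(screen))     # reassembled text matches the original
```

Note that each sub-image individually reveals only a fragment of the screen content, which is the privacy property the scheme relies on.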
In the above process, the first server divides the image captured through screen recognition into multiple sub-images before sending them to the second server, so the second server cannot obtain the complete image. The second server can therefore only recognize the individual sub-images to obtain separate pieces of text information, and cannot obtain the complete text information of the captured image, which prevents the user's private information in the image from being leaked and ensures its security.
The image processing method provided by the embodiments of the present invention is described in detail below with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides an image processing method, which may include S201 to S205 described below.
S201, the terminal equipment sends the target image to the first server.
The target image may be an image acquired by the terminal device through screen recognition.
Specifically, in the embodiments of the present invention, when a user needs to obtain the text information in an image displayed on the screen of a terminal device, the user may trigger the terminal device to perform the screen-recognition function through one input (for example, a two-finger press on the screen). The terminal device may then capture, through screen recognition, the image currently displayed on its screen (which may also be described as the terminal device taking a screenshot of the currently displayed image), i.e., the target image, and send the captured target image to the first server.
Optionally, in an embodiment of the present invention, the first server may be a server of a manufacturer of the terminal device.
S202, the first server receives the target image.
S203, the first server divides the target image into M sub-images according to a first preset rule.
M may be an integer greater than 1; that is, the first server may divide the target image into multiple sub-images.
Optionally, in the embodiment of the present invention, the first preset rule may be any possible form of segmentation method such as "left-right segmentation", "up-down segmentation", "grid segmentation", etc., and may specifically be determined according to actual use requirements, which is not limited in the embodiment of the present invention.
The above S203 will be exemplarily described with reference to fig. 3.
Illustratively, fig. 3(a) shows a schematic diagram of a target image (31 in fig. 3(a)). Assuming that M is 2, the first server may divide the image into a first sub-image (32 in fig. 3(b)) and a second sub-image (33 in fig. 3(b)) according to an "up-down" preset rule (i.e., the first preset rule). Alternatively, the first server may divide the image into a third sub-image (34 in fig. 3(c)) and a fourth sub-image (35 in fig. 3(c)) according to a "left-right" preset rule (i.e., the first preset rule).
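The "up-down" and "left-right" rules illustrated in fig. 3 can be sketched on a pixel grid as follows. This is a hedged illustration: nested lists stand in for image data, a two-way split is assumed (M = 2), and the function names are invented for the sketch.

```python
# Sketch of the first preset rule on a pixel grid (nested lists stand in
# for image data). "Up-down" splits the rows; "left-right" splits the
# columns. The two-way split and all names are illustrative only.

def split_up_down(pixels):
    mid = len(pixels) // 2
    return pixels[:mid], pixels[mid:]

def split_left_right(pixels):
    mid = len(pixels[0]) // 2
    return [row[:mid] for row in pixels], [row[mid:] for row in pixels]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

top, bottom = split_up_down(image)      # two 2x4 halves
left, right = split_left_right(image)   # two 4x2 halves
print(len(top), len(bottom), len(left[0]), len(right[0]))  # 2 2 2 2
```

A "grid" rule would simply compose the two, splitting both rows and columns.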
Optionally, in the embodiment of the present invention, after the first server segments the target image into M sub-images, the first server may set an Identifier (ID) for each of the M sub-images in turn according to the segmentation order. Wherein the ID set by the first server for each sub-image may be unique, i.e. each ID may uniquely indicate one sub-image.
In the embodiment of the present invention, after the M sub-images are processed (for example, the second server described below identifies the M sub-images to obtain M text information), the IDs corresponding to the M sub-images are not changed.
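The ID scheme could be sketched as follows. The zero-padded "01"/"02" format mirrors the IDs in the later figure example, but the exact format and the shuffled send order shown here are assumptions for illustration.

```python
import random

def assign_ids(sub_images):
    # Unique ID per sub-image, assigned in segmentation order; each ID
    # uniquely indicates one sub-image.
    return {f"{i:02d}": sub for i, sub in enumerate(sub_images, start=1)}

labeled = assign_ids(["<sub-image A>", "<sub-image B>", "<sub-image C>"])

# Because the IDs travel with the sub-images (and survive processing),
# the first server may send them to the second server in any order --
# even shuffled -- and still reassemble the results later.
to_send = list(labeled.items())
random.shuffle(to_send)

print(sorted(labeled))  # ['01', '02', '03']
```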
S204, the first server sends M sub-images to the second server.
S205, the second server receives M sub-images.
Optionally, in an embodiment of the present invention, the second server may be a third party server. For example, the second server may be a server of a developer who develops the screen recognition function.
Optionally, in the embodiment of the present invention, when the first server sends M sub-images to the second server, the first server may send each of the M sub-images in a random order, or the first server may send each of the M sub-images in turn in an order of IDs of the M sub-images.
In the embodiments of the present invention, the first server can divide the target image captured by the terminal device through screen recognition into multiple sub-images and then send them to the second server, so that the second server cannot obtain the complete target image. The second server can only recognize the individual sub-images to obtain separate pieces of text information, and cannot obtain the complete text information of the target image, thereby preventing the user's private information in the target image from being leaked and ensuring its security.
Optionally, in the embodiment of the present invention, before the first server divides the target image into M sub-images according to the first preset rule, the first server may further receive first indication information sent by the terminal device, and in a case that the first indication information indicates that the target image includes privacy information of the user, the first server may divide the target image into M sub-images according to the first preset rule.
For example, in conjunction with fig. 2, as shown in fig. 4, before S203, the image processing method provided in the embodiment of the present invention may further include S206 to S207 described below. The step S203 may be specifically realized by the following step S203 a.
S206, the terminal equipment sends first indication information to the first server.
S207, the first server receives the first indication information.
The first indication information may be used to indicate whether privacy information of the user is included in the target image. The first indication information may also be used for the first server to determine whether to segment the target image, i.e. after the first server receives the first indication information, the first server may determine whether to segment the target image according to the first indication information.
Specifically, in the case where the first indication information indicates that the privacy information of the user is included in the target image, the first server may determine to divide the target image in order to prevent disclosure of the privacy information of the user in the target image; in the case where the first indication information indicates that the privacy information of the user is not included in the target image, the first server may determine not to segment the target image in order to avoid the first server from performing an unnecessary segmentation operation.
S203a, in a case where the first indication information indicates that the target image includes privacy information of the user, the first server segments the target image into M sub-images according to a first preset rule.
In the embodiment of the invention, when the first indication information indicates that the target image includes the privacy information of the user, the first server may divide the target image into M sub-images according to the first preset rule, and send the M sub-images obtained by dividing to the second server.
In the embodiments of the present invention, if the first indication information indicates that the target image does not include the user's private information, the first server can send the target image directly to the second server.
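The first server's branch on the first indication information (segment in S203a, or forward the image whole) might look like this sketch; `handle_target_image` and its arguments are hypothetical names, and the image is again modeled as a list of rows.

```python
# Sketch of the first server's decision: segment only when the terminal's
# first indication information says the image contains private data;
# otherwise forward the image whole and skip the unnecessary segmentation.

def handle_target_image(image_rows, contains_private_info, m=2):
    """Return the payload(s) the first server would forward to the second server."""
    if contains_private_info:
        size = -(-len(image_rows) // m)  # ceiling division
        return [image_rows[i:i + size] for i in range(0, len(image_rows), size)]
    # No private info indicated: send the target image as-is.
    return [image_rows]

private = handle_target_image(["id: 1234", "balance: $5"], True)
public = handle_target_image(["weather: sunny"], False)
print(len(private), len(public))  # 2 1
```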
It should be noted that the execution order of S201-S202 and S206-S207 is not limited in the embodiments of the present invention. That is, S201-S202 may be performed before S206-S207, after them, or simultaneously with them.
In the embodiments of the present invention, because the terminal device can send the first server indication information stating whether the target image includes the user's private information, the first server divides the target image into multiple sub-images only when the indication information says private information is present, and leaves the image undivided otherwise. This not only prevents the user's private information from being leaked but also spares the first server unnecessary segmentation operations.
Optionally, in the embodiments of the present invention, after the second server receives the M sub-images sent by the first server, the second server may recognize the M sub-images to obtain M pieces of text information and send them to the first server. The first server can then synthesize the M pieces of text information into target text information according to the second preset rule and send the target text information to the terminal device, so that the terminal device can display it as the screen-recognition result for the user.
For example, in conjunction with fig. 3, as shown in fig. 5, after S205, the image processing method provided in the embodiments of the present invention may further include S208 to S213 described below.
S208, the second server recognizes the M sub-images to obtain M pieces of text information.
In the embodiments of the present invention, after the second server receives the M sub-images sent by the first server, the second server can recognize each of the M sub-images separately to obtain the text information in each sub-image, thereby obtaining M pieces of text information; that is, the M pieces of text information are obtained by the second server recognizing the M sub-images.
Optionally, in the embodiments of the present invention, the second server may recognize each sub-image using OCR technology to obtain the text information in each sub-image. OCR technology may include image processing techniques and pattern recognition techniques: the second server may first process each sub-image using image processing techniques, and then recognize each processed sub-image using pattern recognition techniques, thereby obtaining the text information in each sub-image.
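The two OCR stages can be sketched as follows. Thresholding is shown as one representative image-processing step, and the pattern-recognition stage is a stub because the patent names no particular engine (a real deployment might use a library such as Tesseract); all names here are illustrative.

```python
# Sketch of the second server's two OCR stages for one sub-image:
# (1) image processing -- here, thresholding a grayscale grid to black/white;
# (2) pattern recognition -- stubbed out, since no engine is specified.

def binarize(gray, threshold=128):
    """Stage 1: image processing. 1 = ink (dark pixel), 0 = background."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def recognize_patterns(binary):
    """Stage 2: pattern recognition (stub). Counts ink pixels in place of
    matching glyph shapes, just to keep the sketch runnable."""
    ink = sum(sum(row) for row in binary)
    return f"<{ink} ink pixels recognized>"

sub_image = [[200, 40, 210],
             [35, 50, 220],
             [230, 240, 30]]
text = recognize_patterns(binarize(sub_image))
print(text)  # <4 ink pixels recognized>
```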
S209, the second server sends M pieces of text information to the first server.
S210, the first server receives the M pieces of text information.
S211, the first server synthesizes the M pieces of text information into target text information according to a second preset rule corresponding to the first preset rule.
The target text information may be the text information in the target image. It can be understood that, in the embodiment of the present invention, after the first server synthesizes the M pieces of text information according to the second preset rule, the synthesized target text information is the text information in the target image.
In the embodiment of the invention, after the first server receives the M pieces of text information, the first server may synthesize the M pieces of text information into the target text information according to the second preset rule, in the order of the IDs corresponding to the pieces of text information. The second preset rule adopted when the first server synthesizes the text information corresponds to the first preset rule adopted when the first server segments the target image. For example, assuming that the first preset rule is a "split left and right" rule, i.e., the first server splits the target image into the M sub-images left to right, then the first server may synthesize the M pieces of text information into the target text information according to the corresponding "synthesize left and right" rule (i.e., the second preset rule). Alternatively, assuming that the first preset rule is a "split up and down" rule, the first server may synthesize the M pieces of text information into the target text information according to the corresponding "synthesize up and down" rule. In this way, it can be ensured that the synthesized target text information is completely consistent with the text information in the target image (in content, form, etc.).
The above-described S207 to S2011 are exemplarily described below with reference to fig. 6.
Illustratively, in conjunction with fig. 3 described above, assume that M is 2 and the first server divides the target image 31 into the first sub-image 34 (ID: 01) and the second sub-image 35 (ID: 02) according to the "split left and right" rule. After the first server transmits the first sub-image 34 and the second sub-image 35 to the second server, as shown in fig. 6, the second server may recognize the first sub-image 34 to obtain the first text information (shown as 61 in (a) of fig. 6, ID: 01), and recognize the second sub-image 35 to obtain the second text information (shown as 62 in (a) of fig. 6, ID: 02). The second server may then send the first text information 61 and the second text information 62 to the first server. After receiving them, the first server may synthesize the target text information (shown as 63 in (b) of fig. 6) according to the "synthesize left and right" rule (i.e., the second preset rule), in the order of the IDs of the first text information 61 and the second text information 62. It can be understood that the target text information 63 is the text information in the target image 31 shown in (a) of fig. 3.
It should be noted that, in the embodiment of the present invention, the ID of a sub-image is the same as the ID of the text information obtained by identifying that sub-image, that is, the ID of a sub-image is the same as the ID of the text information in that sub-image. Illustratively, as shown in fig. 3 and fig. 6, the ID of the first sub-image 34 is identical to the ID of the first text information 61, and the ID of the second sub-image 35 is identical to the ID of the second text information 62. In this way, the target image is divided into the M sub-images in a certain order, and the M pieces of text information obtained by identifying the M sub-images are synthesized in the same order, so that the synthesized target text information is completely consistent with the text information in the target image.
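The split-and-merge bookkeeping described in S208 to S2011 can be sketched as follows. This is only an illustrative sketch: the "image" is modeled as rows of characters, `recognize()` is a placeholder for the second server's OCR, and the sample text is invented; none of these names or data come from the patent itself.

```python
# Illustrative sketch: "split left and right" into two ID-tagged sub-images,
# per-sub-image recognition, and ID-ordered "synthesize left and right".
# The image is modeled as rows of characters; recognize() stands in for the
# second server's OCR.

def split_left_right(image_rows, m=2):
    """First preset rule: cut each row into m vertical strips, one sub-image each."""
    width = len(image_rows[0])
    step = width // m
    return {
        f"{i + 1:02d}": [row[i * step:(i + 1) * step if i < m - 1 else width]
                         for row in image_rows]
        for i in range(m)
    }

def recognize(sub_image_rows):
    """Placeholder for the second server's OCR: here the 'pixels' are the text."""
    return "\n".join(sub_image_rows)

def merge_left_right(texts_by_id):
    """Second preset rule: re-join the strips row by row, in ascending ID order."""
    ordered = [texts_by_id[k].split("\n") for k in sorted(texts_by_id)]
    return "\n".join("".join(parts) for parts in zip(*ordered))

target_image = ["CHINA XX BANK", "NO. 6222 0000"]   # invented sample content
subs = split_left_right(target_image)                # {'01': ..., '02': ...}
texts = {sub_id: recognize(rows) for sub_id, rows in subs.items()}
restored = merge_left_right(texts)
assert restored == "\n".join(target_image)
```

Note that neither sub-image alone contains the full account-number string, which is exactly the privacy property the patent relies on: only the first server, which holds both ID-ordered pieces, can reassemble the complete text.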
And S2012, the first server sends the target text information to the terminal equipment.
S2013, the terminal equipment receives the target text information.
Optionally, in an embodiment of the present invention, in a possible implementation manner, after the terminal device receives the target text information sent by the first server, the terminal device may display the target text information, so that the user may view the target text information through the terminal device and operate on the text information.
For example, in connection with fig. 6 described above, as shown in fig. 7, after the terminal device receives the target text information transmitted by the first server, the terminal device may display the target text information in the form of text identifiers (a plurality of text identifiers are shown at 71 in fig. 7; the text information indicated by these text identifiers is identical to the target text information shown at 63 in (b) of fig. 6).
Optionally, in an embodiment of the present invention, in one possible implementation manner, when the terminal device displays the target text information, at least one operation control may also be displayed (as shown at 72 in fig. 7). By performing an input on a piece of content in the target text information and on an operation control of the at least one operation control, the user can trigger the terminal device to execute the action corresponding to that content and that operation control.
Illustratively, as shown in fig. 7, the terminal device displays a plurality of text identifiers (shown at 71 in fig. 7, which may be used to indicate the target text information) and a plurality of operation controls (shown at 72 in fig. 7). By inputting on one of the text identifiers shown at 71 and one of the operation controls shown at 72, the user may trigger the terminal device to perform the action corresponding to the text information indicated by that text identifier and that operation control. For example, after the user inputs on the text identifier "China XX Bank" and the operation control "search", i.e., the terminal device receives the input of the user, the terminal device may, in response to the input, call a search application installed in the terminal device and search for information related to "China XX Bank" using "China XX Bank" as the search keyword. After the terminal device finds the related information, the terminal device may display it so that the user may view it.
In another possible implementation manner, after the terminal device receives the target text information sent by the first server, the terminal device may extract certain specific nouns in the target text information and perform entity recognition on these specific nouns, and then display the result of the entity recognition for the user to view.
In the embodiment of the present invention, the specific nouns may be person names, place names, organization names, proper nouns, and other nouns. The other nouns may include the names of articles of daily life related to food, clothing, daily use, and travel, for example, "chopsticks" and "backpack".
For example, assume the target text information is "I like the down jacket". After the terminal device receives "I like the down jacket" sent by the first server, the terminal device may extract the other noun "down jacket" and then perform entity recognition on "down jacket". For example, the terminal device may invoke a shopping application installed in the terminal device to find commodity links related to "down jacket", and after the terminal device finds such commodity links, the terminal device may display them so that the user may view them.
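A minimal sketch of the extraction step described above, assuming a small hard-coded noun lexicon and a placeholder entity-recognition action (both are illustrative assumptions, not part of the patent, which would use a real noun dictionary and application invocation):

```python
# Illustrative sketch of "specific noun" extraction followed by an
# entity-recognition action. The lexicon and its action categories are
# hypothetical stand-ins for whatever dictionary the terminal device uses.

LEXICON = {
    "down jacket": "shopping",     # assumed category: shopping-application lookup
    "chopsticks": "shopping",
    "China XX Bank": "search",     # assumed category: search-application lookup
}

def extract_specific_nouns(target_text):
    """Return every lexicon entry that appears in the target text, in lexicon order."""
    return [noun for noun in LEXICON if noun in target_text]

def entity_action(noun):
    """Placeholder: a real terminal would invoke the matching application here."""
    return f"open {LEXICON[noun]} results for '{noun}'"

hits = extract_specific_nouns("I like the down jacket")
assert hits == ["down jacket"]
```

A substring scan like this is only a demonstration; production entity recognition would use a trained named-entity model rather than exact lexicon matches.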
In the embodiment of the invention, in the process in which the user triggers the terminal device to execute the screen recognition function, the first server divides the target image into the M sub-images according to a preset rule and then sends the M sub-images to the second server for recognition. After the first server receives the M pieces of text information, which are obtained by the second server identifying the M sub-images and sent by the second server, the first server can synthesize the M pieces of text information into complete target text information according to the corresponding preset rule. Therefore, after the first server sends the target text information to the terminal device, the terminal device can obtain the complete text information in the target image. In this way, the terminal device can be ensured to normally execute the screen recognition function, and the privacy information of the user can be prevented from being leaked in the process of executing the screen recognition function.
Furthermore, by executing the screen recognition function, the terminal device can display the target text information to the user, so that the user can operate on the target text information. This solves the problem in the conventional technology that the user cannot directly operate on the text information in an image, and can therefore improve user experience and human-machine interaction performance.
Optionally, in the embodiment of the present invention, before the terminal device sends the first indication information to the first server, the terminal device may first display a prompt message to prompt the user to determine whether the target image includes the privacy information of the user. The terminal device may then generate the first indication information according to the user input on the prompt.
For example, in conjunction with fig. 4, as shown in fig. 8, before S206, the image processing method provided in the embodiment of the present invention may further include S2014 to S2016 described below.
S2014, the terminal equipment displays the target prompt information.
The target prompt information may be used to prompt the user to determine whether the target image includes privacy information of the user.
Optionally, in an embodiment of the present invention, the target prompt information may include a first prompt content and a first prompt option. The first prompting content may be used to prompt a user to determine whether privacy information of the user is included in the target image. The first prompt options may include a first option and a second option; the first option may be used to determine that the target image includes privacy information of the user, i.e., user input to the first option may be used to determine that the target image includes privacy information of the user; the second option may be used to determine that the privacy information of the user is not included in the target image, i.e. the user's input of the second option may be used to determine that the privacy information of the user is not included in the target image.
For example, as shown in fig. 9, the target hint information may be "whether privacy information is included" (as shown at 91 in fig. 9), the first option may be a "yes" option (as shown at 92 in fig. 9), and the second option may be a "no" option (as shown at 93 in fig. 9).
Alternatively, in the embodiment of the present invention, the above S2014 may be specifically implemented by the following S2014 a.
S2014a, the terminal equipment displays target prompt information under the condition that target content is received.
The target content may be a first input of the user, or may be the second indication information sent by the first server. The first input may be used to trigger the terminal device to perform the screen recognition function, and the second indication information may be used to indicate that the first server has received the target image.
Specifically, in one possible implementation manner of the embodiment of the present invention, in the case that the target content is the first input, after the user triggers the terminal device to perform the screen recognition function (i.e., the user performs the first input; for example, the user may trigger the screen recognition function by pressing the screen of the terminal device with two fingers), the terminal device may collect the image currently displayed on the screen of the terminal device (i.e., the target image) and display the target prompt information, so as to prompt the user to determine whether the target image includes privacy information of the user. In another possible implementation manner, in the case that the target content is the second indication information, after the user triggers the terminal device to execute the screen recognition function, the terminal device may collect the image currently displayed on the screen (i.e., the target image) and send the target image to the first server. After the first server receives the target image, the first server may send the second indication information, which indicates that the first server has received the target image, to the terminal device. After the terminal device receives the second indication information, the terminal device may display the target prompt information, so as to prompt the user to determine whether the target image includes privacy information of the user.
S2015, the terminal equipment receives target input of the user on the target prompt information.
Optionally, in the embodiment of the present invention, the target input may be a first input or a second input, where the first input is used to determine that the target image includes privacy information of the user, and the second input is used to determine that the target image does not include privacy information of the user.
Illustratively, the first input may be a user input of the "Yes" option described above as 92 in FIG. 9, and the second input may be a user input of the "No" option described above as 93 in FIG. 9.
It should be noted that the embodiment of the present invention does not limit the input form of the target input, which may be specifically determined according to actual use requirements.
S2016, the terminal equipment generates first indication information according to target input.
In the embodiment of the invention, after the terminal device receives the first input of the user on the target prompt information, the first indication information generated by the terminal device according to the first input may be used for indicating that the target image includes privacy information of the user. After the terminal device receives the second input of the user on the target prompt information, the first indication information generated by the terminal device according to the second input may be used for indicating that the target image does not include privacy information of the user.
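The mapping from the target input to the first indication information (S2015 and S2016) can be sketched as follows. The option values "yes"/"no" (matching the options in fig. 9) and the field name `contains_privacy_info` are assumptions made for the example, not a message format defined by the patent.

```python
# Illustrative sketch: the terminal device turns the user's selection on the
# target prompt information into the first indication information to be sent
# to the first server. Field and option names are hypothetical.

def generate_first_indication(selected_option):
    """Map the 'Yes'/'No' prompt option to the first indication information."""
    if selected_option not in ("yes", "no"):
        raise ValueError("target input must select the first or second option")
    return {"contains_privacy_info": selected_option == "yes"}

assert generate_first_indication("yes") == {"contains_privacy_info": True}
assert generate_first_indication("no") == {"contains_privacy_info": False}
```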
In the embodiment of the invention, since the terminal device can generate different first indication information according to different inputs of the user on the target prompt information, after the terminal device sends the first indication information to the first server, the first server can perform different operations according to the indication of the first indication information, that is, the first server can accurately determine whether to segment the target image. In this way, not only can leakage of the privacy information of the user be avoided, but unnecessary segmentation operations by the first server can also be avoided.
In the embodiment of the present invention, the image processing methods shown in the foregoing method drawings are all exemplified by the embodiment of the present invention in combination with one drawing. In specific implementation, the image processing method shown in the foregoing method drawings may also be implemented in combination with any other drawing that may be illustrated in the foregoing embodiments, and will not be described herein.
As shown in fig. 10, an embodiment of the present invention provides a server 400, where the server 400 may include a receiving module 401, a processing module 402, and a transmitting module 403. The receiving module 401 may be configured to receive a target image sent by a terminal device, where the target image is an image acquired by the terminal device through screen recognition; the processing module 402 may be configured to divide, according to a first preset rule, the target image received by the receiving module 401 into M sub-images, where M is an integer greater than 1; the sending module 403 may be configured to send the M sub-images split by the processing module 402 to the second server.
Optionally, in the embodiment of the present invention, the receiving module 401 is further configured to receive, before the processing module 402 divides the target image into M sub-images according to a first preset rule, first indication information sent by the terminal device, where the first indication information is used to indicate whether privacy information of a user is included in the target image; the processing module 402 may be specifically configured to divide the target image into M sub-images according to a first preset rule when the first indication information received by the receiving module 401 indicates that the target image includes privacy information of the user.
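The first server's branch on the first indication information, described for the processing module 402 above, can be sketched as follows. The byte-halving "split" and the field name `contains_privacy_info` are illustrative stand-ins for the first preset rule and the actual message format.

```python
# Illustrative sketch of the first server's decision: divide the target image
# into M sub-images only when the first indication information reports privacy
# content; otherwise forward the image whole. The image is raw bytes here, and
# the contiguous byte split is a stand-in for a real left/right image split.

def handle_target_image(image_bytes, first_indication, m=2):
    if not first_indication.get("contains_privacy_info"):
        return [image_bytes]                      # no segmentation needed
    step = (len(image_bytes) + m - 1) // m        # ceiling division across M parts
    return [image_bytes[i * step:(i + 1) * step] for i in range(m)]

parts = handle_target_image(b"JPEGDATA", {"contains_privacy_info": True})
assert parts == [b"JPEG", b"DATA"]
```

Skipping the split when no privacy information is present is exactly the "unnecessary segmentation" saving the patent claims for this indication mechanism.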
Optionally, in the embodiment of the present invention, the receiving module 401 may be further configured to receive M pieces of text information sent by the second server after the sending module 403 sends M sub-images to the second server, where the M pieces of text information are obtained by identifying the M sub-images by the second server; the processing module 402 may be further configured to synthesize, according to a second preset rule corresponding to the first preset rule, M pieces of text information received by the receiving module 401 into target text information, where the target text information is text information in a target image; the sending module 403 may be further configured to send the target text information synthesized by the processing module 402 to a terminal device.
The embodiment of the invention provides a server, which can receive a target image sent by a terminal device, divide the target image into M sub-images according to a first preset rule, and then send the M sub-images to a second server. The target image is an image acquired by the terminal device through screen recognition, and M is an integer greater than 1. With this scheme, the first server can divide the target image acquired by the terminal device through screen recognition into a plurality of sub-images before sending them to the second server, so that the second server cannot acquire the complete target image. As a result, the second server can only recognize the plurality of sub-images separately to obtain a plurality of pieces of text information, and cannot obtain the complete text information in the target image. Thus, the privacy information of the user in the target image can be prevented from being leaked, and the security of the privacy information of the user is ensured.
As shown in fig. 11, an embodiment of the present invention provides a terminal device 500, where the terminal device 500 may include a transmitting module 501. The sending module 501 may be configured to send a target image to a first server, and send first indication information to the first server, where the target image may be an image acquired by a terminal device through screen recognition, the first indication information is used to indicate whether privacy information of a user is included in the target image, and the first indication information is used by the first server to determine whether to partition the target image.
Optionally, in conjunction with fig. 11, as shown in fig. 12, in an embodiment of the present invention, the terminal device may further include a display module 502, a receiving module 503, and a processing module 504. The display module 502 may be configured to display, before the sending module 501 sends the first indication information to the first server, target prompt information, where the target prompt information is used to prompt a user to determine whether the target image includes privacy information of the user; a receiving module 503, configured to receive a target input from a user on the target prompt information displayed by the display module 502; the processing module 504 may be configured to generate the first indication information according to the target input received by the receiving module 503.
Optionally, in the embodiment of the present invention, the display module 502 may be specifically configured to display the target prompt information when the receiving module 503 receives the target content; the target content is a first input of a user or second indication information sent by a first server, the first input is used for triggering the terminal equipment to execute a screen recognition function, and the second indication information is used for indicating the first server to receive a target image.
The embodiment of the invention provides a terminal device, which can send a target image to a first server and send first indication information to the first server. The target image is an image acquired by the terminal device through screen recognition, the first indication information is used for indicating whether the target image includes privacy information of the user, and the first indication information is used by the first server to determine whether to segment the target image. With this arrangement, since the terminal device can transmit to the first server the first indication information indicating whether the target image includes privacy information of the user, the first server divides the target image into a plurality of sub-images only in the case where the first indication information indicates that the target image includes privacy information of the user, and does not divide the target image in the case where the first indication information indicates that it does not. In this way, not only can leakage of the privacy information of the user be avoided, but unnecessary segmentation operations by the first server can also be avoided.
Fig. 13 is a schematic hardware diagram of a server according to an embodiment of the present invention. As shown in fig. 13, the server 600 may include: one or more processors 601 (processors beyond the first are illustrated with dashed boxes in fig. 13), a memory 602, a communication interface 603, and a bus 604. The one or more processors 601, the memory 602, and the communication interface 603 are coupled to one another and communicate with one another via the bus 604.
The processor 601 may be configured to control the communication interface 603 to receive the target image sent by the terminal device through the bus 604, divide the target image into M sub-images according to the first preset rule, and control the communication interface 603 to send the M sub-images to the second server through the bus 604. The target image is an image acquired by the terminal equipment through screen recognition, and M is an integer greater than 1.
The embodiment of the invention provides a server, which can receive a target image sent by a terminal device, divide the target image into M sub-images according to a first preset rule, and then send the M sub-images to a second server. The target image is an image acquired by the terminal device through screen recognition, and M is an integer greater than 1. With this scheme, the first server can divide the target image acquired by the terminal device through screen recognition into a plurality of sub-images before sending them to the second server, so that the second server cannot acquire the complete target image. As a result, the second server can only recognize the plurality of sub-images separately to obtain a plurality of pieces of text information, and cannot obtain the complete text information in the target image. Thus, the privacy information of the user in the target image can be prevented from being leaked, and the security of the privacy information of the user is ensured.
It can be appreciated that, in the embodiment of the present invention, the processor 601 may be the processing module 402 in the schematic structural diagram (e.g. fig. 10) of the server in the embodiment; the communication interface 603 may be the receiving module 401 and the transmitting module 403 in the schematic structural diagram (for example, fig. 10) of the server in the above embodiment.
The bus 604 may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The bus 604 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 13, but this does not mean that there is only one bus or only one type of bus. In addition, the server 600 may further include some other functional modules (e.g., a hard disk) not shown in fig. 13, which are not described herein.
Fig. 14 is a hardware schematic diagram of a terminal device implementing various embodiments of the present invention. As shown in fig. 14, the terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 14 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than shown, combine certain components, or have a different arrangement of components. In the embodiment of the invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The radio frequency unit 101 may be configured to send a target image to a first server, where the target image is an image acquired by a terminal device through screen recognition, and send first indication information to the first server, where the first indication information is used to indicate whether the target image includes privacy information of a user, and the first indication information is used by the first server to determine whether to partition the target image.
The embodiment of the invention provides terminal equipment, which can send a target image to a first server and send first indication information to the first server. The target image is an image acquired by the terminal equipment through screen recognition, the first indication information is used for indicating whether privacy information of a user is included in the target image, and the first indication information is used for determining whether the target image is segmented by the first server. With this arrangement, since the terminal device can transmit the first indication information indicating whether the privacy information of the user is included in the target image to the first server, the first server divides the target image into the plurality of sub-images only in the case where the first indication information indicates that the privacy information of the user is included in the target image, and does not divide the target image in the case where the first indication information indicates that the privacy information of the user is not included in the target image. Thus, not only can the leakage of the privacy information of the user be avoided, but also the unnecessary segmentation operation performed by the first server can be avoided.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be configured to receive and send signals during the sending and receiving of information or during a call. Specifically, the radio frequency unit 101 receives downlink data from a base station and sends it to the processor 110 for processing, and sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal device 100. The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used for receiving audio or video signals. The input unit 104 may include a graphics processor (graphics processing unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101, and then output.
The terminal device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to recognize the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). The sensor 105 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a user's touch operations on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. Further, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 110 to determine the type of touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 14, the touch panel 1071 and the display panel 1061 are two independent components for implementing the input and output functions of the terminal device, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions of the terminal device, which is not limited herein.
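The touch pipeline described in the paragraphs above (the detection device reports a raw signal, the touch controller converts it into touch-point coordinates, and the processor produces a visual output according to the touch event type) can be sketched as follows. The class and method names are assumptions for illustration; the patent describes hardware components, not an API.

```python
# Minimal sketch of the touch pipeline. All names are hypothetical.

class TouchController:
    def to_point(self, raw_signal):
        # raw_signal: (row, col) reported by the touch detection device;
        # the controller converts it into touch-point coordinates.
        row, col = raw_signal
        return {"x": col, "y": row}

class Processor:
    def handle(self, point, event_type="tap"):
        # Determine the visual output for the display panel by event type.
        return f"{event_type} at ({point['x']}, {point['y']})"

controller, processor = TouchController(), Processor()
point = controller.to_point((120, 45))
assert processor.handle(point) == "tap at (45, 120)"
```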
The interface unit 108 is an interface for connecting an external device to the terminal device 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and an external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the terminal device (such as audio data or a phonebook). In addition, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device. It connects the various parts of the entire terminal device using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor, which primarily handles the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The terminal device 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components. Optionally, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which will not be described herein.
Optionally, an embodiment of the present invention further provides a server, including the processor 110 shown in fig. 14, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
Optionally, an embodiment of the present invention further provides a terminal device, including the processor 110 shown in fig. 14, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform; they may of course also be implemented by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art may derive many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (8)

1. An image processing method applied to a first server, the method comprising:
receiving a target image sent by a terminal device, wherein the target image is an image acquired by the terminal device through screen recognition;
dividing the target image into M sub-images according to a first preset rule, wherein M is an integer greater than 1; and
sending the M sub-images to a second server;
wherein the first preset rule is any one of left-right segmentation, up-down segmentation, and grid segmentation;
wherein, before the dividing of the target image into M sub-images according to the first preset rule, the method further comprises:
receiving first indication information sent by the terminal device, wherein the first indication information is used for indicating whether privacy information of a user is included in the target image; and
the dividing of the target image into M sub-images according to the first preset rule comprises:
dividing the target image into the M sub-images according to the first preset rule in the case that the first indication information indicates that the target image includes privacy information of the user.
2. The method of claim 1, wherein after the sending of the M sub-images to the second server, the method further comprises:
receiving M pieces of text information sent by the second server, wherein the M pieces of text information are obtained by the second server by recognizing the M sub-images;
synthesizing the M pieces of text information into target text information according to a second preset rule corresponding to the first preset rule, wherein the target text information is the text information in the target image; and
sending the target text information to the terminal device.
3. An image processing method applied to a terminal device, the method comprising:
sending a target image to a first server, wherein the target image is an image acquired by the terminal device through screen recognition;
sending first indication information to the first server, wherein the first indication information is used for indicating whether privacy information of a user is included in the target image, and the first indication information is used by the first server to determine whether to segment the target image;
wherein the method further comprises:
receiving target text information sent by the first server, extracting specific nouns in the target text information, performing entity recognition on the specific nouns, and displaying the entity recognition result;
wherein, before the sending of the first indication information to the first server, the method further comprises:
displaying target prompt information, wherein the target prompt information is used for prompting the user to determine whether privacy information of the user is included in the target image;
receiving a target input of the user on the target prompt information; and
generating the first indication information according to the target input.
4. The method of claim 3, wherein the displaying of the target prompt information comprises:
displaying the target prompt information in the case that target content is received;
wherein the target content is a first input of the user or second indication information sent by the first server, the first input is used for triggering the terminal device to execute a screen recognition function, and the second indication information is used for indicating that the first server has received the target image.
5. A server, characterized in that the server comprises a receiving module, a processing module, and a sending module;
the receiving module is configured to receive a target image sent by a terminal device, wherein the target image is an image acquired by the terminal device through screen recognition;
the processing module is configured to divide the target image received by the receiving module into M sub-images according to a first preset rule, wherein M is an integer greater than 1;
the sending module is configured to send the M sub-images obtained by the processing module to a second server;
wherein the first preset rule is any one of left-right segmentation, up-down segmentation, and grid segmentation;
the receiving module is further configured to receive, before the processing module divides the target image into M sub-images according to the first preset rule, first indication information sent by the terminal device, wherein the first indication information is used for indicating whether privacy information of a user is included in the target image; and
the processing module is specifically configured to divide the target image into the M sub-images according to the first preset rule in the case that the first indication information indicates that the target image includes privacy information of the user.
6. A terminal device, characterized in that the terminal device comprises a sending module, a processing module, a display module, and a receiving module;
the sending module is configured to send a target image to a first server and to send first indication information to the first server, wherein the target image is an image acquired by the terminal device through screen recognition, the first indication information is used for indicating whether privacy information of a user is included in the target image, and the first indication information is used by the first server to determine whether to segment the target image;
the processing module is configured to receive target text information sent by the first server, extract specific nouns in the target text information, perform entity recognition on the specific nouns, and display the entity recognition result;
the display module is configured to display target prompt information, wherein the target prompt information is used for prompting the user to determine whether privacy information of the user is included in the target image;
the receiving module is configured to receive a target input of the user on the target prompt information; and
the processing module is further configured to generate the first indication information according to the target input received by the receiving module.
7. A server, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to claim 1 or 2.
8. A terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to claim 3 or 4.
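Taken together, claims 1 to 4 describe a flow in which the first server segments the target image only when the terminal's first indication information reports user privacy, dispatches the M sub-images for separate recognition, and recombines the M recognized texts under a second preset rule matching the first. A minimal Python sketch of that flow follows; all function names and data shapes are assumptions for illustration, not part of the disclosure.

```python
# Illustrative sketch of the claimed method (claims 1-4). The claims do
# not prescribe any particular API; these names are hypothetical.

def build_first_indication(user_says_private):
    # Terminal side (claim 3): turn the user's target input into the
    # first indication information.
    return {"contains_privacy": bool(user_says_private)}

def split_image(pixels, rule):
    # First preset rule (claim 1): left-right, up-down, or grid
    # segmentation of a 2D pixel grid into M sub-images.
    h, w = len(pixels), len(pixels[0])
    if rule == "left-right":
        return [[row[:w // 2] for row in pixels],
                [row[w // 2:] for row in pixels]]
    if rule == "up-down":
        return [pixels[:h // 2], pixels[h // 2:]]
    if rule == "grid":  # simplest case: a 2x2 grid, M = 4
        top, bottom = pixels[:h // 2], pixels[h // 2:]
        return [[row[:w // 2] for row in top],
                [row[w // 2:] for row in top],
                [row[:w // 2] for row in bottom],
                [row[w // 2:] for row in bottom]]
    raise ValueError("unknown rule: " + rule)

def first_server_process(image, indication, rule="left-right"):
    # Claim 1: segment only when the first indication information reports
    # that the image contains user privacy; otherwise keep the image whole.
    if indication.get("contains_privacy"):
        return split_image(image, rule)  # sub-images go to the second server
    return [image]

def merge_texts(texts, rule):
    # Second preset rule (claim 2): recombine the M recognized texts in
    # the order implied by the segmentation rule (plain concatenation here).
    return "".join(texts)

# Example: a 4x4 "image" flagged as private is split into two 4x2 halves.
image = [list(row) for row in ("abcd", "efgh", "ijkl", "mnop")]
subs = first_server_process(image, build_first_indication(True))
assert len(subs) == 2 and all(len(r) == 2 for r in subs[0])
assert merge_texts(["hel", "lo"], "left-right") == "hello"
```

In practice the second preset rule would restore the spatial reading order of the sub-images before concatenation; plain string joining stands in for that step here.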
CN201911033623.8A 2019-10-28 2019-10-28 Image processing method, server and terminal equipment Active CN110930410B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911033623.8A CN110930410B (en) 2019-10-28 2019-10-28 Image processing method, server and terminal equipment
PCT/CN2020/123343 WO2021083058A1 (en) 2019-10-28 2020-10-23 Image processing method, server, and terminal device

Publications (2)

Publication Number Publication Date
CN110930410A CN110930410A (en) 2020-03-27
CN110930410B true CN110930410B (en) 2023-06-23

Family

ID=69849618


Country Status (2)

Country Link
CN (1) CN110930410B (en)
WO (1) WO2021083058A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930410B (en) * 2019-10-28 2023-06-23 维沃移动通信有限公司 Image processing method, server and terminal equipment
CN118644780A (en) * 2020-05-18 2024-09-13 华为云计算技术有限公司 Object identification method, device, equipment and medium
CN111966316B (en) * 2020-08-25 2023-08-25 西安万像电子科技有限公司 Image data display method and device and image data display system
CN113996058B (en) * 2021-11-01 2023-07-25 腾讯科技(深圳)有限公司 Information processing method, apparatus, electronic device, and computer-readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105046133A (en) * 2015-07-21 2015-11-11 深圳市元征科技股份有限公司 Image display method and vehicle-mounted terminal
JP2016095592A (en) * 2014-11-12 2016-05-26 株式会社エンタシス Data entry system
CN109803110A (en) * 2019-01-29 2019-05-24 维沃移动通信有限公司 A kind of image processing method, terminal device and server
WO2019201146A1 (en) * 2018-04-20 2019-10-24 维沃移动通信有限公司 Expression image display method and terminal device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8625113B2 (en) * 2010-09-24 2014-01-07 Ricoh Company Ltd System and method for distributed optical character recognition processing
KR101648701B1 (en) * 2015-06-26 2016-08-17 렉스젠(주) Apparatus for recognizing vehicle number and method thereof
WO2017111501A1 (en) * 2015-12-24 2017-06-29 Samsung Electronics Co., Ltd. Privacy protection method in a terminal device and the terminal device
CN106295398A (en) * 2016-07-29 2017-01-04 维沃移动通信有限公司 The guard method of privacy information and mobile terminal thereof
CN109064373B (en) * 2018-07-17 2022-09-20 大连理工大学 Privacy protection method based on outsourcing image data entry
CN110278327B (en) * 2019-06-10 2021-01-08 维沃移动通信有限公司 Data processing method and mobile terminal
CN110930410B (en) * 2019-10-28 2023-06-23 维沃移动通信有限公司 Image processing method, server and terminal equipment


Non-Patent Citations (2)

Title
Adaptive Human–Machine Interactive Behavior Analysis With Wrist-Worn Devices for Password Inference; Chao Shen et al.; IEEE Transactions on Neural Networks and Learning Systems; full text *
A fast and robust image segmentation method based on bilateral grid and confidence color model; Gui Yan; Zeng Guang; Tang Wen; Journal of Computer-Aided Design & Computer Graphics (07); full text *

Also Published As

Publication number Publication date
WO2021083058A1 (en) 2021-05-06
CN110930410A (en) 2020-03-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant