CN111881901A - Screenshot content detection method and device and computer-readable storage medium - Google Patents

Screenshot content detection method and device and computer-readable storage medium

Info

Publication number
CN111881901A
Authority
CN
China
Prior art keywords
screenshot
area
image
region
identified
Prior art date
Legal status
Pending
Application number
CN202010739177.9A
Other languages
Chinese (zh)
Inventor
何胜
喻宁
柳阳
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010739177.9A
Publication of CN111881901A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9536 Search customisation based on social or collaborative filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0269 Targeted advertisements based on user profile or attribute
    • G06Q30/0271 Personalized advertisement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Marketing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Tourism & Hospitality (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deep learning algorithm and provides a screenshot content detection method, a screenshot content detection device, and a computer-readable storage medium. In the method, the screenshot to be identified is first segmented into regions, so that the text portion and the picture portion of the screenshot can be distinguished, which avoids the loss of recognition accuracy caused by recognizing the text and the pictures in the screenshot uniformly; the segmented text regions and image regions are then detected and recognized separately, so that the two different types of region can each be recognized accurately and in a targeted manner; and the region identification result is automatically judged against a preset audit standard, so that the screenshot can be audited efficiently and accurately. In addition, the invention also relates to blockchain technology: the region identification result obtained by recognizing the screenshot can be stored in a blockchain.

Description

Screenshot content detection method and device and computer-readable storage medium
Technical Field
The invention relates to the technical field of image recognition in artificial intelligence, and in particular to a screenshot content detection method, a screenshot content detection device, and a computer-readable storage medium.
Background
In today's society, where social networks are highly developed, sharing to the WeChat circle of friends (Moments) has become an important avenue for business promotion. A merchant can achieve publicity by having customers share designated content to their circle of friends and collect a certain number of likes (praise). After a customer completes the friend-circle share designated by the merchant, the customer feeds back the corresponding screenshot through the client in order to obtain the gift offered by the merchant, which produces a massive volume of user feedback screenshots. The existing ways of auditing these feedback screenshots are manual review and deep-learning-based detection and recognition. If the screenshots are audited manually, then because the number of feedback screenshots is in practice very large and many screenshots do not reach the merchant's preset standard, the workload of manual review is heavy and its accuracy is unstable, which leads to the technical problem that the accuracy of the existing screenshot auditing approach is low.
Disclosure of Invention
The main object of the invention is to provide a screenshot content detection method, a screenshot content detection device, and a computer-readable storage medium, so as to solve the technical problem that the accuracy of the existing screenshot auditing approach is low.
In order to achieve the above object, the present invention provides a screenshot content detection method, which includes the following steps:
receiving a screenshot to be identified sent by a client, and performing character image segmentation on the screenshot to be identified to generate a character area image set and an image area image set;
respectively detecting and identifying the character region picture set and the image region picture set based on a preset deep learning algorithm and a preset target graph estimation algorithm to obtain a region identification result after the character region picture set and the image region picture set are summarized;
and determining an auditing result corresponding to the screenshot to be identified based on a preset auditing standard and the area identification result.
Optionally, the text region picture set comprises a time text region and a content text region,
the steps of receiving the screenshot to be identified sent by the client, carrying out character image segmentation on the screenshot to be identified and generating a character area picture set and an image area picture set comprise:
receiving a screenshot to be recognized sent by a client, and determining a time character region in the screenshot to be recognized based on a preset multi-scale template matching algorithm;
and detecting the pixel value characteristics of the screenshot to be identified, and determining a content text area and an image area in the screenshot to be identified according to the pixel value characteristics to obtain a text area picture set and an image area picture set.
Optionally, the step of receiving the screenshot to be recognized sent by the client, and determining a time text region in the screenshot to be recognized based on a preset multi-scale template matching algorithm includes:
acquiring the image resolution of the screenshot to be identified, and zooming the size of a preset matching template according to the image resolution so as to adapt the matching template to the screenshot to be identified;
and performing local mask matching on the screenshot to be recognized by using the matching template, positioning a time bar coordinate in the screenshot to be recognized, and taking an area corresponding to the time bar coordinate as the time character area.
Optionally, the image region picture set comprises a content picture region and a like picture region,
the step of detecting the pixel value characteristics of the screenshot to be identified and determining the content text area and the image area in the screenshot to be identified according to the pixel value characteristics to obtain the text area picture set and the image area picture set comprises the following steps:
detecting and obtaining the pixel value distribution, the extreme value of the row-column pixels and the average difference value of the adjacent rows of pixels of the screenshot to be identified as the pixel value characteristics;
determining a content picture area in the screenshot to be identified according to the pixel value distribution;
determining a content character area in the screenshot to be identified according to the line and column pixel extreme value;
and determining a praise picture area in the screenshot to be identified according to the average difference value of the adjacent rows of pixels so as to obtain a text area picture set comprising the time text area and the content text area and an image area picture set comprising the content picture area and the praise picture area.
Optionally, the step of detecting and identifying the text region picture set and the image region picture set respectively based on a preset deep learning algorithm and a preset target graph estimation algorithm to obtain a region identification result after the text region picture set and the image region picture set are summarized includes:
identifying the character region picture set according to a preset character identification model based on a deep learning algorithm to obtain a corresponding character identification result;
according to a preset target graph estimation method, identifying the image area picture set to obtain an image identification result corresponding to the image area picture set;
and summarizing the character recognition result and the image recognition result to generate the area recognition result, wherein the area recognition result is stored in a block chain.
Optionally, the target graph estimation algorithm comprises a head photo frame size estimation algorithm, and the image recognition result comprises a praise statistical quantity;
the step of identifying the image area picture set according to the preset target graph estimation algorithm to obtain an image identification result corresponding to the image area picture set comprises the following steps:
positioning a head photo frame region in the image region picture set, and dividing the head photo frame region into rows according to the head photo frame size estimation algorithm to obtain a head photo frame set;
and counting the number of head photo frames in the head photo frame set using a preset head photo frame spacing, to serve as the praise statistical quantity.
Optionally, the audit result includes a first audit result and a second audit result.
The step of determining the auditing result corresponding to the screenshot to be identified based on the preset auditing standard and the area identification result comprises the following steps:
judging whether the area identification result meets a preset auditing standard or not;
if the area identification result meets the preset auditing standard, generating auditing passing information as a first auditing result;
and if the area identification result does not accord with the preset auditing standard, collecting error information in the auditing process, and generating auditing failure information based on the error information to serve as a second auditing result.
Optionally, after the step of determining the review result corresponding to the screenshot to be recognized based on the preset review standard and the area recognition result, the method further includes:
extracting label information in the area identification result, and acquiring user information of a client;
establishing a user representation model based on the tag information and the user information to select targeted push content for a client using the user representation model.
In addition, to achieve the above object, the present invention further provides a screenshot content detecting apparatus, including:
the character image segmentation module is used for receiving the screenshot to be identified sent by the client, segmenting the screenshot to be identified into character images and generating a character area image set and an image area image set;
the region detection and identification module is used for respectively detecting and identifying the character region picture set and the image region picture set based on a preset deep learning algorithm and a preset target graph estimation algorithm to obtain a region identification result after the character region picture set and the image region picture set are aggregated;
and the identification result auditing module is used for determining an auditing result corresponding to the screenshot to be identified based on a preset auditing standard and the area identification result.
Optionally, the text region picture set comprises a time text region and a content text region,
the character image segmentation module comprises:
the template matching unit is used for receiving the screenshot to be recognized sent by the client and determining a time character area in the screenshot to be recognized based on a preset multi-scale template matching algorithm;
and the feature identification unit is used for detecting the pixel value features of the screenshot to be identified and determining the content text area and the image area in the screenshot to be identified according to the pixel value features so as to obtain the text area picture set and the image area picture set.
Optionally, the template matching unit is further configured to:
acquiring the image resolution of the screenshot to be identified, and zooming the size of a preset matching template according to the image resolution so as to adapt the matching template to the screenshot to be identified;
and performing local mask matching on the screenshot to be recognized by using the matching template, positioning a time bar coordinate in the screenshot to be recognized, and taking an area corresponding to the time bar coordinate as the time character area.
Optionally, the image region picture set comprises a content picture region and a like picture region,
the feature identification unit is further configured to:
detecting and obtaining the pixel value distribution, the extreme value of the row-column pixels and the average difference value of the adjacent rows of pixels of the screenshot to be identified as the pixel value characteristics;
determining a content picture area in the screenshot to be identified according to the pixel value distribution;
determining a content character area in the screenshot to be identified according to the line and column pixel extreme value;
and determining a praise picture area in the screenshot to be identified according to the average difference value of the adjacent rows of pixels so as to obtain a text area picture set comprising the time text area and the content text area and an image area picture set comprising the content picture area and the praise picture area.
Optionally, the area detection and identification module includes:
the character recognition unit is used for recognizing the character region picture set according to a preset character recognition model based on a deep learning algorithm so as to obtain a corresponding character recognition result;
the image identification unit is used for identifying the image area picture set according to a preset target graph estimation method so as to obtain an image identification result corresponding to the image area picture set;
and the result summarizing unit is used for summarizing the character recognition result and the image recognition result to generate the area recognition result, wherein the area recognition result is stored in a block chain.
Optionally, the target graph estimation algorithm comprises a head photo frame size estimation algorithm, and the image recognition result comprises a praise statistical quantity,
the image recognition unit is further configured to:
positioning a head photo frame region in the image region picture set, and dividing the head photo frame region into rows according to the head photo frame size estimation algorithm to obtain a head photo frame set;
and counting the number of head photo frames in the head photo frame set using a preset head photo frame spacing, to serve as the praise statistical quantity.
Optionally, the audit result comprises a first audit result and a second audit result,
the identification result auditing module comprises:
the standard judging unit is used for judging whether the area identification result meets a preset auditing standard or not;
the first auditing unit is used for generating auditing passing information as a first auditing result if the area identification result meets a preset auditing standard;
and the second auditing unit is used for collecting error information in the auditing process if the area identification result does not accord with the preset auditing standard, and generating auditing failure information based on the error information to serve as a second auditing result.
Optionally, the screenshot content detection apparatus further includes:
the tag extraction unit is used for extracting tag information in the area identification result and acquiring user information of the client;
and the directional pushing unit is used for establishing a user portrait model based on the label information and the user information so as to select directional pushing content for the client by using the user portrait model.
In addition, in order to achieve the above object, the present invention further provides an electronic device, which includes a processor, a memory, and a screenshot content detection program stored on the memory and executable by the processor, wherein when the screenshot content detection program is executed by the processor, the steps of the screenshot content detection method as described above are implemented.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, which stores a screenshot content detection program, wherein the screenshot content detection program, when executed by a processor, implements the steps of the screenshot content detection method as described above.
The invention provides a screenshot content detection method, a screenshot content detection device, and a computer-readable storage medium. In the screenshot content detection method, the screenshot to be identified is segmented into regions, so that the text portion and the image portion of the screenshot can be distinguished, which avoids the loss of recognition accuracy caused by recognizing the text and the images in the screenshot uniformly; the segmented text regions and image regions are detected and recognized separately, so that the two different types of region can each be recognized accurately and in a targeted manner; and the region identification result is automatically judged against a preset audit standard, so that the screenshot can be audited efficiently and accurately, solving the technical problem that the accuracy of the existing screenshot auditing approach is low.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a screenshot content detection method according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of functional modules of the screenshot content detection apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The screenshot content detection method of the invention is mainly applied to an electronic device, which may be any device with display and processing functions, such as a PC (personal computer), a portable computer, or a mobile terminal.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention. In an embodiment of the present invention, the electronic device may include a processor 1001 (e.g., a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used for realizing connection communication among the components; the user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface); the memory 1005 may be a high-speed RAM memory, or may be a non-volatile memory (e.g., a magnetic disk memory), and optionally, the memory 1005 may be a storage device independent of the processor 1001.
Those skilled in the art will appreciate that the hardware configuration shown in fig. 1 does not constitute a limitation of the electronic device, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
With continued reference to fig. 1, the memory 1005 of fig. 1, which is one type of computer-readable storage medium, may include an operating system, a network communication module, and a screenshot content detection program.
In fig. 1, the network communication module is mainly used for connecting to a server and performing data communication with the server; the processor 1001 may call the screenshot content detection program stored in the memory 1005, and execute the screenshot content detection method provided by the embodiment of the present invention.
Based on the hardware structure, the invention provides various embodiments of the screenshot content detection method.
In today's society, where social networks are highly developed, sharing to the WeChat circle of friends (Moments) has become an important avenue for business promotion. A merchant can achieve publicity by having customers share designated content to their circle of friends and collect a certain number of likes (praise). After a customer completes the friend-circle share designated by the merchant, the customer feeds back the corresponding screenshot through the client in order to obtain the gift offered by the merchant, which produces a massive volume of user feedback screenshots. The existing ways of auditing these feedback screenshots are manual review and deep-learning-based detection and recognition. If the screenshots are audited manually, then because the number of feedback screenshots is in practice very large and many screenshots do not reach the merchant's preset standard, the workload of manual review is heavy and its accuracy is unstable, which leads to the technical problem that the accuracy of the existing screenshot auditing approach is low.
To solve the above problem, the invention provides a screenshot content detection method: the screenshot to be identified is first segmented into regions, so that the text portion and the image portion of the screenshot can be distinguished, which avoids the loss of recognition accuracy caused by recognizing the text and the images in the screenshot uniformly; the segmented text regions and image regions are detected and recognized separately, so that the two different types of region can each be recognized accurately and in a targeted manner; and the region identification result is automatically judged against a preset audit standard, so that the screenshot can be audited efficiently and accurately, solving the technical problem that the accuracy of the existing screenshot auditing approach is low.
Referring to fig. 2, fig. 2 is a flowchart illustrating a screenshot content detection method according to a first embodiment of the present invention.
A first embodiment of the present invention provides a screenshot content detection method, including the steps of:
step S10, receiving a screenshot to be identified sent by a client, and performing character image segmentation on the screenshot to be identified to generate a character area image set and an image area image set;
in this embodiment, the screenshot to be recognized is a screenshot sent by the user to the platform through the client, and specifically may be a screenshot containing text and image content, such as a screenshot of a WeChat friend circle or a screenshot of other web pages. The client can be specifically a mobile phone, a tablet, a computer and other terminal devices. The text region picture set may contain one or more text region pictures containing only text content, and similarly, the image region picture set may contain one or more image region pictures containing only image content. For example, if a current user takes a screenshot of a WeChat friend circle on a mobile phone and sends the screenshot to a screenshot content detection system, the system performs text and image segmentation on the screenshot by adopting a template matching mode and the like when receiving the screenshot of the WeChat friend circle to be identified sent by the current user, so as to segment a text area and an image area in the screenshot, and generate a text area picture set only containing text content and an image area picture set only containing image content.
Step S20, based on a preset deep learning algorithm and a preset target graph estimation algorithm, respectively detecting and identifying the character region picture set and the image region picture set to obtain a region identification result after the character region picture set and the image region picture set are aggregated;
in this embodiment, the preset deep learning algorithm may be a convolutional neural network, a cyclic neural network, or the like. The preset pattern estimation algorithm is an algorithm for positioning some specific patterns in the screenshot, such as a head frame, a time bar and the like in the WeChat friend circle screenshot. The region identification result is a final identification result of a type of identification result of the character region picture set and a type of identification result of the image region picture set, namely an identification result corresponding to the whole screenshot to be identified. Specifically, the system identifies a character area picture set of the screen shot of the WeChat friend circle through a character identification model based on deep learning, identifies an image area picture set of the screen shot of the WeChat friend circle through a target graph estimation algorithm, respectively obtains character identification results corresponding to the character area picture set, and summarizes the character identification results and the image identification results corresponding to the image area picture set to obtain a final identification result corresponding to the whole screen shot of the WeChat friend circle, namely the area identification result.
And step S30, determining an auditing result corresponding to the screenshot to be identified based on a preset auditing standard and the region identification result.
In this embodiment, the preset audit standard is the standard for judging whether the recognition result corresponding to the screenshot meets the requirements of the initiator of the screenshot activity. For a WeChat friend-circle screenshot, for example, the corresponding audit criteria may be that the number of likes (praise) exceeds a preset threshold, that the image content in the screenshot is related to the activity, that the time information in the screenshot falls within the activity's validity period, and so on. The audit result is the result obtained after judging the region identification result against the audit standard, and generally falls into two categories: either the audit passes, meaning the screenshot meets the requirements of the activity initiator, or the audit fails, meaning the screenshot does not meet those requirements. Specifically, the system judges the recognition result currently generated for the WeChat screenshot against the audit standard supplied in advance by the activity initiator and determines whether the recognition result meets that standard. In addition, after the audit results are obtained, the system can collect them and analyze the audit results of different clients separately to derive the user characteristics of each client user, so that corresponding push strategies can be formulated for users with different characteristics.
In this embodiment, a screenshot to be recognized sent by a client is received and segmented into text and images to generate a text region picture set and an image region picture set; the text region picture set and the image region picture set are detected and recognized respectively, based on a preset deep learning algorithm and a preset target graph estimation algorithm, to obtain a region identification result that summarizes the two; and the audit result corresponding to the screenshot to be identified is determined based on a preset audit standard and the region identification result. In this way, the screenshot to be identified is first segmented into regions, so that the text portion and the picture portion of the screenshot can be distinguished, which avoids the loss of recognition accuracy caused by recognizing the text and the pictures uniformly; the segmented text regions and image regions are detected and recognized separately, so that the two different types of region can each be recognized accurately and in a targeted manner; and the region identification result is automatically judged against the preset audit standard, so that the screenshot can be audited efficiently and accurately, solving the technical problem that the accuracy of the existing screenshot auditing approach is low.
Further, based on the first embodiment shown in fig. 2, a second embodiment of the screenshot content detection method of the present invention is provided. In this embodiment, the text region picture set includes a time text region and a content text region, and step S10 includes:
receiving a screenshot to be recognized sent by a client, and determining a time character region in the screenshot to be recognized based on a preset multi-scale template matching algorithm;
in this embodiment, the preset multi-scale matching algorithm is preferably a multi-scale matching algorithm to be masked. The time text area refers to an area showing time information in the screenshot to be recognized.
Further, the receiving a screenshot to be recognized sent by a client, and determining a time text region in the screenshot to be recognized based on a preset multi-scale template matching algorithm includes:
acquiring the image resolution of the screenshot to be identified, and zooming the size of a preset matching template according to the image resolution so as to adapt the matching template to the screenshot to be identified;
and performing local mask matching on the screenshot to be recognized by using the matching template, positioning a time bar coordinate in the screenshot to be recognized, and taking an area corresponding to the time bar coordinate as the time character area.
In this embodiment, the screenshot to be recognized is a WeChat friend-circle screenshot, and the time text area is the time bar in that screenshot. The friend-circle screenshot may be a screenshot taken in the normal mode or in the album mode. The system locates the time-bar coordinates of the friend-circle screenshot through template matching based on OpenCV (a cross-platform computer vision library), using squared-difference matching: R(x, y) = Σ_{x',y'} (T(x', y') - I(x + x', y + y'))^2, where T denotes the template image, I denotes the original image, (x', y') are coordinates within the template's pixel matrix, and (x, y) is the position in the original image's pixel matrix at which the template is evaluated. Because different mobile phones have different screen sizes and therefore produce screenshots of different resolutions, on one hand the picture height is normalized so that the resolutions are as consistent as possible, and on the other hand the template is scaled by factors of [0.8, 0.9, 1.1, 1.2] so that it can match pictures of different resolutions. During matching, a local mask matching method is used: since the key position of the time bar is on the right side of the picture, matching is restricted to the right-hand horizontal quarter of the full picture, so the matching computation is only about 1/4 of that for the whole picture. If the system locates only one time bar in the friend-circle screenshot, that location is taken as the time bar's position; if multiple time bars are located, the business logic can take the time bar near 2/3 of the full picture height as the target time bar. The time bar then determines how the friend-circle screenshot is divided into its upper and lower parts.
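For illustration, the following is a minimal Python/OpenCV sketch of the multi-scale, locally masked squared-difference matching described above. The template file name, the inclusion of a 1.0 scale, and the exact right-quarter mask are assumptions made for illustration; the patent does not publish its template or tuning.

```python
import cv2

def locate_time_bar(screenshot_path, template_path, scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
    img = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    # Local mask: the time bar sits near the right edge, so search only the
    # rightmost quarter of the screenshot (roughly 1/4 of the matching cost).
    h, w = img.shape
    x_offset = w * 3 // 4
    search_region = img[:, x_offset:]

    best = None  # (score, x, y, scale); lower squared-difference score is better
    for s in scales:
        t = cv2.resize(template, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        th, tw = t.shape
        if th > search_region.shape[0] or tw > search_region.shape[1]:
            continue
        res = cv2.matchTemplate(search_region, t, cv2.TM_SQDIFF_NORMED)
        min_val, _, min_loc, _ = cv2.minMaxLoc(res)
        if best is None or min_val < best[0]:
            # Shift the x coordinate back into the full-screenshot frame.
            best = (min_val, min_loc[0] + x_offset, min_loc[1], s)
    return best
```

If several candidate positions survive, the business-logic rule above (prefer the match nearest 2/3 of the image height) can be applied to the returned coordinates.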
And detecting the pixel value characteristics of the screenshot to be identified, and determining a content text area and an image area in the screenshot to be identified according to the pixel value characteristics to obtain a text area picture set and an image area picture set.
Further, the step of detecting the pixel value characteristic of the screenshot to be identified, and determining the content text area and the image area in the screenshot to be identified according to the pixel value characteristic to obtain the text area picture set and the image area picture set comprises:
detecting and obtaining the pixel value distribution, the extreme value of the row-column pixels and the average difference value of the adjacent rows of pixels of the screenshot to be identified as the pixel value characteristics;
determining a content picture area in the screenshot to be identified according to the pixel value distribution;
determining a content character area in the screenshot to be identified according to the line and column pixel extreme value;
and determining a praise picture area in the screenshot to be identified according to the average difference value of the adjacent rows of pixels so as to obtain a text area picture set comprising the time text area and the content text area and an image area picture set comprising the content picture area and the praise picture area.
In this embodiment, the pixel value features include: the pixel value distribution of the picture, the difference between the maximum and minimum pixel values of a row or column, and the average difference between pixels of adjacent rows. Taking a WeChat friend-circle screenshot as the screenshot to be identified as an example: because the content image area of the screenshot has a wide pixel-value distribution, the system can detect the pixel value features of the screenshot and locate the boundary coordinates of the content image area from that wide distribution; because the content text area has a large pixel difference between the black characters and the white background, the system can locate the boundary coordinates of the content text area in the regions where such a large difference exists by reading the pixel values of the screenshot and computing the differences between them; and because the praise region of the friend-circle screenshot is laid out on a gray background panel, the system can locate its boundary coordinates by detecting the average difference between pixels of adjacent rows. By detecting these pixel value features, the system can locate the boundary coordinates of the content text area, the content picture area and the praise picture area, determine those regions accordingly, and, combined with the time text area, obtain a text region picture set containing the time text area and the content text area and an image region picture set containing the content picture area and the praise picture area.
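As a rough illustration of the three pixel-value features described above, the following Python sketch computes per-row statistics on a grayscale screenshot. The file name and the thresholds are assumptions chosen purely for illustration and would need tuning on real friend-circle screenshots.

```python
import cv2
import numpy as np

def row_features(gray):
    # Per-row statistics used to separate picture, text and praise-panel regions.
    row_std = gray.std(axis=1)  # wide pixel distribution -> picture content
    row_range = gray.max(axis=1).astype(int) - gray.min(axis=1).astype(int)  # black-on-white text
    adjacent_diff = np.abs(np.diff(gray.mean(axis=1)))  # jumps at the gray praise-panel border
    return row_std, row_range, adjacent_diff

gray = cv2.imread("moments_screenshot.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
assert gray is not None, "replace the placeholder path with an actual screenshot"

row_std, row_range, adjacent_diff = row_features(gray)

# Illustrative (assumed) thresholds: rows with a large spread are candidate picture rows,
# rows with a large max-min range but a small spread are candidate text rows, and large
# adjacent-row jumps mark candidate boundaries of the gray praise panel.
picture_rows = np.where(row_std > 60)[0]
text_rows = np.where((row_range > 120) & (row_std <= 60))[0]
panel_edges = np.where(adjacent_diff > 10)[0]
```

Contiguous runs of these candidate rows give the boundary coordinates that the embodiment describes.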
Further, the size of the matching template is adjusted according to the actual image resolution, so that the matching template matches the screenshot to be recognized; by performing local mask matching on the screenshot, the time text area in the screenshot can be located more efficiently; and by detecting several pixel value features in the screenshot, the content picture area, the content text area and the praise picture area in the screenshot can be located from those features.
Further, based on the first embodiment shown in fig. 2, a third embodiment of the screenshot content detection method of the present invention is provided. In this embodiment, step S20 includes:
identifying the character region picture set according to a preset character identification model based on a deep learning algorithm to obtain a corresponding character identification result;
in this embodiment, a time text region in a text region picture set and whether or not there are some human visible icons are taken as examples. The system detects whether partial visible icons exist in the friend circle or not in a time region with concentrated regional pictures through template matching, and meanwhile, the time bar region is input into a preset character recognition model which is subjected to deep learning training to obtain time information, namely the sending time of the friend circle.
According to a preset target graph estimation method, identifying the image area picture set to obtain an image identification result corresponding to the image area picture set;
further, the step of identifying the image region picture set according to a preset target graph estimation algorithm to obtain an image identification result corresponding to the image region picture set includes:
positioning a head photo frame region in the image region picture set, and dividing the head photo frame region according to lines according to a head photo frame size estimation algorithm to obtain a head photo frame set;
and counting the number of the head photo frames in the head photo frame set by utilizing a preset head photo frame interval to serve as the praise counting number.
In this embodiment, the target graph estimation algorithm is taken to be the head photo frame (avatar frame) size estimation algorithm, and the praise statistics are obtained by counting avatars. This applies to the case where the name-character color statistics of the praise region show a low proportion of character color against the gray background, i.e., where the screenshot to be recognized is a WeChat friend-circle screenshot taken in the normal screenshot mode. Because avatars are random and personalized, a target detection method cannot be used, and searching for rectangular frames fails when part of an avatar is gray and blends into the background. The system therefore adopts the head photo frame size estimation method: the avatar region is split into rows, the splitting condition being the gray-white gaps between avatar rows, and since avatars are square, the height of each avatar row also gives the avatar width. With the avatar size known, the avatar spacing can be taken as an empirical value (about 1/10 of the avatar size), and the number N of avatars that gave a like is then calculated from the length of each avatar row as:
N = Σ_{i=1}^{n} ⌈ L_i / (W + S) ⌉
where n denotes the total number of avatar rows, L_i denotes the length of the i-th avatar row, W is the avatar width, S is the avatar spacing, and ⌈·⌉ denotes rounding up.
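The row-wise counting formula above translates directly into code. The following Python sketch assumes the avatar row lengths, avatar width and spacing have already been measured by the row-splitting step; the example numbers are invented for illustration.

```python
import math

def count_like_avatars(row_lengths, avatar_width, spacing=None):
    # The embodiment takes the spacing as an empirical value of about 1/10 of the avatar size.
    if spacing is None:
        spacing = avatar_width / 10.0
    return sum(math.ceil(length / (avatar_width + spacing)) for length in row_lengths)

# Example: rows of 330, 330 and 110 px with 100 px avatars -> 3 + 3 + 1 = 7 likes.
print(count_like_avatars([330, 330, 110], avatar_width=100))
```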
In addition, for the complementary case indicated by the name-character color statistics of the praise region (a higher proportion of character color against the gray background), i.e., where the screenshot to be identified is a WeChat friend-circle screenshot taken in the album mode, the system can input the name portion of the praise region into the character detection model, input the resulting character segmentation result into the character recognition model to obtain the actual text information, and count the commas in that text to serve as the praise statistical quantity.
And summarizing the character recognition result and the image recognition result to generate the area recognition result, wherein the area recognition result is stored in a block chain.
In this embodiment, after obtaining the text recognition result of the text region picture set and the image recognition result of the image region picture set, the system summarizes the two to obtain the final region identification result. For example, for the WeChat friend-circle screenshot, the system obtains a text recognition result indicating that the post was published on 15 July 2020 and is partially visible, and an image recognition result indicating that the number of like avatar frames is 10. The system aggregates this information as the region identification result of the friend-circle screenshot. It should be emphasized that, to further ensure the privacy and security of the region identification result, the region identification result may also be stored in a node of a blockchain.
Further, in this embodiment, the audit result includes a first audit result and a second audit result, and the step S30 includes:
judging whether the area identification result meets a preset auditing standard or not;
if the area identification result meets the preset auditing standard, generating auditing passing information as a first auditing result;
and if the area identification result does not accord with the preset auditing standard, collecting error information in the auditing process, and generating auditing failure information based on the error information to serve as a second auditing result.
In this embodiment, the system's audit of the region identification result generally yields one of two results: a first audit result indicating that the audit passed, or a second audit result indicating that the audit failed. It should be noted that, for a failed second audit result, the system needs to collect error information about which conditions were not met during the standard judgment and summarize that information into the second audit result, so that the specific reasons for the failure can be analyzed.
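As an illustration of how a region identification result might be judged against such an audit standard, the following Python sketch checks a like-count threshold, a required keyword and a validity window, and collects error information for the second audit result. The field names and criteria values are assumptions, not the patent's own data model.

```python
from datetime import datetime

def audit(region_result, standard):
    errors = []
    if region_result["like_count"] < standard["min_likes"]:
        errors.append("like count below threshold")
    if standard["required_keyword"] not in region_result.get("content_text", ""):
        errors.append("required promotion content not found")
    if not (standard["start"] <= region_result["share_time"] <= standard["end"]):
        errors.append("share time outside the activity period")
    if errors:
        return {"passed": False, "errors": errors}  # second audit result
    return {"passed": True}                         # first audit result

standard = {
    "min_likes": 10,
    "required_keyword": "促销活动",  # assumed promotion keyword
    "start": datetime(2020, 7, 1),
    "end": datetime(2020, 7, 31),
}
result = audit(
    {"like_count": 12, "content_text": "参加促销活动赢好礼", "share_time": datetime(2020, 7, 15)},
    standard,
)
print(result)
```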
Further, in this embodiment, after the step of determining the review result corresponding to the screenshot to be recognized based on the preset review standard and the area recognition result, the method further includes:
extracting label information in the area identification result, and acquiring user information of a client;
establishing a user representation model based on the tag information and the user information to select targeted push content for a client using the user representation model.
In this embodiment, the tag information may be location information, activity type information and the like, and the user information may be gender information, age information and the like. The system may use the friend-circle screenshots fed back by users, together with the corresponding user information, as a training data set, and learn a user portrait model for prediction from the labeled training data using supervised learning methods such as machine-learning classification and regression algorithms (Bayesian methods, decision trees, logistic regression, support vector machines, etc.). The user portrait model can be constructed in three steps: target analysis, architecture construction, and portrait building.
First, target analysis: all user-related data are divided into static information data and dynamic information data. Static information data is relatively stable information about the user, such as demographic and business attributes; dynamic information data is constantly changing behavioral information and is the key data to obtain and analyze.
Second, architecture construction: the current mainstream tag architecture is hierarchical, with the tags first divided into several broad classes and each class subdivided layer by layer. When constructing the tags, only the lowest-level tags need to be built, and these can be mapped to the two levels above them. The lowest-level tags are generally used for advertisement placement and precision marketing. Attention also needs to be paid to tag granularity: if the granularity is too coarse the tags lose discrimination, and if it is too fine the tag system becomes too complex and not general. Fact tags are constructed first from the raw data; they can be obtained directly from the database (e.g., registration information) or through simple statistics, have definite practical meanings, and can be used as basic features for subsequent tag mining (for example, the number of product purchases can be used as an input feature for a user's shopping preferences). Constructing the fact tags is also a process of deepening the understanding of the data: once the statistics are done, the data processing is finished and the distribution of the data is understood to some degree, which prepares for the construction of higher-level tags. Model tags are the core of the tag system and the part of user profiling with the largest workload, and the core of most user tags is a model tag; constructing model tags requires machine learning and natural language processing techniques. Finally, advanced tags are constructed; they are obtained by statistical modeling on top of the fact tags and model tags, their structure is closely related to actual business indicators, and they can only be built after the basic tags are complete. The model used to construct an advanced tag may be simple data statistics or a complex machine-learning model. According to the constructed tag architecture, the tags corresponding to the user's dynamic information data are extracted.
Third, portrait building: a user portrait is built from the tags corresponding to the user data. The user portrait model may be a user value model, a customer activity model, a user loyalty model, a user shopping-type model and so on, and the common modeling algorithm is generally a clustering algorithm, specifically the K-means clustering algorithm or a hierarchical clustering algorithm.
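As a rough sketch of the portrait-building step, the following Python example clusters a few users with K-means, as the paragraph above suggests. The feature columns (age, gender, activity-type tag, location tag) and the number of clusters are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: [age, gender (0/1), activity-type tag id, location tag id]
user_features = np.array([
    [22, 0, 1, 3],
    [27, 0, 1, 3],
    [45, 1, 2, 7],
    [51, 1, 2, 8],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(user_features)
print(kmeans.labels_)  # cluster id per user

# Each cluster can then be mapped to a push strategy, e.g. cosmetics/food promotions
# for one segment and other promotion types for another, as described below.
```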
Using the user portrait model, the merchant can predict which types of promotional information client users of a similar type are most likely to share, and then, in subsequent push plans, push promotional information of interest to those client users in a targeted way, so as to increase the probability that the users will share it. Specifically, if the trained user portrait model shows that female client users aged 18 to 30 tend to share promotional information of the cosmetics and food types, then those two types of promotional information can be pushed to new client users of the same type through the WeChat official account, increasing the probability that the promotional information will be shared by the users.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each data block containing the information of a batch of network transactions and used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, a character recognition model based on a deep learning algorithm and a preset target graph estimation method are used to recognize the text region picture set and the image region picture set respectively, which makes the screenshot recognition operation more targeted and the obtained recognition result more accurate; the head photo frames in the praise region are counted using the head photo frame size estimation algorithm, so that the number of likes reflected in the screenshot can be obtained accurately; the region identification result is audited against the preset audit standard, so that the audit result can be obtained automatically and efficiently; and by extracting the tag information in the region identification result and building the corresponding user portrait model, the collected user-related data can be fully utilized and push strategies suited to different users can be formulated.
In addition, as shown in fig. 3, to achieve the above object, the present invention further provides a screenshot content detecting apparatus, including:
the character image segmentation module 10 is configured to receive a screenshot to be identified sent by a client, perform character image segmentation on the screenshot to be identified, and generate a character area image set and an image area image set;
the region detection and identification module 20 is configured to perform detection and identification on the text region picture set and the image region picture set respectively based on a preset deep learning algorithm and a preset target graph estimation algorithm, so as to obtain a region identification result after the text region picture set and the image region picture set are summarized;
and the identification result auditing module 30 is configured to determine an audit result corresponding to the screenshot to be identified based on a preset audit standard and the region identification result.
Optionally, the text region picture set comprises a time text region and a content text region,
the text image segmentation module 10 includes:
the template matching unit is configured to receive the screenshot to be identified sent by the client and to determine the time text region in the screenshot to be identified based on a preset multi-scale template matching algorithm;
and the feature identification unit is configured to detect pixel value features of the screenshot to be identified and to determine the content text region and the image regions in the screenshot to be identified according to the pixel value features, so as to obtain the text region picture set and the image region picture set.
Optionally, the template matching unit is further configured to:
acquiring the image resolution of the screenshot to be identified, and scaling the size of a preset matching template according to the image resolution, so that the matching template is adapted to the screenshot to be identified;
and performing local mask matching on the screenshot to be identified with the matching template, locating the time bar coordinates in the screenshot to be identified, and taking the area corresponding to the time bar coordinates as the time text region.
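An illustrative sketch of this resolution-aware template matching step follows, using OpenCV; the template file name and the reference width the template was cut from are hypothetical assumptions.

```python
# Sketch of multi-scale template matching to locate the time bar in a screenshot.
# The template path and TEMPLATE_REFERENCE_WIDTH are assumptions for illustration.
import cv2

TEMPLATE_REFERENCE_WIDTH = 1080  # assumed screenshot width the template was cut from

def locate_time_bar(screenshot_path, template_path="time_bar_template.png"):
    screenshot = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    # Scale the template so that it matches the screenshot's resolution.
    scale = screenshot.shape[1] / TEMPLATE_REFERENCE_WIDTH
    template = cv2.resize(template, None, fx=scale, fy=scale)

    # Normalized local matching; the best-scoring location is taken as the time bar.
    result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    x, y = max_loc
    h, w = template.shape
    return (x, y, w, h), max_val  # bounding box of the time text region and its score
```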
Optionally, the image region picture set comprises a content picture region and a like picture region,
the feature identification unit is further configured to:
detecting the pixel value distribution, the row and column pixel extreme values, and the average difference between adjacent pixel rows of the screenshot to be identified as the pixel value features;
determining the content picture region in the screenshot to be identified according to the pixel value distribution;
determining the content text region in the screenshot to be identified according to the row and column pixel extreme values;
and determining the like picture region in the screenshot to be identified according to the average difference between adjacent pixel rows, so as to obtain a text region picture set comprising the time text region and the content text region, and an image region picture set comprising the content picture region and the like picture region.
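A rough sketch of how such row-wise pixel value features could be computed and mapped to regions is shown below; the thresholds and the feature-to-region rules are illustrative assumptions, not the disclosed method itself.

```python
# Sketch of deriving row-wise pixel value features and mapping them to regions.
# Thresholds and the feature-to-region rules are illustrative assumptions only.
import numpy as np

def row_pixel_features(gray):
    """gray: 2-D uint8 array (H x W) of the screenshot to be identified."""
    row_mean = gray.mean(axis=1)                      # pixel value distribution per row
    row_range = gray.max(axis=1) - gray.min(axis=1)   # spread of the row pixel extremes
    adj_row_diff = np.abs(np.diff(gray.astype(np.int16), axis=0)).mean(axis=1)
    return row_mean, row_range, adj_row_diff          # adjacent-row average difference

def classify_rows(gray, text_range=120, picture_mean=230, like_diff=4.0):
    row_mean, row_range, adj_row_diff = row_pixel_features(gray)
    labels = np.full(gray.shape[0], "background", dtype=object)
    labels[row_mean < picture_mean] = "content picture"    # dense, non-white rows
    labels[row_range > text_range] = "content text"        # high-contrast text rows
    labels[:-1][adj_row_diff > like_diff] = "like"          # busy rows of small avatars
    return labels
```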
Optionally, the region detection and identification module 20 includes:
the character recognition unit is configured to recognize the text region picture set with a preset character recognition model based on a deep learning algorithm, so as to obtain a corresponding character recognition result;
the image recognition unit is configured to identify the image region picture set according to a preset target graph estimation algorithm, so as to obtain an image recognition result corresponding to the image region picture set;
and the result summarizing unit is configured to aggregate the character recognition result and the image recognition result to generate the region identification result, wherein the region identification result is stored in a blockchain.
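An illustrative sketch of the recognition and summarizing step follows; Tesseract OCR (via pytesseract) is used here only as a stand-in for the preset deep-learning character recognition model, and the result layout is an assumption.

```python
# Sketch of recognizing the text region pictures and merging the outputs into one
# region identification result. Tesseract stands in for the deep-learning model.
import cv2
import pytesseract

def recognize_text_regions(text_region_paths, lang="chi_sim+eng"):
    results = {}
    for path in text_region_paths:
        image = cv2.imread(path)
        results[path] = pytesseract.image_to_string(image, lang=lang).strip()
    return results

def build_region_identification_result(text_results, image_results):
    # Aggregate both recognition outputs; this dict is the region identification
    # result that would then be persisted (e.g. to a blockchain node).
    return {"text": text_results, "image": image_results}
```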
Optionally, the target graph estimation algorithm comprises an avatar frame size estimation algorithm, and the image recognition result comprises a like count,
the image recognition unit is further configured to:
locating the avatar frame region in the image region picture set, and dividing the avatar frame region into rows according to the avatar frame size estimation algorithm to obtain a set of avatar frames;
and counting the number of avatar frames in the set using a preset avatar frame spacing, the count being used as the like count.
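A minimal sketch of this counting idea is given below, scanning the like picture region in avatar-frame-sized steps; the frame size, spacing, and background threshold are illustrative assumptions.

```python
# Sketch of estimating the like count by scanning the like picture region in
# avatar-frame-sized steps. Frame size, spacing, and threshold are assumptions.
import numpy as np

def count_like_avatars(like_region_gray, frame_size=96, spacing=12, bg_threshold=240):
    height, width = like_region_gray.shape
    step = frame_size + spacing
    count = 0
    for top in range(0, height - frame_size + 1, step):      # each row of avatar frames
        for left in range(0, width - frame_size + 1, step):   # each frame slot in the row
            tile = like_region_gray[top:top + frame_size, left:left + frame_size]
            if tile.mean() < bg_threshold:                     # non-white tile: an avatar
                count += 1
    return count

# Synthetic check: a white region with two dark "avatars" placed in frame slots.
region = np.full((108, 540), 255, dtype=np.uint8)
region[0:96, 0:96] = 60
region[0:96, 108:204] = 60
print(count_like_avatars(region))  # -> 2
```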
Optionally, the audit result comprises a first audit result and a second audit result,
the identification result auditing module 30 includes:
the standard judging unit is configured to judge whether the region identification result meets the preset audit standard;
the first auditing unit is configured to generate audit-passed information as the first audit result if the region identification result meets the preset audit standard;
and the second auditing unit is configured to collect error information from the audit process if the region identification result does not meet the preset audit standard, and to generate audit-failed information based on the error information as the second audit result.
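A minimal sketch of such an audit check is shown below; the concrete audit rules (required keyword, minimum like count) and the result format are illustrative assumptions.

```python
# Sketch of auditing a region identification result against a preset standard.
# The concrete rules (required keyword, minimum like count) are assumptions.
def audit_region_result(result, required_keyword="publicity", min_likes=10):
    errors = []
    text = " ".join(result.get("text", {}).values())
    if required_keyword not in text:
        errors.append(f"required keyword '{required_keyword}' not found in text regions")
    if result.get("image", {}).get("like_count", 0) < min_likes:
        errors.append(f"like count below the required minimum of {min_likes}")

    if not errors:
        return {"passed": True, "message": "audit passed"}   # first audit result
    return {"passed": False, "errors": errors}               # second audit result

example = {"text": {"content": "publicity campaign screenshot"},
           "image": {"like_count": 23}}
print(audit_region_result(example))
```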
Optionally, the screenshot content detection apparatus further includes:
the label extraction unit is configured to extract label information from the region identification result and to acquire user information of the client;
and the targeted push unit is configured to establish a user portrait model based on the label information and the user information, so as to select targeted push content for the client by using the user portrait model.
The invention also provides an electronic device.
The electronic device comprises a processor, a memory, and a screenshot content detection program stored in the memory and executable on the processor, wherein the screenshot content detection program, when executed by the processor, implements the steps of the screenshot content detection method described above.
The method implemented when the screenshot content detection program is executed may refer to each embodiment of the screenshot content detection method of the present invention, and details are not described here.
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer readable storage medium of the present invention stores a screenshot content detection program, wherein the screenshot content detection program, when executed by a processor, implements the steps of the screenshot content detection method as described above.
The method implemented when the screenshot content detection program is executed may refer to each embodiment of the screenshot content detection method of the present invention, and details are not described here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A screenshot content detection method is characterized by comprising the following steps:
receiving a screenshot to be identified sent by a client, and performing text image segmentation on the screenshot to be identified to generate a text region picture set and an image region picture set;
detecting and identifying the text region picture set and the image region picture set respectively based on a preset deep learning algorithm and a preset target graph estimation algorithm, to obtain a region identification result that aggregates the recognition outputs for the two picture sets;
and determining an audit result corresponding to the screenshot to be identified based on a preset audit standard and the region identification result.
2. The screenshot content detection method of claim 1, wherein the text region picture set comprises a time text region and a content text region,
and the step of receiving the screenshot to be identified sent by the client, performing text image segmentation on the screenshot to be identified, and generating the text region picture set and the image region picture set comprises:
receiving the screenshot to be identified sent by the client, and determining the time text region in the screenshot to be identified based on a preset multi-scale template matching algorithm;
and detecting pixel value features of the screenshot to be identified, and determining the content text region and the image regions in the screenshot to be identified according to the pixel value features, to obtain the text region picture set and the image region picture set.
3. The screenshot content detection method of claim 2, wherein the step of receiving the screenshot to be identified sent by the client and determining the time text region in the screenshot to be identified based on a preset multi-scale template matching algorithm comprises:
acquiring the image resolution of the screenshot to be identified, and scaling the size of a preset matching template according to the image resolution, so that the matching template is adapted to the screenshot to be identified;
and performing local mask matching on the screenshot to be identified with the matching template, locating the time bar coordinates in the screenshot to be identified, and taking the area corresponding to the time bar coordinates as the time text region.
4. The screenshot content detection method of claim 2, wherein the image region picture set comprises a content picture region and a like picture region,
and the step of detecting the pixel value features of the screenshot to be identified and determining the content text region and the image regions in the screenshot to be identified according to the pixel value features, to obtain the text region picture set and the image region picture set, comprises:
detecting the pixel value distribution, the row and column pixel extreme values, and the average difference between adjacent pixel rows of the screenshot to be identified as the pixel value features;
determining the content picture region in the screenshot to be identified according to the pixel value distribution;
determining the content text region in the screenshot to be identified according to the row and column pixel extreme values;
and determining the like picture region in the screenshot to be identified according to the average difference between adjacent pixel rows, so as to obtain a text region picture set comprising the time text region and the content text region, and an image region picture set comprising the content picture region and the like picture region.
5. The screenshot content detection method of claim 1, wherein the step of detecting and identifying the text region picture set and the image region picture set respectively based on a preset deep learning algorithm and a preset target graph estimation algorithm, to obtain the region identification result that aggregates the recognition outputs for the two picture sets, comprises:
recognizing the text region picture set with a preset character recognition model based on a deep learning algorithm, to obtain a corresponding character recognition result;
identifying the image region picture set according to a preset target graph estimation algorithm, to obtain an image recognition result corresponding to the image region picture set;
and aggregating the character recognition result and the image recognition result to generate the region identification result, wherein the region identification result is stored in a blockchain.
6. The screenshot content detection method of claim 5, wherein the target graph estimation algorithm comprises an avatar frame size estimation algorithm, and the image recognition result comprises a like count,
and the step of identifying the image region picture set according to the preset target graph estimation algorithm, to obtain the image recognition result corresponding to the image region picture set, comprises:
locating the avatar frame region in the image region picture set, and dividing the avatar frame region into rows according to the avatar frame size estimation algorithm to obtain a set of avatar frames;
and counting the number of avatar frames in the set using a preset avatar frame spacing, the count being used as the like count.
7. The screenshot content detection method of claim 1, wherein the audit result comprises a first audit result and a second audit result,
and the step of determining the audit result corresponding to the screenshot to be identified based on the preset audit standard and the region identification result comprises:
judging whether the region identification result meets the preset audit standard;
if the region identification result meets the preset audit standard, generating audit-passed information as the first audit result;
and if the region identification result does not meet the preset audit standard, collecting error information from the audit process, and generating audit-failed information based on the error information as the second audit result.
8. The screenshot content detection method according to any one of claims 1-7, wherein after the step of determining the audit result corresponding to the screenshot to be identified based on the preset audit standard and the region identification result, the method further comprises:
extracting label information from the region identification result, and acquiring user information of the client;
and establishing a user portrait model based on the label information and the user information, so as to select targeted push content for the client by using the user portrait model.
9. An electronic device comprising a processor, a memory, and a screenshot content detection program stored on the memory and executable by the processor, wherein the screenshot content detection program, when executed by the processor, implements the steps of the screenshot content detection method of any of claims 1-8.
10. A computer-readable storage medium, having stored thereon a screenshot content detection program, wherein the screenshot content detection program, when executed by a processor, performs the steps of a screenshot content detection method according to any one of claims 1 to 8.
CN202010739177.9A 2020-07-28 2020-07-28 Screenshot content detection method and device and computer-readable storage medium Pending CN111881901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010739177.9A CN111881901A (en) 2020-07-28 2020-07-28 Screenshot content detection method and device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010739177.9A CN111881901A (en) 2020-07-28 2020-07-28 Screenshot content detection method and device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111881901A true CN111881901A (en) 2020-11-03

Family

ID=73201845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010739177.9A Pending CN111881901A (en) 2020-07-28 2020-07-28 Screenshot content detection method and device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111881901A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103281474A (en) * 2013-05-02 2013-09-04 武汉大学 Image and text separation method for scanned image of multifunctional integrated printer
CN107832765A (en) * 2017-09-13 2018-03-23 百度在线网络技术(北京)有限公司 Picture recognition to including word content and picture material
KR101985612B1 (en) * 2018-01-16 2019-06-03 김학선 Method for manufacturing digital articles of paper-articles
CN110751500A (en) * 2019-09-06 2020-02-04 中国平安财产保险股份有限公司 Processing method and device for sharing pictures, computer equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989112A (en) * 2021-04-27 2021-06-18 北京世纪好未来教育科技有限公司 Online classroom content acquisition method and device
CN113569861A (en) * 2021-08-03 2021-10-29 天翼爱音乐文化科技有限公司 Mobile application illegal content scanning method, system, equipment and medium
CN113569861B (en) * 2021-08-03 2022-12-06 天翼爱音乐文化科技有限公司 Mobile application illegal content scanning method, system, equipment and medium
CN114937188A (en) * 2022-04-22 2022-08-23 北京智慧荣升科技有限公司 Information identification method, device, equipment and medium for sharing screenshot by user
CN114663878A (en) * 2022-05-25 2022-06-24 成都飞机工业(集团)有限责任公司 Finished product software version checking method, device, equipment and medium
CN114821567A (en) * 2022-06-23 2022-07-29 北京百炼智能科技有限公司 Praise number extraction method and device for social software screenshot
CN115098579A (en) * 2022-08-24 2022-09-23 中关村科学城城市大脑股份有限公司 Business data publishing method and device, electronic equipment and computer readable medium

Similar Documents

Publication Publication Date Title
CN111881901A (en) Screenshot content detection method and device and computer-readable storage medium
CN108665355B (en) Financial product recommendation method, apparatus, device and computer storage medium
US10936915B2 (en) Machine learning artificial intelligence system for identifying vehicles
CN107016387B (en) Method and device for identifying label
CN112329659B (en) Weak supervision semantic segmentation method based on vehicle image and related equipment thereof
WO2019089578A1 (en) Font identification from imagery
CN105809178A (en) Population analyzing method based on human face attribute and device
CN115002200B (en) Message pushing method, device, equipment and storage medium based on user portrait
CN113435335B (en) Microscopic expression recognition method and device, electronic equipment and storage medium
CN113761253A (en) Video tag determination method, device, equipment and storage medium
CN111222585B (en) Data processing method, device, equipment and medium
CN111738252B (en) Text line detection method, device and computer system in image
CN108665513B (en) Drawing method and device based on user behavior data
CN106778851A (en) Social networks forecasting system and its method based on Mobile Phone Forensics data
CN112417315A (en) User portrait generation method, device, equipment and medium based on website registration
CN112487284A (en) Bank customer portrait generation method, equipment, storage medium and device
CN108764232B (en) Label position obtaining method and device
CN111598600A (en) Multimedia information pushing method and system and terminal equipment
CN113762257A (en) Identification method and device for marks in makeup brand images
Anggoro et al. Classification of Solo Batik patterns using deep learning convolutional neural networks algorithm
CN110795995A (en) Data processing method, device and computer readable storage medium
Fernández et al. Implementation of a face recognition system as experimental practices in an artificial intelligence and pattern recognition course
CN113918769A (en) Method, device and equipment for marking key actions in video and storage medium
CN113806638A (en) Personalized recommendation method based on user portrait and related equipment
CN114283492B (en) Staff behavior-based work saturation analysis method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination