CN116389227A - Intelligent early warning system and method based on Internet of things - Google Patents

Intelligent early warning system and method based on Internet of things

Info

Publication number
CN116389227A
CN116389227A (application CN202310291448.2A)
Authority
CN
China
Prior art keywords
image
track
target
index
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310291448.2A
Other languages
Chinese (zh)
Other versions
CN116389227B (en)
Inventor
李涵
王瑾
王卫郑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jinhua Electronics Co ltd
Original Assignee
Jiangsu Jinhua Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jinhua Electronics Co ltd filed Critical Jiangsu Jinhua Electronics Co ltd
Priority to CN202310291448.2A priority Critical patent/CN116389227B/en
Publication of CN116389227A publication Critical patent/CN116389227A/en
Application granted granted Critical
Publication of CN116389227B publication Critical patent/CN116389227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 Management of faults, events, alarms or notifications
    • H04L41/0631 Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/029 Location-based management or tracking services
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A50/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE in human health protection, e.g. against extreme weather
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the technical field of intelligent early warning systems, and in particular to an intelligent early warning system and method based on the Internet of things. The system comprises a user data acquisition module, a target terminal determining module, an image data dividing module, a character extraction index analysis module, a character extraction condition analysis module and an intelligent transmission judgment module. The user data acquisition module is used for acquiring user images and user track data recorded by different display equipment terminals in the same local area network; the target terminal determining module is used for determining a target terminal based on target track analysis; the image data dividing module is used for dividing the image data into single-path images and multipath images; the character extraction index analysis module is used for analyzing the character extraction index based on the single-path images; the character extraction condition analysis module is used for analyzing the character extraction conditions of the multipath images; and the intelligent transmission early warning module is used for judging whether a multipath image generated in real time triggers an intelligent transmission early warning.

Description

Intelligent early warning system and method based on Internet of things
Technical Field
The invention relates to the technical field of intelligent early warning systems, in particular to an intelligent early warning system and method based on the Internet of things.
Background
With the progress of science and technology and the popularization of intelligent devices, people enjoy the convenience that devices such as computers and mobile phones bring to daily life. A mobile phone can be used for data operations such as quick processing of image data and text extraction from images. However, because different devices differ in attributes, structure and other characteristics, there are obvious differences in data processing such as text extraction; for example, extracting text with a mobile phone is more convenient and faster than extracting text with a computer. When the device the user is actually using is not a mobile phone but a computer, the user has to transmit the image from the computer to the mobile phone before performing the text extraction operation. This process increases the complexity of image text extraction, reduces text extraction efficiency, and makes the processing of image data insufficiently intelligent and convenient.
Disclosure of Invention
The invention aims to provide an intelligent early warning system and method based on the Internet of things, so as to solve the problems raised in the background art.
In order to solve the technical problems, the invention provides the following technical scheme: an intelligent early warning method based on the Internet of things comprises the following analysis steps:
step S1: acquiring user image data and user track data recorded by different display equipment terminals in the same local area network, wherein the user image data refers to image data recorded and generated by the display equipment terminals, and the user track data refers to the operation tracks performed by the user on the image data on the different devices; at least two display equipment terminals exist in the same local area network;
step S2: based on the user image data and the user track data, extracting from the user track data the track of the text extraction operation performed on an image as the target track, and determining the target terminal based on analysis of the target track;
step S3: the image data comprises single-path images and multipath images, wherein a single-path image refers to image data that corresponds to the target track and exists only on the target terminal, and a multipath image refers to image data recorded on a non-target terminal; analyzing the character extraction index based on the single-path images;
step S4: calculating, based on the character extraction index, the effective transmission index of the multipath images that meet the character extraction index, and analyzing the character extraction condition of the multipath images; judging, according to the character extraction condition, whether a multipath image generated in real time triggers an intelligent transmission early warning, wherein intelligent transmission means that the image data on a non-target terminal is transmitted from that terminal to the target terminal without the user actively transmitting it.
Further, step S2 includes the following analysis steps:
acquiring a target track recorded in user track data;
when the display equipment terminal corresponding to the target track is unique, the display equipment terminal is made to be the target terminal;
when the display equipment terminal corresponding to the target track is not unique, acquiring the track length hi of the image character extraction track corresponding to the i-th display equipment terminal, and taking the minimum of the m track lengths hi as the first track min[hi], wherein m represents the total number of target track types; different display equipment terminals naturally require different operation flows to extract characters from an image, so when the target track is not unique, the user's image character extraction has been recorded on several display equipment terminals; the minimum track length is analyzed because the track length reflects how convenient and simple the user operation is: the shorter the track, the more convenient the operation, and therefore the more likely the corresponding terminal is the device the user prefers for extracting text from images;
extracting the number of times N1 that the first track min[hi] is recorded in the monitoring period, and the total number of times N2 that the other m-1 target tracks are recorded in the monitoring period; calculating the coverage n of the first track using the formula n = (N1 - N2)/(N1 × (1 + N2)); if n is greater than or equal to the preset coverage threshold n0, outputting the display equipment terminal corresponding to the first track as the target terminal; and if n is smaller than the preset coverage threshold n0, selecting as the target terminal the display equipment terminal whose image character extraction track is recorded most often in the monitoring period. Coverage is analyzed because the actual recording counts truly reflect the user's habit of using a particular display equipment terminal for the text extraction operation.
Further, analyzing the text extraction index based on the single-path image comprises the following analysis steps:
acquiring the text area s1 in a single-path image and the area s0 of the image itself, wherein the text area refers to the image area formed by the minimum rectangle covering the text; calculating the average single-path character ratio r1 of the single-path images, r1 = (1/n) × Σ(s1/s0), wherein n represents the number of single-path images on the target terminal in the monitoring period;
marking the images on the target terminal in the monitoring period other than the single-path images as random images, acquiring the text area s2 of a random image and the area s0' of the image itself, and calculating the average random character ratio r2, r2 = (1/m) × Σ(s2/s0'), wherein m represents the number of random images on the target terminal;
calculating the average difference value r0 between the average single-path character ratio r1 and the average random character ratio r2, r0 = |r1 - r2|;
outputting as the text extraction index: the similarity between the real-time difference value of image data acquired in real time and the average difference value is greater than or equal to the similarity threshold; the real-time difference value refers to the absolute value of the difference between the character ratio of the image data and the average random character ratio r2, and the character ratio is the ratio of the text area of the real-time image to the area of the real-time image.
Further, analyzing the character extraction condition of the multipath image, and judging whether the multipath image generated in real time performs intelligent transmission early warning according to the character extraction condition, comprising the following steps:
acquiring images meeting the character extraction index in the multipath images as images to be inspected, and acquiring the number k1 of the images to be inspected, which are actively transmitted, and the number k2 of the images to be inspected, which are not actively transmitted;
calculating the effective transmission index f1, f1 = k2/k1; setting an effective transmission index threshold f0, and when the effective transmission index f1 is smaller than the effective transmission index threshold f0, outputting the character extraction condition as the first condition, wherein the first condition is the character extraction index; calculating the real-time difference value corresponding to a real-time multipath image, and judging whether the real-time multipath image meets the character extraction index; when the real-time multipath image meets the character extraction index, issuing an intelligent transmission early warning for the real-time multipath image;
when the effective transmission index is smaller than the effective transmission index threshold, the character extraction index analyzed from the actively transmitted multipath images is sufficient to judge that an image is a multipath image that needs to be transmitted to the target terminal for the character extraction operation; setting this single condition keeps the judgment effective and quick;
when the effective transmission index f1 is greater than or equal to the effective transmission index threshold f0, outputting the character extraction condition as the second condition; when a real-time multipath image meets the second condition, issuing an intelligent transmission early warning for the real-time multipath image.
Further, the analysis of the second condition includes the steps of:
acquiring the multipath images actively transmitted by the user as target images, extracting the user track data corresponding to the target images, and extracting, for the j-th type of user track data, the number pj of target images in which it appears and its track similarity fj with the multipath images not actively transmitted by the user; then using the formula:
f2j = a1 × (pj/p0) + a2 × (1/fj)
calculating the distinguishing index f2j of the j-th type of user track data, wherein p0 represents the total number of target images, and a1 and a2 represent reference coefficients with a1 + a2 = 1, a1 greater than zero and a2 greater than zero;
setting a distinguishing index threshold f0', and extracting the user track data whose distinguishing index f2j is greater than or equal to the threshold f0' as target tracks; generating a track sequence by arranging the target tracks from the largest to the smallest distinguishing index; the number of target images containing a track is analyzed to determine whether the track is characteristic of the user's operations before active transmission, and the similarity is analyzed to determine how different the active-transmission operation behavior is from the non-active-transmission behavior: the larger the difference, the more representative and characteristic the user track corresponding to active transmission, which improves the reliability of judging, during real-time monitoring, whether image data should be transmitted intelligently;
and the second condition is that the similarity between the user track data acquired in real time and any target track in the track sequence is greater than or equal to 80%.
An intelligent early warning system based on the Internet of things comprises a user data acquisition module, a target terminal determination module, an image data division module, a character extraction index analysis module, a character extraction condition analysis module and an intelligent transmission judgment module;
the user data acquisition module is used for acquiring user image data and user track data recorded by different display equipment terminals in the same local area network;
the target terminal determining module is used for determining a target terminal based on target track analysis;
the image data dividing module is used for dividing the image data into a single-path image and a multi-path image;
the character extraction index analysis module is used for analyzing character extraction indexes based on the single-path images;
the character extraction condition analysis module is used for analyzing character extraction conditions of the multipath images;
the intelligent transmission early warning module is used for judging whether the multipath image generated in real time carries out intelligent transmission early warning or not according to the character extraction conditions.
Further, the target terminal determining module comprises a target track acquiring unit, a terminal number analyzing unit, a coverage calculating unit and a target terminal analyzing unit;
the target track acquisition unit is used for extracting a track for performing text extraction operation on the image in the user track data as a target track;
the terminal number analysis unit is used for judging whether the display equipment terminal corresponding to the target track is unique;
the coverage calculating unit is used for calculating the coverage by utilizing the track length and the number of target track types when the display equipment terminals corresponding to the target tracks are not unique;
the target terminal analysis unit is used for marking the display equipment terminal corresponding to the target track as a target terminal when the display equipment terminal is unique, and determining the target terminal by analyzing the data of the coverage calculation unit when the display equipment terminal is not unique.
Further, the character extraction index analysis module comprises a character area acquisition unit, a character ratio calculation unit and a character extraction index determination unit;
the character area acquisition unit is used for acquiring the text area in a single-path image and the area of the image itself;
the character ratio calculating unit is used for calculating the average single-path character ratio, the average random character ratio and the character ratio of real-time image data based on the data in the character area acquisition unit;
the character extraction index determining unit is used for analyzing the character extraction index according to the data of the character ratio calculating unit.
Further, the character extraction condition analysis module comprises an image to be inspected determining unit, an effective transmission index calculating unit and a condition judging unit;
the image to be inspected determining unit is used for acquiring images which accord with the character extraction index in the multipath images as images to be inspected;
the effective transmission index calculation unit is used for calculating an effective transmission index based on data corresponding to the image to be examined; and setting an effective transmission index threshold;
the condition judging unit is used for determining the type of the character extraction condition based on the numerical relation between the effective transmission index and the effective transmission index threshold value, and outputting the character extraction condition as a first condition when the effective transmission index is smaller than the effective transmission index threshold value; and outputting a text extraction condition as a second condition when the effective transmission index is greater than or equal to the effective transmission index threshold.
Compared with the prior art, the invention has the following beneficial effects: by analyzing the user data of the different equipment terminals in the same local area network, the invention determines the target terminal that the user uses for character extraction, and determining the target terminal gives the subsequent intelligent transmission a transmission direction; after the target terminal is determined, the characteristic index and characteristic tracks of character extraction from images are analyzed from the user operation data and transmission data, so that an effective judgment basis is extracted, and when user data generated in real time is monitored, it is compared against this basis to decide whether the image data should be transmitted intelligently; in this way, when the device the user is currently using is inconvenient for text extraction, the system can transmit the image intelligently, which reduces the user's active transmission steps, lowers the complexity of text extraction, and improves the efficiency and convenience of extracting text from images.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a schematic structural diagram of an intelligent early warning system based on the internet of things.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides the following technical solutions: an intelligent early warning method based on the Internet of things comprises the following analysis steps:
step S1: acquiring user image data and user track data recorded by different display equipment terminals in the same local area network, wherein the user image data refers to image data recorded and generated by the display equipment terminals, and the user track data refers to the operation tracks performed by the user on the image data on the different devices; at least two display equipment terminals exist in the same local area network;
step S2: based on the user image data and the user track data, extracting from the user track data the track of the text extraction operation performed on an image as the target track, and determining the target terminal based on analysis of the target track;
step S2 comprises the following analysis steps:
acquiring a target track recorded in user track data;
when the display equipment terminal corresponding to the target track is unique, the display equipment terminal is made to be the target terminal;
when the display equipment terminal corresponding to the target track is not unique, acquiring the track length hi of the image character extraction track corresponding to the i-th display equipment terminal, and taking the minimum of the m track lengths hi as the first track min[hi], wherein m represents the total number of target track types; different display equipment terminals naturally require different operation flows to extract characters from an image, so when the target track is not unique, the user's image character extraction has been recorded on several display equipment terminals; the minimum track length is analyzed because the track length reflects how convenient and simple the user operation is: the shorter the track, the more convenient the operation, and therefore the more likely the corresponding terminal is the device the user prefers for extracting text from images;
the specific image character extraction track on a mobile phone is, for example: long-press the image → tap "Extract text";
the specific track on a computer is: click the picture-to-text function → click the add button → select the picture → select the conversion mode to extract the text;
extracting the number of times N1 that the first track min[hi] is recorded in the monitoring period, and the total number of times N2 that the other m-1 target tracks are recorded in the monitoring period; calculating the coverage n of the first track using the formula n = (N1 - N2)/(N1 × (1 + N2)); if n is greater than or equal to the preset coverage threshold n0, outputting the display equipment terminal corresponding to the first track as the target terminal; and if n is smaller than the preset coverage threshold n0, selecting as the target terminal the display equipment terminal whose image character extraction track is recorded most often in the monitoring period. Coverage is analyzed because the actual recording counts truly reflect the user's habit of using a particular display equipment terminal for the text extraction operation.
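The selection logic above can be summarized in the following sketch (illustrative only; the coverage formula is the reconstructed reading n = (N1 - N2)/(N1 × (1 + N2)) and all identifiers are assumptions, not part of the patent):

```python
# Minimal sketch of the target-terminal selection of step S2.

def coverage(n1: int, n2: int) -> float:
    """Coverage of the first (shortest) track: n1 is its recorded count in the
    monitoring period, n2 the total count of the other m-1 target tracks."""
    return (n1 - n2) / (n1 * (1 + n2))

def select_target_terminal(tracks: list, n0: float) -> str:
    """tracks: one entry per target track, e.g.
    {'terminal': 'phone', 'length': 2.0, 'count': 17}."""
    if len(tracks) == 1:                               # terminal is unique
        return tracks[0]['terminal']
    first = min(tracks, key=lambda t: t['length'])     # first track min[hi]
    n1 = first['count']
    n2 = sum(t['count'] for t in tracks) - n1
    if coverage(n1, n2) >= n0:                         # coverage reaches threshold n0
        return first['terminal']
    # otherwise fall back to the terminal whose extraction track is recorded most often
    return max(tracks, key=lambda t: t['count'])['terminal']
```

For example, with N1 = 12 and N2 = 3 the coverage evaluates to (12 - 3)/(12 × 4) = 0.1875.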
Step S3: the image data comprises single-path images and multipath images, wherein a single-path image refers to image data that corresponds to the target track and exists only on the target terminal, and a multipath image refers to image data recorded on a non-target terminal; analyzing the character extraction index based on the single-path images; active transmission means that the user transmits an image from one terminal to another through active operations such as "forward" or "copy";
the text extraction index is analyzed based on the single-path image, and the method comprises the following analysis steps:
acquiring the text area s1 in a single-path image and the area s0 of the image itself, wherein the text area refers to the image area formed by the minimum rectangle covering the text; calculating the average single-path character ratio r1 of the single-path images, r1 = (1/n) × Σ(s1/s0), wherein n represents the number of single-path images on the target terminal in the monitoring period;
marking the images on the target terminal in the monitoring period other than the single-path images as random images, acquiring the text area s2 of a random image and the area s0' of the image itself, and calculating the average random character ratio r2, r2 = (1/m) × Σ(s2/s0'), wherein m represents the number of random images on the target terminal;
calculating the average difference value r0 between the average single-path character ratio r1 and the average random character ratio r2, r0 = |r1 - r2|;
outputting as the text extraction index: the similarity between the real-time difference value of image data acquired in real time and the average difference value is greater than or equal to the similarity threshold; the real-time difference value refers to the absolute value of the difference between the character ratio of the image data and the average random character ratio r2, and the character ratio is the ratio of the text area of the real-time image to the area of the real-time image.
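A minimal sketch of this index follows; the patent does not specify how the similarity between the real-time difference value and the average difference value r0 is measured, so the normalized-gap measure used below is an assumption, and all identifiers are illustrative:

```python
# Minimal sketch of the character extraction index of step S3.

def mean_char_ratio(images):
    """images: iterable of (text_area, image_area) pairs; returns the average ratio."""
    pairs = list(images)
    return sum(s_text / s_img for s_text, s_img in pairs) / len(pairs)

def build_extraction_index(single_path_images, random_images, sim_threshold=0.8):
    r1 = mean_char_ratio(single_path_images)   # average single-path character ratio
    r2 = mean_char_ratio(random_images)        # average random character ratio
    r0 = abs(r1 - r2)                          # average difference value

    def meets_index(text_area, image_area):
        real_time_diff = abs(text_area / image_area - r2)   # real-time difference value
        # assumed similarity measure between the real-time and average difference values
        similarity = 1.0 - abs(real_time_diff - r0) / max(real_time_diff, r0, 1e-9)
        return similarity >= sim_threshold
    return meets_index
```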
Step S4: calculating, based on the character extraction index, the effective transmission index of the multipath images that meet the character extraction index, and analyzing the character extraction condition of the multipath images; judging, according to the character extraction condition, whether a multipath image generated in real time triggers an intelligent transmission early warning, wherein intelligent transmission means that the image data on a non-target terminal is transmitted from that terminal to the target terminal without the user actively transmitting it.
Analyzing the character extraction condition of the multipath image, judging whether the multipath image generated in real time carries out intelligent transmission early warning according to the character extraction condition, and comprising the following steps:
acquiring images meeting the character extraction index in the multipath images as images to be inspected, and acquiring the number k1 of the images to be inspected, which are actively transmitted, and the number k2 of the images to be inspected, which are not actively transmitted;
calculating the effective transmission index f1, f1 = k2/k1; setting an effective transmission index threshold f0, and when the effective transmission index f1 is smaller than the effective transmission index threshold f0, outputting the character extraction condition as the first condition, wherein the first condition is the character extraction index; calculating the real-time difference value corresponding to a real-time multipath image, and judging whether the real-time multipath image meets the character extraction index; when the real-time multipath image meets the character extraction index, issuing an intelligent transmission early warning for the real-time multipath image;
when the effective transmission index is smaller than the effective transmission index threshold, the character extraction index analyzed from the actively transmitted multipath images is sufficient to judge that an image is a multipath image that needs to be transmitted to the target terminal for the character extraction operation; setting this single condition keeps the judgment effective and quick;
when the effective transmission index f1 is greater than or equal to the effective transmission index threshold f0, outputting the character extraction condition as the second condition; when a real-time multipath image meets the second condition, issuing an intelligent transmission early warning for the real-time multipath image. The intelligent transmission early warning means that, after the intelligent transmission, the equipment terminal currently used by the user is reminded that the image data has been transmitted to the target terminal.
When f1 is greater than or equal to f0, the character extraction index obtained by analyzing the historical data can no longer accurately and reasonably capture the image requirements behind the active transmission of multipath images, so it is necessary to further analyze which image features indicate that a multipath image needs to be transmitted to the target terminal for character extraction; the first condition and the second condition do not apply at the same time.
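The branching between the two conditions can be expressed as the following sketch; the formula f1 = k2/k1 is taken from the text, while the function names and string labels are illustrative:

```python
# Minimal sketch of the step-S4 branching between the first and the second condition.

def effective_transmission_index(k1: int, k2: int) -> float:
    """k1: to-be-examined images the user actively transmitted;
    k2: to-be-examined images the user did not actively transmit."""
    return k2 / k1

def choose_condition(k1: int, k2: int, f0: float) -> str:
    f1 = effective_transmission_index(k1, k2)
    # f1 < f0: the character extraction index alone is used (first condition);
    # f1 >= f0: fall back to the characteristic-track analysis (second condition).
    return "first condition" if f1 < f0 else "second condition"
```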
The analysis of the second condition comprises the steps of:
acquiring the multipath images actively transmitted by the user as target images, extracting the user track data corresponding to the target images, and extracting, for the j-th type of user track data, the number pj of target images in which it appears and its track similarity fj with the multipath images not actively transmitted by the user; then using the formula:
f2j = a1 × (pj/p0) + a2 × (1/fj)
calculating the distinguishing index f2j of the j-th type of user track data, wherein p0 represents the total number of target images, and a1 and a2 represent reference coefficients with a1 + a2 = 1, a1 greater than zero and a2 greater than zero;
setting a distinguishing index threshold f0', and extracting the user track data whose distinguishing index f2j is greater than or equal to the threshold f0' as target tracks; generating a track sequence by arranging the target tracks from the largest to the smallest distinguishing index; the number of target images containing a track is analyzed to determine whether the track is characteristic of the user's operations before active transmission, and the similarity is analyzed to determine how different the active-transmission operation behavior is from the non-active-transmission behavior: the larger the difference, the more representative and characteristic the user track corresponding to active transmission, which improves the reliability of judging, during real-time monitoring, whether image data should be transmitted intelligently;
and the second condition is that the similarity between the user track data acquired in real time and any target track in the track sequence is greater than or equal to 80%.
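A sketch of this analysis follows, using the distinguishing-index formula f2j = a1 × (pj/p0) + a2 × (1/fj) reconstructed from the worked example given further below; all identifiers are illustrative:

```python
# Minimal sketch of the second-condition (distinguishing index) analysis.

def distinguishing_index(pj: int, p0: int, fj: float,
                         a1: float = 0.45, a2: float = 0.55) -> float:
    """pj: target images containing the j-th track type; p0: total target images;
    fj: track similarity (in percent) with the not-actively-transmitted images."""
    return a1 * (pj / p0) + a2 * (1.0 / fj)

def build_track_sequence(tracks, p0: int, f0_prime: float):
    """tracks: iterable of (track_id, pj, fj); keeps tracks whose index reaches the
    threshold f0' and orders them from the largest to the smallest index."""
    scored = [(track_id, distinguishing_index(pj, p0, fj)) for track_id, pj, fj in tracks]
    kept = [item for item in scored if item[1] >= f0_prime]
    return sorted(kept, key=lambda item: item[1], reverse=True)
```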
The type of a piece of user track data is judged by taking any user track data as a reference track and treating all tracks whose similarity with it is greater than or equal to 80% as the same type of user track data; one example is the process of extracting characters through a screenshot after copying the text from a website has failed, which is characterized by the track "capture the website picture on the computer side → save the picture → open the picture → right-click and select an operation";
such operation tracks often exist before the image data on a non-target terminal is transmitted to the target terminal for character extraction, and they often reveal the user's intention and willingness to extract the characters in the image;
As shown in the following example, suppose two kinds of user trajectory data exist in the target images:
track 1: "capture the website picture on the computer side → save the picture → open the picture → right-click and select an operation → transmit to the mobile phone side";
track 2: "copy the picture → edit the picture → delete the operation → save the original picture → transmit to the mobile phone side";
the total number of target images is 10, track 1 appears in 4 of the target images and track 2 appears in 6 of them; the track similarity between track 1 and the multipath images not actively transmitted by the user is 13%, and the track similarity between track 2 and those images is 21%;
let a1 = 0.45 and a2 = 0.55;
then f21 = 0.45 × (4/10) + 0.55 × (1/13) ≈ 0.22
and f22 = 0.45 × (6/10) + 0.55 × (1/21) ≈ 0.30; with the distinguishing index threshold set to 0.18, both track 1 and track 2 satisfy it;
and since f21 is smaller than f22, track 2 has a higher priority than track 1 in the track sequence.
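These figures can be checked with a short snippet (variable names are illustrative):

```python
# Reproducing the numbers of the example above (a1 = 0.45, a2 = 0.55, p0 = 10):
f21 = 0.45 * (4 / 10) + 0.55 * (1 / 13)   # about 0.22 for track 1
f22 = 0.45 * (6 / 10) + 0.55 * (1 / 21)   # about 0.30 for track 2
print(round(f21, 2), round(f22, 2))       # 0.22 0.3 -> both exceed the 0.18 threshold
```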
An intelligent early warning system based on the Internet of things comprises a user data acquisition module, a target terminal determination module, an image data division module, a character extraction index analysis module, a character extraction condition analysis module and an intelligent transmission judgment module;
the user data acquisition module is used for acquiring user image data and user track data recorded by different display equipment terminals in the same local area network;
the target terminal determining module is used for determining a target terminal based on target track analysis;
the image data dividing module is used for dividing the image data into a single-path image and a multi-path image;
the character extraction index analysis module is used for analyzing character extraction indexes based on the single-path images;
the character extraction condition analysis module is used for analyzing character extraction conditions of the multipath images;
the intelligent transmission early warning module is used for judging whether the multipath image generated in real time carries out intelligent transmission early warning or not according to the character extraction conditions.
The target terminal determining module comprises a target track acquiring unit, a terminal number analyzing unit, a coverage calculating unit and a target terminal analyzing unit;
the target track acquisition unit is used for extracting a track for performing text extraction operation on the image in the user track data as a target track;
the terminal number analysis unit is used for judging whether the display equipment terminal corresponding to the target track is unique;
the coverage calculating unit is used for calculating the coverage by utilizing the track length and the number of target track types when the display equipment terminals corresponding to the target tracks are not unique;
the target terminal analysis unit is used for marking the display equipment terminal corresponding to the target track as a target terminal when the display equipment terminal is unique, and determining the target terminal by analyzing the data of the coverage calculation unit when the display equipment terminal is not unique.
The character extraction index analysis module comprises a character area acquisition unit, a character ratio calculation unit and a character extraction index determination unit;
the character area acquisition unit is used for acquiring the text area in a single-path image and the area of the image itself;
the character ratio calculating unit is used for calculating the average single-path character ratio, the average random character ratio and the character ratio of real-time image data based on the data in the character area acquisition unit;
the character extraction index determining unit is used for analyzing the character extraction index according to the data of the character ratio calculating unit.
The character extraction condition analysis module comprises an image to be inspected determining unit, an effective transmission index calculating unit and a condition judging unit;
the image to be inspected determining unit is used for acquiring images which accord with the character extraction index in the multipath images as images to be inspected;
the effective transmission index calculation unit is used for calculating an effective transmission index based on data corresponding to the image to be examined; and setting an effective transmission index threshold;
the condition judging unit is used for determining the type of the character extraction condition based on the numerical relation between the effective transmission index and the effective transmission index threshold value, and outputting the character extraction condition as a first condition when the effective transmission index is smaller than the effective transmission index threshold value; and outputting a text extraction condition as a second condition when the effective transmission index is greater than or equal to the effective transmission index threshold.
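As an illustration of the architecture only, the modules described above can be arranged as the following skeleton; the class and method names are assumptions and not part of the patent:

```python
# Illustrative skeleton of the system structure.

class UserDataAcquisitionModule:
    def collect(self): ...                      # user image data and user track data per terminal

class TargetTerminalModule:
    def determine(self, track_data): ...        # target track analysis -> target terminal

class ImageDivisionModule:
    def split(self, image_data, terminal): ...  # -> (single-path images, multipath images)

class ExtractionIndexModule:
    def analyze(self, single_path_images): ...  # r1, r2, r0 -> character extraction index

class ExtractionConditionModule:
    def analyze(self, multipath_images): ...    # f1 vs f0 -> first or second condition

class TransmissionWarningModule:
    def should_warn(self, real_time_image, condition): ...  # intelligent transmission early warning

class EarlyWarningSystem:
    """Pipeline: acquire -> determine target terminal -> divide images ->
    extraction index -> extraction condition -> transmission early warning."""
    def __init__(self):
        self.acquisition = UserDataAcquisitionModule()
        self.terminal = TargetTerminalModule()
        self.division = ImageDivisionModule()
        self.index = ExtractionIndexModule()
        self.condition = ExtractionConditionModule()
        self.warning = TransmissionWarningModule()
```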
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention and is not intended to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of the technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. The intelligent early warning method based on the Internet of things is characterized by comprising the following analysis steps of:
step S1: acquiring user image data and user track data recorded by different display equipment terminals in the same local area network, wherein the user image data refers to image data recorded and generated by the display equipment terminals, and the user track data refers to operation tracks of users on different equipment on the image data; the number of the display equipment terminals under the same local area network is more than or equal to two;
step S2: extracting a track for performing text extraction operation on an image in user track data as a target track based on the user image data and the user track data, and determining a target terminal based on target track analysis;
step S3: the image data comprises a single-path image and a multi-path image, wherein the single-path image refers to the image data corresponding to the target track only on the target terminal, and the multi-path image refers to the image data recorded on the non-target terminal; analyzing a character extraction index based on the single-path image;
step S4: calculating an effective transmission index of the multipath image conforming to the character extraction index based on the character extraction index, and analyzing character extraction conditions of the multipath image; judging whether the multipath image generated in real time carries out intelligent transmission early warning according to the character extraction condition, wherein the intelligent transmission refers to that the image data on the non-target terminal is transmitted from the non-target terminal to the target terminal under the condition that a user does not actively transmit the image data.
2. The intelligent early warning method based on the internet of things according to claim 1, wherein the intelligent early warning method based on the internet of things is characterized in that: the step S2 includes the following analysis steps:
acquiring a target track recorded in user track data;
when the display equipment terminal corresponding to the target track is unique, the display equipment terminal is made to be the target terminal;
when the display equipment terminal corresponding to the target track is not unique, acquiring the track length hi of the image character extraction track corresponding to the i-th display equipment terminal, and taking the minimum of the m track lengths hi as the first track min[hi], wherein m represents the total number of target track types;
extracting the number of times N1 that the first track min[hi] is recorded in the monitoring period, and the total number of times N2 that the other m-1 target tracks are recorded in the monitoring period; calculating the coverage n of the first track using the formula n = (N1 - N2)/(N1 × (1 + N2)); if n is greater than or equal to the preset coverage threshold n0, outputting the display equipment terminal corresponding to the first track as the target terminal; and if n is smaller than the preset coverage threshold n0, selecting as the target terminal the display equipment terminal whose image character extraction track is recorded most often in the monitoring period.
3. The intelligent early warning method based on the internet of things according to claim 2, wherein the intelligent early warning method based on the internet of things is characterized in that: the character extraction index analysis method based on the single-path image comprises the following analysis steps:
acquiring the text area s1 in a single-path image and the area s0 of the image itself, wherein the text area refers to the image area formed by the minimum rectangle covering the text; calculating the average single-path character ratio r1 of the single-path images, r1 = (1/n) × Σ(s1/s0), wherein n represents the number of single-path images on the target terminal in the monitoring period;
marking the images on the target terminal in the monitoring period other than the single-path images as random images, acquiring the text area s2 of a random image and the area s0' of the image itself, and calculating the average random character ratio r2, r2 = (1/m) × Σ(s2/s0'), wherein m represents the number of random images on the target terminal;
calculating the average difference value r0 between the average single-path character ratio r1 and the average random character ratio r2, r0 = |r1 - r2|;
outputting as the text extraction index: the similarity between the real-time difference value of image data acquired in real time and the average difference value is greater than or equal to the similarity threshold; the real-time difference value refers to the absolute value of the difference between the character ratio of the real-time acquired image data and the average random character ratio r2, wherein the character ratio is the ratio of the text area of the real-time image to the area of the real-time image.
4. The intelligent early warning method based on the internet of things according to claim 3, wherein the intelligent early warning method based on the internet of things is characterized in that: the method comprises the steps of analyzing the character extraction condition of the multipath image, judging whether the multipath image generated in real time carries out intelligent transmission early warning according to the character extraction condition, and the method comprises the following steps:
acquiring images meeting the character extraction index in the multipath images as images to be inspected, and acquiring the number k1 of the images to be inspected, which are actively transmitted, and the number k2 of the images to be inspected, which are not actively transmitted;
calculating an effective transmission index f1, f1 = k2/k1; setting an effective transmission index threshold f0, and outputting a text extraction condition as a first condition when the effective transmission index f1 is smaller than the effective transmission index threshold f0, wherein the first condition is a text extraction index; calculating a real-time difference value corresponding to the real-time multipath image, and judging whether the real-time multipath image meets a character extraction index; when the real-time multipath image meets the character extraction index, carrying out intelligent transmission early warning on the real-time multipath image;
when the effective transmission index f1 is larger than or equal to the effective transmission index threshold f0, outputting a text extraction condition as a second condition, and when the real-time multipath image meets the second condition, performing intelligent transmission early warning on the real-time multipath image.
5. The intelligent early warning method based on the internet of things according to claim 4, wherein the intelligent early warning method based on the internet of things is characterized in that: the analysis of the second condition comprises the steps of:
acquiring the multipath images actively transmitted by the user as target images, extracting the user track data corresponding to the target images, and extracting, for the j-th type of user track data, the number pj of target images in which it appears and its track similarity fj with the multipath images not actively transmitted by the user; then using the formula:
f2j = a1 × (pj/p0) + a2 × (1/fj)
calculating the distinguishing index f2j of the j-th type of user track data, wherein p0 represents the total number of target images, and a1 and a2 represent reference coefficients with a1 + a2 = 1, a1 greater than zero and a2 greater than zero;
setting a distinguishing index threshold f0', and extracting the user track data whose distinguishing index f2j is greater than or equal to the threshold f0' as target tracks; generating a track sequence by arranging the target tracks from the largest to the smallest distinguishing index;
and the second condition is that the similarity between the user track data acquired in real time and any target track in the track sequence is greater than or equal to 80%.
6. An intelligent early warning system based on the internet of things, which applies the intelligent early warning method based on the internet of things as claimed in any one of claims 1 to 5, is characterized by comprising a user data acquisition module, a target terminal determination module, an image data division module, a character extraction index analysis module, a character extraction condition analysis module and an intelligent transmission judgment module;
the user data acquisition module is used for acquiring user image data and user track data recorded by different display equipment terminals in the same local area network;
the target terminal determining module is used for determining a target terminal based on target track analysis;
the image data dividing module is used for dividing the image data into a single-path image and a multi-path image;
the character extraction index analysis module is used for analyzing character extraction indexes based on the single-path images;
the character extraction condition analysis module is used for analyzing character extraction conditions of the multipath images;
the intelligent transmission early warning module is used for judging whether the multipath image generated in real time carries out intelligent transmission early warning or not according to the character extraction conditions.
7. The intelligent early warning system based on the internet of things according to claim 6, wherein: the target terminal determining module comprises a target track acquiring unit, a terminal number analyzing unit, a coverage calculating unit and a target terminal analyzing unit;
the target track acquisition unit is used for extracting a track for performing text extraction operation on the image in the user track data as a target track;
the terminal number analysis unit is used for judging whether the display equipment terminal corresponding to the target track is unique;
the coverage calculating unit is used for calculating the coverage by utilizing the track length and the number of target track types when the display equipment terminals corresponding to the target tracks are not unique;
the target terminal analysis unit is used for marking the display equipment terminal corresponding to the target track as a target terminal when the display equipment terminal is unique, and analyzing the data of the coverage calculation unit to determine the target terminal when the display equipment terminal is not unique.
8. The intelligent early warning system based on the internet of things according to claim 7, wherein: the character extraction index analysis module comprises a character area acquisition unit, a character ratio calculation unit and a character extraction index determination unit;
the character area acquisition unit is used for acquiring the text area in a single-path image and the area of the image itself;
the character ratio calculating unit is used for calculating the average single-path character ratio, the average random character ratio and the character ratio of real-time image data based on the data in the character area acquisition unit;
the character extraction index determining unit is used for analyzing character extraction indexes according to the data of the character ratio calculating unit.
9. The intelligent early warning system based on the internet of things according to claim 8, wherein: the character extraction condition analysis module comprises an image to be inspected determining unit, an effective transmission index calculating unit and a condition judging unit;
the image to be examined determining unit is used for acquiring images which accord with the character extraction indexes in the multipath images as images to be examined;
the effective transmission index calculation unit is used for calculating an effective transmission index based on data corresponding to the image to be examined; and setting an effective transmission index threshold;
the condition judging unit is used for determining the type of the character extraction condition based on the numerical relation between the effective transmission index and the effective transmission index threshold value, and outputting the character extraction condition as a first condition when the effective transmission index is smaller than the effective transmission index threshold value; and outputting a text extraction condition as a second condition when the effective transmission index is greater than or equal to the effective transmission index threshold.
CN202310291448.2A 2023-03-23 2023-03-23 Intelligent early warning system and method based on Internet of things Active CN116389227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310291448.2A CN116389227B (en) 2023-03-23 2023-03-23 Intelligent early warning system and method based on Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310291448.2A CN116389227B (en) 2023-03-23 2023-03-23 Intelligent early warning system and method based on Internet of things

Publications (2)

Publication Number Publication Date
CN116389227A (en) 2023-07-04
CN116389227B CN116389227B (en) 2023-11-17

Family

ID=86974315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310291448.2A Active CN116389227B (en) 2023-03-23 2023-03-23 Intelligent early warning system and method based on Internet of things

Country Status (1)

Country Link
CN (1) CN116389227B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0795308A (en) * 1993-09-21 1995-04-07 Advantest Corp Transmission method for document by terminal equipment
DE19945533A1 (en) * 1999-09-23 2001-03-29 Mm Lesestift Manager Memory Text recognition for portable device evaluates pixel information line for line, pixel for pixel to detect start pixel of character
KR20130062471A (en) * 2011-12-01 2013-06-13 엘지전자 주식회사 Mobile terminal and control method thereof
KR20170059331A (en) * 2015-11-20 2017-05-30 주식회사 티슈 Image extraction and sharing system using application program, and the method thereof
CN109801161A (en) * 2019-03-13 2019-05-24 上海诚数信息科技有限公司 Intelligent credit and authentification of message system and method
CN111797823A (en) * 2020-07-09 2020-10-20 广州市多米教育科技有限公司 Image personalized semantic analysis method based on deep learning
WO2020238556A1 (en) * 2019-05-30 2020-12-03 深圳壹账通智能科技有限公司 Configuration platform-based data transmission method, apparatus and computer device
CN112099645A (en) * 2020-09-04 2020-12-18 北京百度网讯科技有限公司 Input image generation method and device, electronic equipment and storage medium
DE202020004941U1 (en) * 2019-11-26 2021-04-23 Manuel Eckert System for position-related emergency call control and communication
CN113761968A (en) * 2020-06-01 2021-12-07 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and computer storage medium
CN115659049A (en) * 2022-11-14 2023-01-31 深圳市秦丝科技有限公司 Intelligent supervision system and method for purchase, sales and inventory software platform based on Internet of things

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0795308A (en) * 1993-09-21 1995-04-07 Advantest Corp Transmission method for document by terminal equipment
DE19945533A1 (en) * 1999-09-23 2001-03-29 Mm Lesestift Manager Memory Text recognition for portable device evaluates pixel information line for line, pixel for pixel to detect start pixel of character
KR20130062471A (en) * 2011-12-01 2013-06-13 엘지전자 주식회사 Mobile terminal and control method thereof
KR20170059331A (en) * 2015-11-20 2017-05-30 주식회사 티슈 Image extraction and sharing system using application program, and the method thereof
CN109801161A (en) * 2019-03-13 2019-05-24 上海诚数信息科技有限公司 Intelligent credit and authentification of message system and method
WO2020238556A1 (en) * 2019-05-30 2020-12-03 深圳壹账通智能科技有限公司 Configuration platform-based data transmission method, apparatus and computer device
DE202020004941U1 (en) * 2019-11-26 2021-04-23 Manuel Eckert System for position-related emergency call control and communication
CN113761968A (en) * 2020-06-01 2021-12-07 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and computer storage medium
CN111797823A (en) * 2020-07-09 2020-10-20 广州市多米教育科技有限公司 Image personalized semantic analysis method based on deep learning
CN112099645A (en) * 2020-09-04 2020-12-18 北京百度网讯科技有限公司 Input image generation method and device, electronic equipment and storage medium
CN115659049A (en) * 2022-11-14 2023-01-31 深圳市秦丝科技有限公司 Intelligent supervision system and method for purchase, sales and inventory software platform based on Internet of things

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张龙坤; 何舟桥; 万武南: "Research on screenshot recognition and translation applications based on machine learning", Network Security Technology & Application, no. 08 *
王宁; 韩国强; 顾国生: "Fuzzy recognition of character images for identifying the printer model of printed documents", Application Research of Computers, no. 03 *

Also Published As

Publication number Publication date
CN116389227B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
CN110209810B (en) Similar text recognition method and device
CN111447137A (en) Browsing condition data analysis method and device, server and storage medium
JP2022088304A (en) Method for processing video, device, electronic device, medium, and computer program
CN111125523A (en) Searching method, searching device, terminal equipment and storage medium
CN112947807A (en) Display method and device and electronic equipment
CN111191096B (en) Method for identifying public opinion events and tracking popularity of whole-network patriotic
CN108415807A (en) A method of crawling whether monitoring electronic equipment browses flame
US11620327B2 (en) System and method for determining a contextual insight and generating an interface with recommendations based thereon
CN111479168B (en) Method, device, server and medium for marking multimedia content hot spot
CN116389227B (en) Intelligent early warning system and method based on Internet of things
CN111353422B (en) Information extraction method and device and electronic equipment
CN110781390A (en) Information recommendation method and mobile terminal
CN111666485B (en) Information recommendation method, device and terminal
CN114020384A (en) Message display method and device and electronic equipment
CN113362069A (en) Dynamic adjustment method, device and equipment of wind control model and readable storage medium
CN112417095A (en) Voice message processing method and device
CN112269730A (en) Abnormal log detection method, abnormal log detection device, and storage medium
CN112702258A (en) Chat message sharing method and device and electronic equipment
CN112464027A (en) Video detection method, device and storage medium
CN113098762B (en) Information output method and information output device
CN111131605B (en) Message management method, electronic device, and computer-readable storage medium
CN111857467B (en) File processing method and electronic equipment
CN111428060B (en) Media content recommendation method and related device
CN116582417B (en) Data processing method, device, computer equipment and storage medium
CN115278292B (en) Video reasoning information display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant