CN116188814A - Trip-area-oriented identification method and system - Google Patents
Trip-area-oriented identification method and system
- Publication number
- CN116188814A (application CN202211428714.3A)
- Authority
- CN
- China
- Prior art keywords
- identification
- text
- image data
- arrow
- data
- Prior art date
- Legal status
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a method and a system for identifying trip areas, belonging to the technical field of image recognition. The method specifically comprises the following steps: step 1, constructing a risk-area database and storing the current risk areas; step 2, acquiring trip image data through a camera; step 3, constructing a trip recognition model and receiving the trip image data; step 4, performing risk-area recognition on the trip image data by using the trip recognition model; step 5, comparing the recognition result with the data in the risk-area database; step 6, if the comparison shows that no risk area is present, confirming release; otherwise, generating a risk prompt, transmitting the recognition result to a manager, and registering and reporting it for subsequent management and control. By analyzing and processing the image data, the invention replaces manual checking, which reduces labor input and increases detection speed; real-time data feedback further improves the timeliness of the data.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a method and a system for identifying trip areas.
Background
In daily life, trips are generally checked manually. However, when the application scenario is a public place with heavy foot traffic, the manual approach not only requires a large investment in labor, but checking errors also frequently occur when staff become fatigued, which affects real-time control of the trip situation.
Disclosure of Invention
Purpose of the invention: to provide a method and a system for identifying trip areas that solve the above problems in the prior art. By analyzing and processing image data, manual checking is replaced, which reduces labor input and increases detection speed; at the same time, real-time data feedback effectively improves the timeliness of the data, making monitoring more real-time and comprehensive.
The technical scheme is as follows. In a first aspect, a method for identifying trip areas is provided, in which the video data detected in real time is analyzed; when trip image data is detected, text recognition is performed on it and the recognition result is compared with the risk database of the background data. If the comparison result is that the area does not exist, an incomplete prompt is generated and issued; otherwise, release is confirmed. The method specifically comprises the following steps:
Step 1, constructing a region database and storing the current regions; in order to meet the real-time requirement, the region database updates the stored region data after each preset time period.
Step 2, acquiring trip image data through a camera;
Step 3, constructing a trip recognition model and receiving the trip image data; the constructed trip recognition model specifically comprises an arrow detection model and a text detection and recognition model.
Step 4, performing region recognition on the trip image data by using the trip recognition model; this specifically comprises the following steps:
Step 4.1, receiving the trip image data;
Step 4.2, detecting and identifying the arrow in the trip image data by using the arrow detection model; in order to ensure that the whole trip image is within the range captured by the camera, this specifically comprises the following steps:
Step 4.2.1, recognizing the arrow direction in the trip image data with the arrow detection model;
Step 4.2.2, recording the corresponding arrow parameters based on the identified arrow direction;
Step 4.2.3, setting thresholds and comparing the arrow parameters against them;
Step 4.2.4, when the threshold condition is met, outputting the analysis result of the arrow detection model as the precondition for text line detection and text recognition;
Step 4.3, based on the arrow detection and recognition result, performing text line detection and text recognition on the text in the trip image data by using the text detection and recognition model; this specifically comprises the following steps:
Step 4.3.1, based on the arrow detection result, adjusting the received trip image data to a preset orientation and feeding it to the text detection and recognition model;
Step 4.3.2, completing data matching in the text detection and recognition model according to preset key-value pairs to obtain the required text data;
the data matching process is as follows: the text data following each keyword is automatically matched according to the detected keyword. In order to improve the recognition result of the text detection and recognition model, the process of performing region recognition on the trip image data with the trip recognition model further comprises:
Step 4.4, further performing shape-similar character error correction on the matched travel track and comparing it with the information in the standard address library of the management background to obtain a more accurate travel track; this specifically comprises the following steps:
Step 4.4.1, reading the result obtained by preliminary matching;
Step 4.4.2, splitting the read matching result at punctuation marks to obtain an address string list;
Step 4.4.3, comparing every address in the address string list with the addresses in the standard address library;
Step 4.4.4, optimizing the matching accuracy according to the comparison result;
Step 4.4.5, outputting the more accurate travel track;
the standard address library stores all address information within the known range.
Step 5, comparing the recognition result with the data in the region database;
Step 6, if the comparison result shows that the area does not exist, confirming release; otherwise, generating a risk prompt, transmitting the recognition result to a manager, and registering and reporting it for subsequent management and control.
In some implementations of the first aspect, during text line detection and text recognition of the text in the trip image data with the text detection and recognition model, in order to prevent screenshot data from affecting the detection result, a real-time judgment of the acquired image is further made by combining analysis of the update time and of the dynamic change of the arrow.
In a second aspect, a trip-area-oriented identification system is provided for implementing the trip area identification method. The system specifically comprises the following modules:
a database construction module, configured to construct a region database according to application requirements;
a data acquisition module, configured to acquire the current trip image data as required;
a model construction module, configured to construct a trip recognition model according to analysis requirements;
a data analysis module, configured to detect and analyze the trip image data by using the trip recognition model;
a data comparison module, configured to compare the recognition result of the data analysis module with the data in the region database;
and a data output module, configured to output the comparison result of the data comparison module and generate a processing strategy according to the comparison result.
The trip recognition model comprises an arrow detection model and a text detection and recognition model; the arrow detection model identifies the arrow position in the trip image data, and the text detection and recognition model performs text line detection and character recognition on the text in the trip image data.
In a third aspect, a trip zone oriented identification device is provided, the device comprising: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to implement the method of identifying the trip area.
In a fourth aspect, a computer-readable storage medium having computer program instructions stored thereon is provided. The computer program instructions, when executed by a processor, implement the trip area identification method.
Beneficial effects: the invention provides a method and a system for identifying trip areas. By analyzing and processing image data, it replaces manual checking, reducing labor input and increasing detection speed; real-time data feedback further improves the timeliness of the data, so that monitoring is more real-time and comprehensive.
In addition, the invention further analyzes screenshot data during image recognition, which reduces the negligence that may arise during manual operation and effectively limits the influence of non-real-time data on the current situation.
drawings
FIG. 1 is a flow chart of data processing according to the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the invention.
By checking trips, the travel track of each person can be effectively grasped. At present, trip checking in public places is mainly manual: when a trip through a monitored area is found, it is registered and filed for subsequent management and control. However, the manual approach requires a large number of staff in places with heavy foot traffic, and manual checking is slow, so congestion easily builds up at narrow exits. Aiming at these defects of manual operation, the invention introduces AI recognition technology and provides a trip-area-oriented identification method and system that scan and recognize trip picture information, reducing the manpower required while improving detection efficiency.
Example 1
In one embodiment, a method for identifying trip areas is provided. By introducing AI technology, the generated real-time trip image data is analyzed and recognized using OCR; when a trip through a monitored area is found, it is registered and filed for subsequent management. As shown in fig. 1, the method specifically includes the following steps:
Step 1, constructing a region database and storing the current regions;
specifically, to keep the data current in practical application, the data in the region database is updated every hour according to the current situation of the monitored regions.
Step 2, acquiring trip image data through a camera;
Step 3, constructing a trip recognition model and receiving the trip image data;
Step 4, performing region recognition on the trip image data by using the trip recognition model;
specifically, given the form in which existing trip images are presented, the trip recognition model comprises an arrow detection model and a text detection and recognition model. During region recognition, the arrow detection model first locates the arrow in the image; when the arrow position meets the requirement, the text detection and recognition model performs text line detection and text recognition on the text in the received trip image data.
When the arrow detection model is used to detect whether an arrow appears in the trip image data, it is first judged whether the arrow lies in the middle of the shot, so as to ensure that the whole trip image is within the range captured by the camera. The arrow detection model is then used to identify the arrow direction in the image, and the direction is recorded together with the minimum x coordinate xmin, the maximum x coordinate xmax, the minimum y coordinate ymin and the maximum y coordinate ymax. Thresholds are set, the currently recorded arrow parameters are compared numerically with them, and character recognition is performed when the threshold condition is met.
The threshold condition is as follows:
where w denotes the width of the trip image data, h denotes its height, and xmin_thr, ymin_thr, xmax_thr and ymax_thr are the corresponding preset thresholds.
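The inequality itself is not reproduced above; the sketch below shows one plausible reading, in which the arrow box, normalized by the image width and height, must stay inside preset margins. The default threshold values and the function name are illustrative assumptions.

```python
def arrow_within_bounds(xmin, ymin, xmax, ymax, w, h,
                        xmin_thr=0.1, ymin_thr=0.1,
                        xmax_thr=0.9, ymax_thr=0.9):
    """Check that the detected arrow box sits inside preset margins.

    Assumed reading of the threshold condition: the box coordinates,
    normalized by image width w and height h, must fall between the
    lower thresholds (xmin_thr, ymin_thr) and the upper thresholds
    (xmax_thr, ymax_thr). Default values are illustrative.
    """
    return (xmin / w > xmin_thr and ymin / h > ymin_thr
            and xmax / w < xmax_thr and ymax / h < ymax_thr)
```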
To make it easier for the text detection and recognition model to recognize the text content, the received trip image is first rotated to a preset orientation according to the arrow direction identified by the arrow detection model, and then fed into the text detection and recognition model for key-value-pair matching. The key-value matching process is as follows: the corresponding text content is automatically matched according to the recognized keyword, thereby obtaining the required content. In a preferred embodiment, when the keyword is "dynamic travel card", the mobile phone number in the trip image data is matched; when the keyword is "update", the time in the trip image data is matched; when the keyword is "arrival" or "pass", the travel track in the trip image data is matched. Finally, the text detection and recognition model outputs the key-value matching result as the recognition result.
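A minimal sketch of this key-value matching is shown below; the keyword strings, the regular expressions for the phone number and the update time, and the field names are illustrative assumptions based on the example above.

```python
import re

# Keyword-to-field mapping following the preferred embodiment; the exact
# keyword strings and regular expressions are illustrative assumptions.
KEYWORD_FIELDS = {
    "dynamic travel card": ("phone", re.compile(r"\d{3}\*{4}\d{4}|\d{11}")),
    "update": ("update_time", re.compile(r"\d{4}[.\-/]\d{1,2}[.\-/]\d{1,2}[\d :]*")),
    "arrival": ("track", None),
    "pass": ("track", None),
}


def match_key_values(ocr_lines):
    """Match the text that follows each detected keyword in the OCR output."""
    result = {"phone": None, "update_time": None, "track": []}
    for line in ocr_lines:
        lowered = line.lower()
        for keyword, (field, pattern) in KEYWORD_FIELDS.items():
            if keyword not in lowered:
                continue
            if pattern is None:
                result[field].append(line)          # keep the whole track line
            else:
                match = pattern.search(line)
                if match:
                    result[field] = match.group().strip()
            break
    return result
```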
Step 5, comparing the recognition result with the data in the region database;
Step 6, if the comparison result shows that the area does not exist, confirming release; otherwise, generating a risk prompt, transmitting the recognition result to a manager, and registering and reporting it for subsequent management. In a preferred embodiment, if no address hits a monitored region, a pass window is presented and each address is displayed in it; if a monitored region is present, a no-pass window is presented and the address of that region is highlighted in red.
Before the region judgment of the next piece of trip image data starts, the previous trip judgment must be finished. Therefore, after the previous process ends, n blank frames must be captured, i.e. no arrow is detected in those n frames. When an arrow is detected again after the n blank frames, a new trip recognition process is started. In a preferred embodiment, n is 5 frames.
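A small sketch of this blank-frame segmentation is given below, assuming a per-frame boolean stream indicating whether an arrow was detected; the function name and the generator interface are assumptions.

```python
def trip_session_starts(frame_has_arrow, n_blank=5):
    """Yield the frame indices at which a new trip-recognition session starts.

    A new session may only begin after at least n_blank consecutive frames
    with no arrow detected (n_blank = 5 in the preferred embodiment).
    frame_has_arrow is an iterable of booleans, one per frame.
    """
    blank_run = n_blank      # allow the very first arrow to start a session
    in_session = False
    for idx, has_arrow in enumerate(frame_has_arrow):
        if has_arrow:
            if not in_session and blank_run >= n_blank:
                in_session = True
                yield idx
            blank_run = 0
        else:
            blank_run += 1
            if blank_run >= n_blank:
                in_session = False
```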
In a further embodiment, in order to improve the recognition performance of the arrow detection model, the model is first trained before it is used for actual arrow detection.
Specifically, a large set of trip data is first collected and annotated: each arrow is labelled with two points, i.e. the coordinates of its upper-left and lower-right corners, and is classified as up, down, left or right according to its pointing direction. The arrow detection model is then trained on the annotated trip data set. The trained arrow detection model detects each frame of the acquired real-time trip image data and, when an arrow is detected, outputs the arrow direction and the coordinates of the two detected points.
In a preferred embodiment, the roLabelImg tool is used for the two-point annotation of the arrows in the trip image data, and a lightweight detector such as NanoDet or YOLO is used as the arrow detection model. A lightweight detection model effectively improves detection speed and has a small footprint.
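As an illustration of the two-point annotation, the sketch below converts a labelled arrow (direction plus upper-left and lower-right corners) into a YOLO-style label line; the use of the YOLO txt format and the class ordering are assumptions made for training a lightweight detector.

```python
from dataclasses import dataclass

DIRECTIONS = ["up", "down", "left", "right"]   # class ids 0-3


@dataclass
class ArrowAnnotation:
    direction: str   # one of DIRECTIONS
    xmin: float      # upper-left corner, pixels
    ymin: float
    xmax: float      # lower-right corner, pixels
    ymax: float


def to_yolo_line(ann, img_w, img_h):
    """Convert a two-point arrow annotation into a YOLO-style label line
    (class x_center y_center width height, all normalized to [0, 1])."""
    cls = DIRECTIONS.index(ann.direction)
    xc = (ann.xmin + ann.xmax) / 2 / img_w
    yc = (ann.ymin + ann.ymax) / 2 / img_h
    bw = (ann.xmax - ann.xmin) / img_w
    bh = (ann.ymax - ann.ymin) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}"
```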
In a further embodiment, in order to improve the recognition performance of the text detection and recognition model, the model is first trained before it is used for actual text detection.
Specifically, a large set of trip data is first collected and annotated as required. In a preferred embodiment, according to the requirements of actual text recognition, four-point annotation is performed with the roLabelImg tool, i.e. the coordinates of the four corners of each text line are labelled, and the annotated text content includes the mobile phone number, the update time and the travel track. The text detection and recognition model is then trained on the annotated data set so that it can recognize the characters corresponding to the annotated words. In a preferred embodiment, a fast DBNet model is adopted for text line detection and a CRNN model for character recognition.
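A minimal sketch of the resulting two-stage text pipeline is shown below; the detector and recognizer are passed in as caller-supplied callables (for example, thin wrappers around a DBNet detector and a CRNN recognizer), and the assumption that the detector returns axis-aligned boxes is made purely for brevity.

```python
def recognize_trip_text(image, detect_text_lines, recognize_line):
    """Run text-line detection, then per-line character recognition.

    image is a numpy-style array; detect_text_lines and recognize_line are
    caller-supplied callables. For brevity the detector is assumed to return
    axis-aligned boxes (xmin, ymin, xmax, ymax) in pixel coordinates.
    """
    results = []
    for xmin, ymin, xmax, ymax in detect_text_lines(image):
        crop = image[int(ymin):int(ymax), int(xmin):int(xmax)]  # crop the text line
        results.append(recognize_line(crop))
    return results
```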
This embodiment combines the proposed trip recognition model with AI technology to recognize trip image data, which effectively saves labor cost across many areas. At the same time, because many areas no longer need to be checked manually, the proposed model effectively reduces the omission of risk groups caused by negligence or memory lapses. In addition, the arrow detection step judges and corrects all four orientations, so the trip card can be placed at any angle.
Example 2
In a further embodiment based on the first embodiment, in order to avoid real-time data recognition errors caused by analyzing a captured screenshot as if it were a live trip, this embodiment also proposes an analysis method for screenshot data while the text detection and recognition model recognizes the text in the image. By combining analysis of the update time with the dynamic change of the arrow, a real-time judgment of the acquired image is made, ensuring that the acquired trip image data is not a screenshot and thereby improving the accuracy of the subsequent analysis.
Specifically, the real-time judgment of the trip image data proceeds as follows: the time ocr_time detected by the text detection and recognition model is compared with the current system time sys_time, and when the comparison satisfies a preset threshold condition the process moves on to the next arrow judgment; otherwise, the current trip image data is considered a screenshot. In a preferred embodiment, the preset threshold condition is that the system time minus the time recognized by the text detection model is smaller than a threshold err_time, i.e. sys_time - ocr_time < err_time. When this condition is met, a dynamic arrow judgment is performed on the next frame of image data, further preventing a screenshot from being recognized as a live trip and improving recognition accuracy.
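A minimal sketch of this time-based check is given below; the 600-second default for err_time and the use of datetime objects are illustrative assumptions.

```python
from datetime import datetime


def is_screenshot_by_time(ocr_time, sys_time=None, err_time_s=600.0):
    """Flag the frame as a screenshot when the update time read by OCR lags
    the system time by err_time_s seconds or more, i.e. when the live
    condition sys_time - ocr_time < err_time is violated. Both times are
    datetime objects; the 600 s default is an illustrative assumption."""
    if sys_time is None:
        sys_time = datetime.now()
    return (sys_time - ocr_time).total_seconds() >= err_time_s
```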
For the dynamic judgment of the arrow, it is necessary to determine whether the arrow in the acquired trip image data is moving. The arrow detection model keeps detecting the arrow, and the width of each detected arrow is recorded. The maximum value max_w and the minimum value min_w over the collected multi-frame arrow widths are found; when max_w - min_w is smaller than a threshold thr, the current data is judged to be a screenshot and a screenshot alarm is raised. In a further embodiment, to prevent relative motion between the mobile phone and the acquisition device from causing a screenshot arrow to be misjudged as dynamic, or a dynamic arrow as a screenshot, the recognition device and the mobile phone must be kept relatively still. A bracket is therefore used, with the recognition device placed at one end and the mobile phone at the other, to guarantee that the phone and the device remain relatively stationary.
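The corresponding width-variation check can be sketched as follows; the pixel threshold thr and the function name are illustrative assumptions.

```python
def is_screenshot_by_arrow(arrow_widths, thr=2.0):
    """Flag a screenshot when the detected arrow width barely changes over
    the sampled frames, i.e. max_w - min_w < thr. A live travel-card arrow
    animates, so its width fluctuates; the pixel threshold is illustrative."""
    if not arrow_widths:
        return False
    return max(arrow_widths) - min(arrow_widths) < thr
```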
To meet the real-time requirement, this embodiment further analyzes the acquired trip image data in real time and judges whether the current trip image data is a screenshot by combining the update time with the dynamic change of the arrow.
Example 3
In a further embodiment based on the first embodiment, when the text detection and recognition model performs text line detection and text recognition and the matched value is obtained through key-value pairs, shape-similar characters often cause misrecognition and therefore inaccurate final detection results. This embodiment therefore further performs shape-similar character error correction on the matched travel track and compares it with the information in the standard address library of the management background to obtain a more accurate travel track.
Specifically, the result obtained by preliminary matching is first read; next, the matching result is split at punctuation marks into an address string list; then, every address in the list is compared with the addresses in the standard address library; the matching accuracy is optimized according to the comparison result; finally, the more accurate travel track is output.
The optimization process is as follows. The comparison with the standard address library is obtained; if the result is consistent, the corresponding address is extracted and output as the standard address. Otherwise, for an inconsistent result, a candidate list is constructed: letting L be the length of the string to be compared, all addresses of length L are extracted from the standard address library and stored in the list. The difference between each address in the list and the string to be compared is then computed by position-by-position comparison, adding 1 to the difference whenever the two characters differ, and the address with the smallest difference is extracted. If exactly one result is extracted, it is taken directly as the corrected address; if more than one is extracted, a shape-similar character dictionary is built, the characters at the positions where each candidate differs from the string to be compared are extracted, and the shape-similar characters are substituted at the corresponding positions of the string to be compared. If the address information obtained after substitution is consistent with the standard address library, the correction is applied and the matching result to be processed is replaced with the corrected data.
In a preferred embodiment, the result recognized by the text detection and recognition model may contain shape-similar character errors, and the specific processing is as follows. First, the city address string ocr_address_str obtained by the OCR text detection and recognition model is split at punctuation marks to obtain the corresponding address string list ocr_address_list. Each address ocr_address in the list is then compared with the addresses in the standard address library; if the library contains a value identical to the ocr_address string, the corresponding address is extracted and output as the standard address. If it does not, let the length of ocr_address be L; the addresses of length L in the standard library are extracted into a constructed list std_add_L, and the difference between each address in std_add_L and ocr_address is computed by comparing the two strings position by position, adding 1 to the difference for every unequal character; the standard addresses with the smallest difference are extracted.
If only one result is extracted, that address is used directly as the correction to the result recognized by the text detection and recognition model. If several exist, a shape-similar character dictionary is constructed, the characters at the positions where each extracted standard address differs from ocr_address are taken, and the shape-similar characters are substituted into ocr_address; if the string after substitution is consistent with a standard address, the correction is applied, completing error correction of the recognition result of the text detection and recognition model.
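A condensed sketch of this correction procedure is given below; it implements the punctuation split, the exact-match check and the position-by-position (Hamming) difference, while the final shape-similar character dictionary step is deliberately omitted, so ambiguous ties are simply left uncorrected.

```python
import re


def correct_addresses(ocr_address_str, std_addresses):
    """Error-correct OCR'd addresses against a standard address library.

    Splits the OCR string at punctuation, keeps exact matches, and otherwise
    picks the same-length library address with the smallest character-wise
    difference; ambiguous ties are left uncorrected here.
    """
    corrected = []
    for addr in re.split(r"[,，.。;；、\s]+", ocr_address_str):
        if not addr:
            continue
        if addr in std_addresses:                 # exact hit in the library
            corrected.append(addr)
            continue
        candidates = [s for s in std_addresses if len(s) == len(addr)]
        if not candidates:                        # nothing comparable, keep as-is
            corrected.append(addr)
            continue
        diffs = [(sum(a != b for a, b in zip(addr, s)), s) for s in candidates]
        best = min(d for d, _ in diffs)
        matches = [s for d, s in diffs if d == best]
        corrected.append(matches[0] if len(matches) == 1 else addr)
    return corrected
```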
Example 4
In one embodiment, a trip-area-oriented identification system is provided for implementing the trip area identification method. The system specifically comprises the following modules: a database construction module, a data acquisition module, a model construction module, a data analysis module, a data comparison module and a data output module.
Specifically, the database construction module constructs a region database according to the storage requirement and stores the regions under the current real-time conditions; the data acquisition module acquires trip image data; the model construction module constructs a trip recognition model according to the analysis requirements; the data analysis module analyzes the text data in the trip image data with the trip recognition model according to the detection requirements; the data comparison module compares the recognition result of the data analysis module with the data in the region database; and the data output module outputs the comparison result of the data comparison module and the processing strategy generated from it.
The constructed trip recognition model further comprises an arrow detection model and a text detection and recognition model. During region recognition, the arrow detection model first locates the arrow in the received trip image data; when the arrow position meets the requirement, the text detection and recognition model performs text line detection and text recognition on the text in the image.
In a further embodiment, when the trip-area recognition system carries out the trip-area recognition method, the database construction module first builds a region database storing the current regions; next, the data acquisition module captures the trip image data in the current picture; then, the model construction module builds the trip recognition model and receives the trip image data acquired by the data acquisition module; the data analysis module then detects and analyzes the trip image data with the constructed model; finally, the data comparison module compares the detection and analysis result with the data in the region database, corresponding warning information is generated according to the comparison result, and the data output module outputs the comparison result and the warning information.
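A minimal orchestration sketch of this flow is shown below; the module boundaries are collapsed into one class, and the camera, region database (for example the RegionDatabase sketched in the first embodiment) and trip model interfaces are assumptions.

```python
class TripAreaRecognitionSystem:
    """Collapsed sketch of the module flow: acquisition, analysis, comparison
    and output, on top of a region database and a trip recognition model
    supplied by the caller (interfaces are assumptions)."""

    def __init__(self, region_db, camera, trip_model):
        self.region_db = region_db    # built by the database construction module
        self.camera = camera          # data acquisition module
        self.trip_model = trip_model  # built by the model construction module

    def process_frame(self):
        image = self.camera.capture()             # acquire trip image data
        track = self.trip_model.recognize(image)  # data analysis: list of addresses
        hits = [a for a in track if self.region_db.contains(a)]  # data comparison
        # Data output: release when no monitored region is hit, otherwise alert.
        return {"pass": not hits, "alert": hits}
```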
Example 5
In one embodiment, a trip zone oriented identification device is presented, the device comprising: a processor and a memory storing computer program instructions.
The processor reads and executes the computer program instructions to implement the trip area-oriented identification method.
Example 6
In one embodiment, a travel zone oriented identification medium is provided having computer program instructions stored thereon.
Wherein the computer program instructions, when executed by the processor, implement a trip zone oriented identification method.
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limiting the invention itself. Various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A trip-area-oriented identification method, characterized by comprising the following steps:
step 1, constructing a region database and storing the current region;
step 2, acquiring travel image data through a camera; the trip image data includes an indication arrow and a text expression box;
step 3, constructing a journey identification model and receiving journey image data;
step 4, carrying out region identification on the journey image data by utilizing the journey identification model;
step 5, comparing the identification result with data in a regional database;
step 6, if the comparison result shows that the area exists, confirming release; otherwise, generating an incomplete prompt, transmitting the identification result to a manager, and registering and filing it for subsequent management.
2. The trip-oriented regional identification method of claim 1, wherein the regional database updates the stored regional data after each preset time period has elapsed in order to meet regional real-time requirements.
3. The trip zone-oriented identification method of claim 1, wherein said trip identification model further comprises: an arrow detection model and a text detection recognition model;
the process of carrying out region identification on the travel image data by using the travel identification model specifically comprises the following steps:
step 4.1, receiving travel image data;
step 4.2, detecting and identifying an arrow in the travel image data by using the arrow detection model;
and 4.3, detecting text lines and recognizing characters of the texts in the journey image data by using the text detection and recognition model based on the arrow detection and recognition results.
4. A method for identifying a trip area according to claim 3, wherein, when the arrow detection model is used to perform arrow detection and identification, in order to ensure that the whole trip image data is within the range collected by the camera, the method specifically comprises the following steps:
step 4.2.1, recognizing an arrow direction in the travel image data by adopting an arrow detection model;
step 4.2.2, recording corresponding arrow parameters based on the identified arrow direction;
step 4.2.3, setting thresholds and comparing the arrow parameters against them;
step 4.2.4, outputting the analysis result of the arrow detection model when the threshold condition is met, and taking it as the precondition for text line detection and character recognition.
5. A method for identifying a trip area according to claim 3, wherein the text detection and identification model is used for text line detection and text identification of text in trip image data, and specifically comprising the following steps:
step 4.3.1, based on the result detected by the arrow detection model, adjusting the received travel image data to be in a preset orientation, and inputting a text detection recognition model;
step 4.3.2, for the result of the text detection and recognition model, completing data matching according to preset key-value pairs to obtain the required text data;
the data matching process comprises the following steps: text data following the keyword is automatically matched according to the detected keyword.
6. The method for identifying a trip area according to claim 3, wherein, in the process of text line detection and text recognition of the text in the trip image data by using the text detection recognition model, in order to avoid the influence of screenshot data on the detection result, real-time judgment of the acquired images is further realized by combining analysis of the update time and of the dynamic change of the arrow.
7. A method for identifying a trip area according to claim 3, wherein in order to improve the identification result of the text detection identification model, the process of identifying the area of the trip image data by using the trip identification model further comprises:
step 4.4, further performing shape-similar character error correction on the matched travel track, and comparing it with the information in the standard address library of the management background to obtain a more accurate travel track; specifically comprising the following steps:
step 4.4.1, reading a result obtained by preliminary matching;
step 4.4.2, dividing the read matching result by using punctuation marks to obtain an address character string list;
step 4.4.3, comparing all addresses in the address character string list with addresses in the standard address library;
step 4.4.4, optimizing matching accuracy according to the comparison result;
step 4.4.5, outputting a more accurate travel track;
the standard address library is used for storing all address information in the cognitive range.
8. A trip zone-oriented identification system for implementing a trip zone identification method as defined in any one of claims 1-7, comprising in particular the following modules:
the database construction module is arranged to construct a regional database according to application requirements;
the data acquisition module is arranged for acquiring current journey image data according to requirements;
the model construction module is used for constructing a journey recognition model according to the recognition analysis requirements;
the data analysis module is used for detecting and analyzing the journey image data by utilizing the journey identification model;
the data comparison module is used for comparing the identification result of the data analysis module with the data in the regional database;
the data output module is arranged to output the comparison result of the data comparison module and a processing strategy generated according to the comparison result;
the trip recognition model comprises: an arrow detection model and a text detection recognition model;
the arrow detection model is configured to identify arrow locations in the graph;
the text detection and recognition model is configured to perform text line detection and text recognition on text in the graph.
9. A trip zone oriented identification device, the device comprising:
a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the method of identifying a trip zone as claimed in any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon computer program instructions, which when executed by a processor, implement a method of identifying a trip zone according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211428714.3A | 2022-11-15 | 2022-11-15 | Trip-area-oriented identification method and system
Publications (1)
Publication Number | Publication Date
---|---
CN116188814A | 2023-05-30
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination