CN112766263A - Identification method for multi-layer stock control relation share graph - Google Patents


Info

Publication number
CN112766263A
Authority
CN
China
Prior art keywords
arrow
company
graph
many
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110083415.XA
Other languages
Chinese (zh)
Other versions
CN112766263B (en)
Inventor
张贝贝
仵晨伟
郭仲穗
郑浩然
魏嵬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202110083415.XA
Publication of CN112766263A
Application granted
Publication of CN112766263B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an identification method for a multi-layer stock-control-relationship share graph, comprising the following steps: step 1, inputting a share graph to be identified; step 2, extracting the coordinates of companies (individuals), arrows, lined arrows and percentages using a Faster R-CNN network; step 3, dividing the share graph to be identified into a plurality of single-layer one-to-many or many-to-one share graphs following the divide-and-conquer idea; step 4, for each one-to-many or many-to-one share graph, determining corner coordinates from the arrow coordinates, determining the arrow direction from the corner coordinates, dividing the companies (individuals) into pointing objects and pointed objects, binding the more numerous side one-to-one with the percentages, and recognizing the text of each company (individual) with an OCR (optical character recognition) method; and step 5, constructing a directed weighted stock-control-flow graph from the pointing relations. The invention solves the problem that raw share graphs in the prior art can hardly reflect a company's shareholding structure intuitively.

Description

Identification method for multi-layer stock control relation share graph
Technical Field
The invention belongs to the technical field of image recognition, and relates to a recognition method for a multilayer stock control relationship share graph.
Background
With the development of internet technology, the field of artificial intelligence has flourished, and related technologies and products occupy a growing share of people's daily lives. Image recognition is an important branch of artificial intelligence and the basis of many practical technologies, such as stereoscopic vision, motion analysis and data fusion, with important applications in navigation, weather forecasting, natural resource analysis, environmental monitoring, physiological lesion research and other fields. Recognizing and parsing complex images is an important area of artificial intelligence, and target recognition in images is already mature for features such as license plates, faces and pedestrians. Researchers therefore hope to recognize and parse more complex relationship images (such as share graphs), so that practitioners can abandon the traditional manual method of share analysis, grasp equity distribution efficiently and accurately, and improve work efficiency.
However, most existing share graphs come from companies' published annual or quarterly reports and from related software (such as Tianyancha); the pictures are cluttered, the equity structure of a company is hard to grasp intuitively, and analysis is rarely limited to a single graph or a single company, so the work is time-consuming, labor-intensive and hard to untangle. In addition, there is currently no research at home or abroad on recognizing share graphs with image recognition technology, nor any technology for parsing share relationship graphs.
Disclosure of Invention
The invention aims to provide a method for identifying a multi-layer stock control relationship share graph, solving the problem that raw share graphs in the prior art cannot intuitively reflect a company's shareholding structure.
The technical scheme adopted by the invention is as follows.
a method for identifying a multi-layer stock control relationship share graph comprises the following specific steps:
step 1, inputting a share graph to be identified of a multilayer stock control relationship;
step 2, extracting coordinates of companies (individuals), arrows with lines and percentages in the picture by adopting a Faster R-CNN network;
step 3, dividing the share graph to be identified into a plurality of single-layer one-to-many or many-to-one share graphs by using the coordinates of the arrowheads with lines according to the dividing and treating thought;
step 4, determining the corner coordinates of each single-layer one-to-many or many-to-one stock image according to the arrow coordinates, and determining the direction of the arrow according to the arrow corner coordinates; dividing a company (person) into a pointing object and a pointed object according to the direction of an arrow, and then binding the pointed object and the pointed object with more parties and percentages in a one-to-one mode; finally, recognizing characters in the pointing object and the pointed object by using an OCR recognition method;
and 5, constructing an object-arrow-percentage-pointed object-oriented stock control flow directional weighting graph according to the pointing relation obtained in the step 3.
The invention is also characterized in that:
the step 2 comprises the following steps:
step 2.1, collecting a large number of share graphs and manually labeling the companies (individuals), lined arrows and percentages in them to form a data set; the share graphs are manually divided into several single-layer one-to-many or many-to-one share graphs, and an arrow that extends beyond a single-layer one-to-many or many-to-one share graph is defined as a lined arrow;
step 2.2, building a VGG-16 network model, where VGG-16 comprises 13 convolution layers, 3 fully connected layers and 5 pooling layers;
step 2.3, training the VGG-16 network model on the data set;
and step 2.4, detecting the share graph to be recognized with the trained VGG-16 network model and outputting the detection result, namely the coordinates of the companies (individuals), arrows and percentages.
In step 2, the 13 convolution layers all use 3x3 convolution kernels with a stride of 1 and "same" padding, and each convolution layer uses a ReLU activation function; positive anchors and the corresponding bounding-box regression offsets are generated, and proposals are then computed;
the pooling layers all use 2x2 pooling kernels with a stride of 2 in max-pooling mode; the proposals are used to extract proposal features from the feature maps, which are sent to the subsequent fully connected and softmax layers for classification (i.e., determining what object each proposal is).
The step 3 is:
step 3.1, based on the coordinates of a lined arrow obtained in step 2, set the upper, lower, left and right bounds of the lined-arrow area to U, D, L and R respectively, then search the company (individual) name coordinates and expand the four bounds in turn, as follows:
Expanding the upper bound U: when the absolute difference between the upper bound U of the arrow area and the lower bound D' of a company (individual) name area is within the error μ, expand U to the upper bound U' of that name area. Expanding the lower bound D: when the absolute difference between the lower bound D of the arrow area and the upper bound U' of a company (individual) name area is within the error μ, expand D to the lower bound D' of that name area. Expanding the left bound L: find the group of company (individual) names whose lower bounds differ from U by at most the error μ, compute the difference between each name's left boundary and L, and expand L to the left boundary L' of the name area with the smallest difference. Expanding the right bound R: find the group of company (individual) names whose upper bounds differ from D by at most the error μ, compute the difference between each name's right boundary and R, and expand R to the right boundary R' of the name area with the smallest difference. The area bounded by U', D', L' and R' is the final expanded target range of the lined-arrow coordinates;
and step 3.2, traversing the coordinates of all lined arrows in the share graph and repeating step 3.1 until the full extent of every arrow has been expanded; the share graph to be identified is thereby divided into several single-layer one-to-many or many-to-one share graphs.
The error μ is in the range of 10-30 pixels.
Step 4 comprises the following operations for each single-layer one-to-many or many-to-one share graph:
step 4.1, for a single-layer one-to-many or many-to-one share graph, determine the corner coordinates from the arrow coordinates, and the arrow direction from the corner coordinates:
Let the three corner points of an arrow be A(x1, y1), B(x2, y2) and C(x3, y3). If |y1 - y2| is less than a given threshold e1, the corner points A and B are considered to lie on one horizontal line; then compare y3 with y1: if y3 > y1 the arrow is considered to point downward, and if y3 < y1 the arrow is considered to point upward. Traverse all arrow corner coordinates and judge the directions one by one;
step 4.2, divide the company (individual) names into pointing objects and pointed objects according to the arrow direction, and then bind the more numerous side one-to-one with the percentages. Because the input share graphs are all single-layer, the company names can be split into two groups by the size of their ordinates. Using the arrow direction obtained in step 4.1: if the arrow points upward, the group with the largest ordinate among the company (individual) coordinates is the pointed object; if the arrow points downward, the group with the smallest ordinate is the pointed object. Then bind each member of the more numerous side one-to-one with a percentage: let the minimum and maximum abscissas of a member's four corner coordinates be (xmin, xmax); find the percentage whose abscissa lies within (xmin, xmax) and bind the two in a specific data structure (such as a dictionary); then traverse the remaining members of the more numerous side and bind each with a percentage in the same way;
and 4.3, recognizing characters in the coordinates of the pointing object and the pointed object by utilizing an OCR technology.
The step 5 comprises the following steps:
step 5.1, create an empty directed graph G and add the company (individual) names obtained in step 4.3 to it in turn as nodes, yielding a basic directed graph G' that stores only nodes;
and step 5.2, on the basis of the directed graph G' from step 5.1, convert each pointing relation from step 4.2 into a triple [u, v, w], where u is the start point and represents the pointing object, v is the end point and represents the pointed object, and w is the weight and represents the shareholding percentage; add the converted triples as parameters to the directed graph G', finally forming the directed weighted stock-control-flow graph.
The invention has the following advantages:
The share graph is identified and parsed using the Faster R-CNN technique and the image recognition technology of a deep learning framework, overcoming the time-consuming, labor-intensive and hard-to-understand drawbacks of manual share analysis for individuals or companies, filling a gap in domestic and foreign research in this area, and providing an efficient and accurate method.
Drawings
FIG. 1 is a schematic diagram of the single-layer one-to-many or many-to-one share graph recognition and parsing method of the present invention;
FIG. 2 is a schematic view of the VGG-16 network structure of Faster R-CNN in the single-layer one-to-many or many-to-one share graph recognition and parsing method of the present invention;
FIG. 3 is the share graph input in embodiment 1 of the single-layer one-to-many or many-to-one share graph recognition and parsing method of the present invention;
FIG. 4 is the result obtained after performing step 3 in embodiment 1 of the single-layer one-to-many or many-to-one share graph recognition and parsing method of the present invention;
FIG. 5 is the complex network diagram finally obtained in embodiment 1 of the single-layer one-to-many or many-to-one share graph recognition and parsing method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a method for identifying a multi-layer stock control relationship share graph includes the following specific steps:
step 1, inputting a share graph of a multi-layer stock control relationship to be identified;
step 2, extracting the coordinates of the companies (individuals), lined arrows and percentages in the picture using a Faster R-CNN network;
step 3, dividing the share graph to be identified into a plurality of single-layer one-to-many or many-to-one share graphs using the lined-arrow coordinates, following the divide-and-conquer idea;
step 4, for each single-layer one-to-many or many-to-one share graph, determining the corner coordinates from the arrow coordinates, and the arrow direction from the corner coordinates; dividing the companies (individuals) into pointing objects and pointed objects according to the arrow direction, and then binding the more numerous side one-to-one with the percentages; finally, recognizing the text in the pointing and pointed objects with an OCR method;
and step 5, constructing a "pointing object - arrow - percentage - pointed object" directed weighted stock-control-flow graph from the pointing relations obtained in step 4.
In step 1, the share graph of the multi-layer stock control relationship to be identified needs to be scaled to a fixed size;
the step 2 comprises the following steps:
step 2.1, collecting a large number of share graphs and manually labeling the companies (individuals), lined arrows and percentages in them to form a data set; the share graphs are manually divided into several single-layer one-to-many or many-to-one share graphs, and an arrow that extends beyond a single-layer one-to-many or many-to-one share graph is defined as a lined arrow;
step 2.2, building a VGG-16 network model, where VGG-16 comprises 13 convolution layers, 3 fully connected layers and 5 pooling layers;
step 2.3, training the VGG-16 network model on the data set;
and step 2.4, detecting the share graph to be recognized with the trained VGG-16 network model and outputting the detection result, namely the coordinates of the companies (individuals), arrows and percentages.
In step 2, the 13 convolution layers all use 3x3 convolution kernels with a stride of 1 and "same" padding, and each convolution layer uses a ReLU activation function; positive anchors and the corresponding bounding-box regression offsets are generated, and proposals are then computed;
the pooling layers all use 2x2 pooling kernels with a stride of 2 in max-pooling mode; the proposals are used to extract proposal features from the feature maps, which are sent to the subsequent fully connected and softmax layers for classification (i.e., determining what object each proposal is).
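The layer counts above (13 convolution layers, 5 pooling layers) match the standard VGG-16 layout. A minimal Python sketch of that layer plan follows; the names `VGG16_CFG` and `count_layers` are illustrative assumptions, not part of the patent:

```python
# Standard VGG-16 feature-extractor configuration: numbers are the output
# channels of 3x3 convolutions (stride 1, "same" padding, each followed by
# ReLU); 'M' marks a 2x2 max-pooling layer with stride 2.
VGG16_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
             512, 512, 512, 'M', 512, 512, 512, 'M']

def count_layers(cfg):
    """Return (#convolution layers, #pooling layers) for a VGG-style config."""
    convs = sum(1 for v in cfg if v != 'M')
    pools = cfg.count('M')
    return convs, pools

print(count_layers(VGG16_CFG))  # (13, 5), matching the description above
```

The three fully connected layers of VGG-16 follow this feature extractor and are not shown in the sketch.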
The step 3 is:
step 3.1, based on the coordinates of a lined arrow obtained in step 2, set the upper, lower, left and right bounds of the lined-arrow area to U, D, L and R respectively, then search the company (individual) name coordinates and expand the four bounds in turn, as follows:
Expanding the upper bound U: when the absolute difference between the upper bound U of the arrow area and the lower bound D' of a company (individual) name area is within the error μ, expand U to the upper bound U' of that name area. Expanding the lower bound D: when the absolute difference between the lower bound D of the arrow area and the upper bound U' of a company (individual) name area is within the error μ, expand D to the lower bound D' of that name area. Expanding the left bound L: find the group of company (individual) names whose lower bounds differ from U by at most the error μ, compute the difference between each name's left boundary and L, and expand L to the left boundary L' of the name area with the smallest difference. Expanding the right bound R: find the group of company (individual) names whose upper bounds differ from D by at most the error μ, compute the difference between each name's right boundary and R, and expand R to the right boundary R' of the name area with the smallest difference. The area bounded by U', D', L' and R' is the final expanded target range of the lined-arrow coordinates;
and step 3.2, traversing the coordinates of all lined arrows in the share graph and repeating step 3.1 until the full extent of every arrow has been expanded; the share graph to be identified is thereby divided into several single-layer one-to-many or many-to-one share graphs.
The error μ is in the range of 10-30 pixels.
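The bound-expansion rule of step 3.1 can be sketched as follows. The function name, the (left, top, right, bottom) box format, and the simplification of taking the extreme boundary of each adjacent row (rather than the minimum-difference one) are assumptions for illustration:

```python
def expand_arrow_region(arrow_box, name_boxes, mu=15):
    """Expand a lined-arrow bounding box toward adjacent company-name boxes.

    Boxes are (left, top, right, bottom) in pixel coordinates, y downward.
    """
    L, U, R, D = arrow_box
    L2, U2, R2, D2 = L, U, R, D
    for l, u, r, d in name_boxes:
        if abs(U - d) <= mu:     # name area sits just above the arrow area
            U2 = min(U2, u)      # lift the upper bound to the name's top
            L2 = min(L2, l)      # widen the left bound toward the top row
        if abs(D - u) <= mu:     # name area sits just below the arrow area
            D2 = max(D2, d)      # drop the lower bound to the name's bottom
            R2 = max(R2, r)      # widen the right bound toward the bottom row
    return (L2, U2, R2, D2)

# Arrow between a shareholder box above and a company box below:
print(expand_arrow_region((50, 40, 150, 100),
                          [(40, 10, 90, 35), (100, 105, 160, 130)]))
# (40, 10, 160, 130) -- the region now covers both name boxes
```

Repeating this over every lined arrow, as step 3.2 prescribes, carves the graph into single-layer sub-graphs.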
Step 4 comprises the following operations for each single-layer one-to-many or many-to-one share graph:
step 4.1, for a single-layer one-to-many or many-to-one share graph, determine the corner coordinates from the arrow coordinates, and the arrow direction from the corner coordinates:
Let the three corner points of an arrow be A(x1, y1), B(x2, y2) and C(x3, y3). If |y1 - y2| is less than a given threshold e1, the corner points A and B are considered to lie on one horizontal line; then compare y3 with y1: if y3 > y1 the arrow is considered to point downward, and if y3 < y1 the arrow is considered to point upward. Traverse all arrow corner coordinates and judge the directions one by one;
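The direction rule of step 4.1 can be sketched in a few lines; the function name and threshold default are illustrative assumptions, and image coordinates (y increasing downward) are assumed, consistent with "y3 > y1 means downward":

```python
def arrow_direction(A, B, C, e1=5):
    """A and B are the barb corners, C the tip; each is an (x, y) pair."""
    (x1, y1), (x2, y2), (x3, y3) = A, B, C
    if abs(y1 - y2) < e1:        # A and B lie on one horizontal line
        if y3 > y1:
            return "down"        # tip below the barbs (y grows downward)
        if y3 < y1:
            return "up"          # tip above the barbs
    return "unknown"             # barbs not level: not a vertical arrow

print(arrow_direction((10, 20), (30, 21), (20, 60)))  # down
```

Looping this over all detected arrow corner triples gives the per-arrow directions used in step 4.2.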
step 4.2, divide the company (individual) names into pointing objects and pointed objects according to the arrow direction, and then bind the more numerous side one-to-one with the percentages. Because the input share graphs are all single-layer, the company names can be split into two groups by the size of their ordinates. Using the arrow direction obtained in step 4.1: if the arrow points upward, the group with the largest ordinate among the company (individual) coordinates is the pointed object; if the arrow points downward, the group with the smallest ordinate is the pointed object. Then bind each member of the more numerous side one-to-one with a percentage: let the minimum and maximum abscissas of a member's four corner coordinates be (xmin, xmax); find the percentage whose abscissa lies within (xmin, xmax) and bind the two in a specific data structure (such as a dictionary); then traverse the remaining members of the more numerous side and bind each with a percentage in the same way;
and 4.3, recognizing characters in the coordinates of the pointing object and the pointed object by utilizing an OCR technology.
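The one-to-one binding of step 4.2 can be sketched as follows, using a dictionary as the "specific data structure" the text suggests; the function name and box layout are illustrative assumptions:

```python
def bind_percentages(object_boxes, percents):
    """Bind each box on the more numerous side to the percentage whose
    x-coordinate falls inside the box's horizontal extent.

    object_boxes: {name: (xmin, ymin, xmax, ymax)} for the larger side;
    percents: list of (x, text) pairs. Returns {name: percent_text}.
    """
    bound = {}
    for name, (xmin, _, xmax, _) in object_boxes.items():
        for x, text in percents:
            if xmin <= x <= xmax:        # percentage sits within the box span
                bound[name] = text       # store the pair in a dictionary
                break
    return bound

print(bind_percentages({"A": (0, 0, 50, 20), "B": (60, 0, 110, 20)},
                       [(25, "30%"), (80, "70%")]))
# {'A': '30%', 'B': '70%'}
```

Each bound pair, together with the single object on the other side, later becomes one [u, v, w] triple in step 5.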
The step 5 comprises the following steps:
step 5.1, create an empty directed graph G and add the company (individual) names obtained in step 4.3 to it in turn as nodes, yielding a basic directed graph G' that stores only nodes;
and step 5.2, on the basis of the directed graph G' from step 5.1, convert each pointing relation from step 4.2 into a triple [u, v, w], where u is the start point and represents the pointing object, v is the end point and represents the pointed object, and w is the weight and represents the shareholding percentage; add the converted triples as parameters to the directed graph G', finally forming the directed weighted stock-control-flow graph.
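Step 5 can be sketched as follows. The embodiment below uses the NetworkX modeling tool, where an nx.DiGraph with weighted edges would be the natural fit; this dependency-free sketch uses a plain adjacency dictionary instead, and all names and figures are illustrative assumptions:

```python
def build_holding_graph(names, triples):
    """Build a directed weighted graph from [u, v, w] triples.

    u = pointing object (shareholder), v = pointed object (held company),
    w = shareholding percentage. Returns {node: {successor: weight}}.
    """
    G = {name: {} for name in names}    # step 5.1: nodes only
    for u, v, w in triples:             # step 5.2: add weighted edges
        G[u][v] = w
    return G

G = build_holding_graph(
    ["Company A", "Company B", "Person C"],
    [["Person C", "Company A", 60.0],   # Person C holds 60% of Company A
     ["Company A", "Company B", 51.0]]) # Company A holds 51% of Company B
print(G["Person C"]["Company A"])       # 60.0
```

With NetworkX the same structure would be `nx.DiGraph()` plus `add_weighted_edges_from(triples)`, which also gives the visualization used in the embodiment.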
Example 1
Step 1 is executed: the share graph to be identified, shown in FIG. 3, is input;
Step 2 is executed. The data sets come mainly from the China Bidding network and the Juchao (cninfo) information network, with a total size exceeding 100 GB. Because a single equity image contains features of multiple target classes, the original data set of 3200 images was expanded to 11,000 images by augmentation with the OpenCV library (e.g., flipping), and the number of target instances per class exceeds 60,000. The OCR step calls an existing mature OCR interface (such as the Baidu OCR API) to improve the recognition rate;
Step 3 is executed with μ = 15; the output is shown in FIG. 4. As can be seen, each frame in the figure is a one-layer one-to-many or many-to-one share graph;
and steps 4 and 5 are executed; the complex network expressing the pointing relations is a visual network built on graph theory and the complex-network modeling tool NetworkX, and the finally obtained directed weighted stock-control-flow graph is shown in FIG. 5.

Claims (7)

1. A method for identifying a multi-layer stock control relationship share graph is characterized by comprising the following specific steps:
step 1, inputting a share graph of a multi-layer stock control relationship to be identified;
step 2, extracting the coordinates of the companies (individuals), lined arrows and percentages in the picture using a Faster R-CNN network;
step 3, dividing the share graph to be identified into a plurality of single-layer one-to-many or many-to-one share graphs using the lined-arrow coordinates, following the divide-and-conquer idea;
step 4, for each single-layer one-to-many or many-to-one share graph, determining the corner coordinates from the arrow coordinates, and the arrow direction from the corner coordinates; dividing the companies (individuals) into pointing objects and pointed objects according to the arrow direction, and then binding the more numerous side one-to-one with the percentages; finally, recognizing the text in the pointing and pointed objects with an OCR method;
and step 5, constructing a "pointing object - arrow - percentage - pointed object" directed weighted stock-control-flow graph from the pointing relations obtained in step 4.
2. The identification method for the multi-layer stock control relationship share graph as claimed in claim 1, wherein the step 2 comprises:
step 2.1, collecting a large number of share graphs and manually labeling the companies (individuals), lined arrows and percentages in them to form a data set; the share graphs are manually divided into several single-layer one-to-many or many-to-one share graphs, and an arrow that extends beyond a single-layer one-to-many or many-to-one share graph is defined as a lined arrow;
step 2.2, building a VGG-16 network model, where VGG-16 comprises 13 convolution layers, 3 fully connected layers and 5 pooling layers;
step 2.3, training the VGG-16 network model on the data set;
and step 2.4, detecting the share graph to be recognized with the trained VGG-16 network model and outputting the detection result, namely the coordinates of the companies (individuals), arrows and percentages.
3. The identification method for the multi-layer stock control relationship share graph as claimed in claim 1, wherein in step 2 the 13 convolution layers all use 3x3 convolution kernels with a stride of 1 and "same" padding, and each convolution layer uses a ReLU activation function; positive anchors and the corresponding bounding-box regression offsets are generated, and proposals are then computed;
the pooling layers all use 2x2 pooling kernels with a stride of 2 in max-pooling mode; the proposals are used to extract proposal features from the feature maps, which are sent to the subsequent fully connected and softmax layers for classification (i.e., determining what object each proposal is).
4. The identification method for the multilayer stock control relationship share graph as claimed in claim 1, wherein the step 3 is:
step 3.1, setting the upper bound, the lower bound, the left bound and the right bound of the area with the line arrow as U, D, L, R based on the coordinate of the certain line arrow obtained in step 2, and further sequentially searching and expanding the coordinate of the company (personal) name according to the four bounds, wherein the specific operation is as follows:
and expanding the upper bound U: when the absolute value of the difference between the upper bound U of the arrowed area and the lower bound D 'of the company (individual) name area is within the error mu, the upper bound U of the arrowed area is expanded to the upper bound U' of the company (individual) name area; expanding the lower bound D: when the absolute value of the difference between the lower bound D of the arrowed area and the upper bound U 'of the company (individual) name area is within the error mu, the lower bound D of the arrowed area is expanded to the lower bound D' of the company (individual) name area; and (3) expanding L: finding a group of company (personal) names under the condition that the difference between the lower boundary of the company (personal) name area and U is within the error mu range, then finding the difference between the left boundary of the group of company (personal) names and L, and expanding L into the left boundary L' of the company (personal) name area with the minimum difference; expanding R: finding a group of company (personal) names under the condition that the difference between the upper bound of the company (personal) name area and D is within the error mu range, then finding the difference between the right boundary of the group of company (personal) names and R, and expanding R into the right boundary R' of the company (personal) name area with the minimum difference; the area formed by the upper boundary U ', the lower boundary D', the left boundary L 'and the right boundary R' is the final expanded target range of the arrow coordinate with the line;
step 3.2, traversing the lined-arrow coordinates of the entire stock image and repeating step 3.1 until the full coordinate extent of each arrow's course has been expanded, finally dividing the stock image to be identified into a plurality of single-layer one-to-many or many-to-one stock images.
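The bound-expansion rule of steps 3.1-3.2 can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation: boxes are `(upper, lower, left, right)` tuples in pixel coordinates with y growing downward, and the tolerance value `mu=20` is an assumption within the 10-30 px range given in claim 5.

```python
def expand_region(arrow_box, name_boxes, mu=20):
    """Expand an arrowed region (U, D, L, R) to absorb adjacent name boxes.

    arrow_box:  (U, D, L, R) bounds of the lined-arrow area.
    name_boxes: list of (upper, lower, left, right) company/individual name boxes.
    mu:         pixel tolerance (the error mu of the claim).
    """
    U, D, L, R = arrow_box
    # Name boxes just above the arrow: lower edge within mu of U.
    top_boxes = [b for b in name_boxes if abs(U - b[1]) <= mu]
    # Name boxes just below the arrow: upper edge within mu of D.
    bottom_boxes = [b for b in name_boxes if abs(D - b[0]) <= mu]
    if top_boxes:
        U = min(b[0] for b in top_boxes)                    # new upper bound U'
        L = min(top_boxes, key=lambda b: abs(b[2] - L))[2]  # left bound closest to L
    if bottom_boxes:
        D = max(b[1] for b in bottom_boxes)                 # new lower bound D'
        R = min(bottom_boxes, key=lambda b: abs(b[3] - R))[3]  # right bound closest to R
    return (U, D, L, R)
```

Repeating this over every lined arrow (step 3.2) carves the full chart into single-layer one-to-many or many-to-one sub-graphs.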
5. The identification method for the multi-layer stock control relationship share graph as claimed in claim 4, wherein the error μ is in a range of 10-30 pixels.
6. The method according to claim 1, wherein the step 4 comprises the following operations for each single-layer one-to-many or many-to-one stock graph:
step 4.1, determining corner point coordinates according to arrow coordinates for a single-layer one-to-many or many-to-one stock picture, and determining the direction of an arrow according to the arrow corner point coordinates:
let the three corner points of a given arrow be A(x1, y1), B(x2, y2) and C(x3, y3): if the absolute difference |y1 - y2| is less than a given threshold ε1, the corner points A and B are considered to lie on a horizontal line; the magnitudes of y3 and y1 are then compared: if y3 > y1 the arrow is considered to point downwards, and if y3 < y1 the arrow is considered to point upwards; all arrow corner-point coordinates are traversed and the directions judged one by one;
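The corner-point test of step 4.1 can be sketched directly. Note that image y-coordinates grow downward, so a larger y3 means the tip sits below the base and the arrow points down; the threshold value `eps1=5` is an illustrative assumption.

```python
def arrow_direction(A, B, C, eps1=5):
    """Classify an arrow from its three corner points.

    A, B are candidate base corners, C the candidate tip;
    returns 'down', 'up', or None if A and B are not roughly horizontal.
    """
    (x1, y1), (x2, y2), (x3, y3) = A, B, C
    if abs(y1 - y2) >= eps1:          # A, B not on a horizontal line
        return None
    return 'down' if y3 > y1 else 'up'
```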
step 4.2, dividing the company (individual) names into pointing objects and pointed objects according to the direction of the arrows, and then binding the percentages one-to-one with whichever of the pointing and pointed objects is the more numerous; since the input stock graphs are all single-layer, the company names can be divided into two groups by the size of their ordinates: according to the arrow direction obtained in step 4.1, if the arrow points upwards, the group with the largest ordinates among the company (individual) coordinates is the pointed objects, and if the arrow points downwards, the group with the smallest ordinates is the pointed objects; the percentages are then bound one-to-one to the more numerous of the pointing and pointed objects: let the minimum and maximum abscissae of the four corner coordinates of one such object be (xmin, xmax); the percentage whose abscissa lies within (xmin, xmax) is found and the two are bound in a specific data structure (such as a dictionary); the remaining objects of the more numerous side are traversed and likewise bound one-to-one with percentages;
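The interval test that binds each percentage label to a name box in step 4.2 can be sketched as below. The box and label data are illustrative assumptions; the claim only fixes the rule that a percentage belongs to the box whose horizontal span (xmin, xmax) contains its abscissa, stored in a dictionary.

```python
def bind_percentages(name_boxes, percent_labels):
    """Bind percentage labels to name boxes by horizontal span.

    name_boxes:     {name: (x_min, x_max)} horizontal extents of the
                    more numerous side (pointing or pointed objects).
    percent_labels: list of (x, text) percentage positions and values.
    """
    bound = {}
    for name, (x_min, x_max) in name_boxes.items():
        for x, pct in percent_labels:
            if x_min <= x <= x_max:     # abscissa falls inside the box's span
                bound[name] = pct
                break
    return bound
```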
step 4.3, recognizing the characters within the coordinates of the pointing objects and pointed objects by means of OCR technology.
7. The identification method for the multi-layer stock control relationship share graph as claimed in claim 1, wherein said step 5 comprises:
step 5.1, creating an empty directed graph G and adding the company (individual) names obtained in step 4.3 in turn as nodes to the directed graph G, obtaining a basic directed graph G' storing only nodes;
step 5.2, on the basis of the directed graph G' of step 5.1, converting each pointing relationship of step 4.2 into a triple [u, v, w], where u is the start point and denotes the pointing object, v is the end point and denotes the pointed object, and w is the weight and denotes the shareholding percentage; the converted triples are added as parameters to the directed graph G', finally forming a directed weighted graph G'' of the stock control flow.
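Steps 5.1-5.2 can be sketched with plain dictionaries, since the claim does not fix a particular graph library: nodes are company/individual names, and each pointing relationship becomes a triple (u, v, w) stored as a weighted directed edge, with w the shareholding percentage. The example names and weights are illustrative.

```python
def build_holding_graph(names, triples):
    """Build a directed weighted graph of the stock control flow.

    names:   company/individual names (graph nodes).
    triples: (u, v, w) relations: u points to v with share percentage w.
    Returns {u: {v: w, ...}} adjacency mapping.
    """
    graph = {name: {} for name in names}   # G': nodes only, no edges yet
    for u, v, w in triples:                # add each weighted directed edge
        graph[u][v] = w
    return graph                           # G'': directed weighted graph
```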
CN202110083415.XA 2021-01-21 2021-01-21 Identification method for multi-layer control stock relationship share graphs Active CN112766263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110083415.XA CN112766263B (en) 2021-01-21 2021-01-21 Identification method for multi-layer control stock relationship share graphs


Publications (2)

Publication Number Publication Date
CN112766263A true CN112766263A (en) 2021-05-07
CN112766263B CN112766263B (en) 2024-02-02

Family

ID=75703627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110083415.XA Active CN112766263B (en) 2021-01-21 2021-01-21 Identification method for multi-layer control stock relationship share graphs

Country Status (1)

Country Link
CN (1) CN112766263B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009548A (en) * 2018-01-09 2018-05-08 贵州大学 A kind of Intelligent road sign recognition methods and system
WO2019192397A1 (en) * 2018-04-04 2019-10-10 华中科技大学 End-to-end recognition method for scene text in any shape
CN111626292A (en) * 2020-05-09 2020-09-04 北京邮电大学 Character recognition method of building indication mark based on deep learning technology
CN111782772A (en) * 2020-07-24 2020-10-16 平安银行股份有限公司 Text automatic generation method, device, equipment and medium based on OCR technology


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
REN Ming; XU Guang; WANG Wenxiang: "Research on entity relation extraction from genealogy texts", Journal of Chinese Information Processing, no. 06 *
ZHANG Xinfeng, SHEN Lansun: "Pattern recognition and its applications in image processing", Measurement & Control Technology, no. 05 *
DU Enyu; ZHANG Ning; LI Yandi: "Multi-classification method for lane guide arrows based on adaptive block-coded SVM", Acta Optica Sinica, no. 10 *
MEI Jixia; LI Wei: "Game analysis of the relationship between controlling shareholders and minority shareholders", Journal of Shihezi University (Philosophy and Social Sciences), no. 04 *

Also Published As

Publication number Publication date
CN112766263B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN108564097B (en) Multi-scale target detection method based on deep convolutional neural network
CN110532900B (en) Facial expression recognition method based on U-Net and LS-CNN
CN111444821A (en) Automatic identification method for urban road signs
CN109635694B (en) Pedestrian detection method, device and equipment and computer readable storage medium
CN111489358A (en) Three-dimensional point cloud semantic segmentation method based on deep learning
CN110084131A (en) A kind of semi-supervised pedestrian detection method based on depth convolutional network
CN104573669A (en) Image object detection method
CN110222767B (en) Three-dimensional point cloud classification method based on nested neural network and grid map
CN109783666A (en) A kind of image scene map generation method based on iteration fining
CN110175248B (en) Face image retrieval method and device based on deep learning and Hash coding
CN112801146A (en) Target detection method and system
CN109299303B (en) Hand-drawn sketch retrieval method based on deformable convolution and depth network
CN111191664A (en) Training method of label identification network, label identification device/method and equipment
CN114677687A (en) ViT and convolutional neural network fused writing brush font type rapid identification method
CN109360179A (en) A kind of image interfusion method, device and readable storage medium storing program for executing
CN113807347A (en) Kitchen waste impurity identification method based on target detection technology
Qiao et al. A weakly supervised semantic segmentation approach for damaged building extraction from postearthquake high-resolution remote-sensing images
CN114463837A (en) Human behavior recognition method and system based on self-adaptive space-time convolution network
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN110321803A (en) A kind of traffic sign recognition method based on SRCNN
CN107368832A (en) Target detection and sorting technique based on image
CN111199199B (en) Action recognition method based on self-adaptive context area selection
CN112766263B (en) Identification method for multi-layer control stock relationship share graphs
Ling et al. A facial expression recognition system for smart learning based on YOLO and vision transformer
CN112766262B (en) Identification method for single-layer one-to-many and many-to-one share graphs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant