CN109543716B - K-line form image identification method based on deep learning - Google Patents

K-line form image identification method based on deep learning

Info

Publication number
CN109543716B
CN109543716B
Authority
CN
China
Prior art keywords: network, line form, financial, input, taking
Prior art date
Legal status
Active
Application number
CN201811238452.8A
Other languages
Chinese (zh)
Other versions
CN109543716A (en)
Inventor
张智军
江荣埻
颜子毅
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811238452.8A priority Critical patent/CN109543716B/en
Publication of CN109543716A publication Critical patent/CN109543716A/en
Application granted granted Critical
Publication of CN109543716B publication Critical patent/CN109543716B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/29: Graphical models, e.g. Bayesian networks

Abstract

The invention discloses a K-line (candlestick) form image recognition method based on deep learning, comprising the following steps: 1) taking a financial K-line form image to be recognized, together with the coordinates corresponding to the form, as the input of a neural network containing multiple convolutional layers; 2) taking the output of the convolutional layers of step 1) as the input of a region proposal network and training the region proposal network; 3) applying region-of-interest pooling to the output of the region proposal network of step 2); 4) taking the region-of-interest pooling result of step 3) as the input of the Faster R-CNN detection network; 5) the Faster R-CNN detection network of step 4) finally generates the position information and recommendation score of each proposal box. The method overcomes the difficulty that existing financial quantification programs can hardly express the financial K-line form features that analysts derive from experience; it can learn the financial K-line forms an analyst wants to recognize and can be used for real-time recognition of images containing those form features.

Description

K-line form image identification method based on deep learning
Technical Field
The invention relates to the technical field of image recognition, in particular to a K-line form image recognition method based on deep learning.
Background
Morphological identification of K-lines and their indicators is an important part of financial quantitative investment analysis. Its accuracy directly affects the odds of a trade and determines the feasibility of a quantitative program. However, many forms (such as the W-bottom form, the entangled center, and trend and consolidation forms) can only be grasped intuitively by a financial analyst and are hard to state explicitly. Because the identification of financial K-line forms is inherently ambiguous, form recognition must be decoupled from fixed quantization procedures based on time series and instead be learned from experience, so that it can handle form features that cannot be handed down verbatim and are difficult to express uniformly in a fixed program. Research in deep learning theory focuses mainly on algorithms; these algorithms are rarely applied in the financial investment field, and an independent, systematic theoretical analysis framework is still lacking at this exploratory stage. Recognition of K-line form images based on neural networks hardly appears in the literature, and few people study it for practical financial quantification. Yet the combinations shown in K-line form images, such as different forms within a K-line chart, or a K-line chart together with other data such as trading volume and trading indicators, are an important basis for most investors' decisions. Time-series information alone may not suffice to reflect the trading situation; time and space must be combined, and it is then that recognition of K-line form images becomes particularly important.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a K-line form image recognition method based on deep learning. The method overcomes the difficulty that existing financial quantification programs can hardly express the financial K-line form features that analysts derive from experience; it can learn the financial K-line forms an analyst wants to recognize and can be used for real-time recognition of images containing those form features.
The purpose of the invention can be realized by the following technical scheme:
a K-line shape image identification method based on deep learning comprises the following steps:
1) the method comprises the steps of taking a financial K line form image to be identified and coordinates corresponding to the form as input of a neural network, and inputting the input into the neural network containing a multilayer convolution layer;
2) taking the output of the convolution layer in the step 1) as the input of a region generation network, and carrying out region generation network training;
3) pooling the output of the area generation network of step 2) as an area of interest;
4) taking the region-of-interest pooling result of the step 3) as the input of a fast-RCNN detection network;
5) and finally generating the position information and the recommendation score of the recommendation box by the fast-RCNN detection network in the step 4).
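The patent describes this pipeline only at the architectural level. As a hedged illustration, the following minimal sketch wires the five steps together using an off-the-shelf Faster R-CNN. It assumes Python with PyTorch and torchvision, neither of which is named in the patent; the file path, class count and score threshold are hypothetical. Note also that torchvision's fasterrcnn_resnet50_fpn adds a feature pyramid network on top of the ResNet-50 backbone, which the patent does not mention, so this is an approximation rather than the patented method itself.

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms import functional as F

# Step 1: a chart image containing the financial K-line form to be recognized.
image = Image.open("kline_chart.png").convert("RGB")  # hypothetical file
x = F.to_tensor(image)  # CHW float tensor in [0, 1]

# Steps 2-5: the model internally runs the shared convolutional layers,
# the region proposal network, RoI pooling, and the detection head,
# returning proposal boxes with position information and scores.
model = fasterrcnn_resnet50_fpn(num_classes=2)  # background + "form present"
model.eval()
with torch.no_grad():
    pred = model([x])[0]

for box, score in zip(pred["boxes"], pred["scores"]):
    if score > 0.5:  # hypothetical confidence threshold
        print(f"form at {box.tolist()}, score {score:.3f}")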
Further, the financial K-line form image is an image formed by combining one or more K-lines and their indicators with other financial data indicators.
Further, the specific process of training the region proposal network in step 2) is as follows: the output of the convolutional layers of step 1) is taken as the input of the region proposal network; windows of fixed size slide over the last convolutional feature map, each window outputting a feature vector of fixed dimension and performing coordinate regression and classification for 9 candidate regression boxes; to recognize objects of different sizes, the feature map is windowed at different scales. Training data are generated by first judging whether an anchor's coverage of the ground truth exceeds a threshold, in which case the current anchor is labeled as containing the object; if no coverage ratio exceeds the threshold, the anchor with the largest coverage ratio is labeled as containing the object. The loss function of the region proposal network is defined as:
$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$$

where the subscript i indexes the samples in the mini-batch of training samples, $p_i$ is the predicted probability of the target, $p_i^* = 1$ if the sample is a positive example and $p_i^* = 0$ otherwise, $t_i$ is the vector of four parameters of the predicted bounding box, and $t_i^*$ is the parameter vector of the corresponding ground truth. The specific calculation is as follows:

$$t_x = (x - x_a)/w_a, \quad t_y = (y - y_a)/h_a,$$
$$t_w = \log(w/w_a), \quad t_h = \log(h/h_a),$$
$$t_x^* = (x^* - x_a)/w_a, \quad t_y^* = (y^* - y_a)/h_a,$$
$$t_w^* = \log(w^*/w_a), \quad t_h^* = \log(h^*/h_a),$$

where x, y, w and h denote the center coordinates, width and height of the proposal box predicted by the proposal network; subscript a and superscript * denote the anchor box and the ground-truth box, respectively; $N_{cls}$ is the mini-batch size, $N_{reg}$ is the number of anchors, $L_{cls}$ is the cross-entropy loss, and $L_{reg}$ is the smooth L1 loss, defined as:

$$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$
where x is the difference between the target value and the regression value.
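The parameterization and the smooth L1 term above translate directly into code. The following is a minimal NumPy sketch for illustration; the function names and box values are invented for this example and do not come from the patent:

import numpy as np

def encode_box(box, anchor):
    """Return (t_x, t_y, t_w, t_h) for a box relative to an anchor.
    Both boxes are given as (center_x, center_y, width, height)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha,
                     np.log(w / wa), np.log(h / ha)])

def smooth_l1(x):
    """Elementwise smooth L1 of x = target minus regression value."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

# Regression loss for one positive anchor: smooth L1 summed over the
# four box parameters, using the ground-truth box as target.
anchor = (50.0, 50.0, 32.0, 32.0)
t_pred = encode_box((52.0, 48.0, 30.0, 36.0), anchor)  # predicted box
t_star = encode_box((55.0, 47.0, 34.0, 35.0), anchor)  # ground-truth box
print(smooth_l1(t_star - t_pred).sum())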
Further, in step 3), the output of the region proposal network of step 2) is pooled as the region of interest: a list of candidate regions of interest (RoIs) is obtained from the region proposal network (RPN), the features of all candidates are extracted through the convolutional neural network, and the subsequent classification and regression are performed.
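For illustration, region-of-interest pooling is available as a ready-made operator in torchvision; using it here is an assumption of this sketch, not something the patent specifies, and the shapes and boxes below are hypothetical:

import torch
from torchvision.ops import roi_pool

# Hypothetical feature map: batch of 1, 256 channels, 32x32 spatial grid,
# as produced by shared convolutional layers from a 512x512 input image.
features = torch.randn(1, 256, 32, 32)

# Candidate regions from the RPN in (batch_index, x1, y1, x2, y2) format,
# given in input-image coordinates.
rois = torch.tensor([[0, 10.0, 20.0, 200.0, 180.0],
                     [0, 50.0, 60.0, 300.0, 400.0]])

# spatial_scale maps image coordinates onto the 32x32 feature map (32/512).
pooled = roi_pool(features, rois, output_size=(7, 7), spatial_scale=32 / 512)
print(pooled.shape)  # torch.Size([2, 256, 7, 7]): fixed-size features per RoI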
Compared with the prior art, the invention has the following advantages and beneficial effects:
the K-line form image recognition method based on deep learning provided by the invention promotes the recognition of the K-line form to an image recognition layer, more truly simulates the disk surface data seen by a common stock investor, thereby being capable of very intuitively researching the information seen by the disk surface, meanwhile, the successful application of the method can subvert the quantization mode of the current financial K-line form, the financial K-line form is not required to be interpreted by a code language, a fixed program is not required to be used for recognizing the financial K-line form, the purposes of learning and automatic recognition of the financial K-line form can be achieved only by taking the picture containing the form and the corresponding coordinate position as the input of a neural network, and the financial K-line form with lower misjudgment rate and lower omission rate can be learned under a small number of training samples.
Drawings
Fig. 1 is a flowchart of the K-line form image recognition method based on deep learning according to an embodiment of the present invention.
FIG. 2 is a block diagram of the Faster R-CNN network according to an embodiment of the present invention.
FIG. 3 is a flow chart of a testing phase according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example:
the embodiment provides a method for recognizing a K-line form image based on deep learning, and a flowchart of the method is shown in fig. 1, and the method comprises the following steps:
1) the method comprises the steps of taking a financial K line form image to be identified and coordinates corresponding to the form as input of a neural network, and inputting the input into the neural network containing a multilayer convolution layer;
2) taking the output of the convolution layer in the step 1) as the input of a region generation network, and carrying out region generation network training;
3) pooling the output of the area generation network of step 2) as an area of interest;
4) taking the region-of-interest pooling result of the step 3) as the input of a fast-RCNN detection network;
5) finally generating the position information and recommendation score of the recommendation box by the fast-RCNN detection network in the step 4) (the test flow is shown in FIG. 3).
The neural network framework used in this embodiment, Faster R-CNN (shown in FIG. 2), has three main parts: a candidate region proposal network (RPN), region proposal network training, and joint training.
The candidate region proposal network takes an image as input and outputs a batch of rectangular candidate regions. It comprises feature extraction, candidate regions (anchors), window classification and position refinement. Feature extraction uses several convolutional layers; in this embodiment the ResNet-50 residual neural network is used directly as the convolutional layers. The anchor is the core of the RPN: given a reference window size, nine candidate windows of different sizes are obtained from the scale multiples and aspect ratios (an illustrative sketch follows this paragraph). The output of conv4_x is the portion shared by the RPN and region-of-interest pooling (RoI pooling), while conv5_x operates on the feature map after RoI pooling; a final global average pooling yields 2048-dimensional features used for classification and box regression, respectively. The classification part outputs the probabilities of being target and non-target, and the box regression part outputs the four parameters of a box: its center coordinates x and y, width w and height h.
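The patent does not list the exact scale multiples and aspect ratios, so the sketch below assumes the common three-scale, three-ratio scheme to enumerate the nine candidate windows at one feature-map position; base size, scales and ratios are illustrative values:

import numpy as np

def anchors_at(cx, cy, base=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Nine anchor boxes (x1, y1, x2, y2) centered at (cx, cy).
    base*scale sets the square side; ratio is height/width."""
    boxes = []
    for s in scales:
        area = float(base * s) ** 2
        for r in ratios:
            w = np.sqrt(area / r)  # keep area fixed while varying h/w
            h = w * r
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

print(anchors_at(128, 128).round(1))  # 9 rows: 3 scales x 3 aspect ratios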
The purpose of region proposal network training is to screen the samples belonging to the labels through a cost function for learning, minimizing both the classification error and the window position deviation of foreground samples.
The joint training comprises four steps: 1) train the RPN alone, loading the network parameters from a pre-trained model; 2) train the Faster R-CNN detection network alone, taking the candidate regions output by the RPN in the first step as its input; 3) retrain the RPN with the parameters of the shared part of the network fixed, updating only the parameters unique to the RPN; 4) use the RPN results to fine-tune the detection network again, with the parameters of the shared part fixed, updating only the parameters unique to the detection network (a sketch of this parameter freezing follows).
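Steps 3) and 4) hinge on freezing the shared convolutional layers. torchvision's reference implementation trains Faster R-CNN jointly rather than by this four-step alternation, but the freezing itself can be sketched as follows; the assumption of PyTorch and the torchvision module names (backbone, rpn, roi_heads) are not from the patent:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=2)

# Step 3 analogue: fix the shared convolutional backbone and update only
# the RPN-specific parameters.
for p in model.backbone.parameters():
    p.requires_grad = False
for p in model.rpn.parameters():
    p.requires_grad = True
for p in model.roi_heads.parameters():  # detection head untouched this step
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
# Run the usual training loop over `trainable` only; step 4 swaps the sets
# (freeze backbone and RPN, update roi_heads).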
The identification of K-line form images is divided into recognition of single K-line forms and recognition of composite K-line forms. Recognition of a single K-line form takes only the K-line picture as input and identifies form features composed of several K-lines; recognition of a composite K-line form takes a picture combining K-lines and indicators as input and identifies form features composed of several K-lines together with their indicators. Composite K-line forms are labeled and learned by two methods: one treats the K-line and the indicator together as a single label; the other assigns different labels to the K-line and the indicator (a hypothetical annotation example follows).
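For concreteness, the two labeling strategies could be recorded as below; every file name, field name and class name here is invented for illustration, since the patent fixes no annotation format:

# Hypothetical annotation records for the two composite-form strategies.
single_label = {
    "image": "kline_with_indicator.png",
    "boxes": [[120, 40, 360, 220]],          # x1, y1, x2, y2 in pixels
    "labels": ["w_bottom_with_indicator"],   # K-line + indicator as one class
}
separate_labels = {
    "image": "kline_with_indicator.png",
    "boxes": [[120, 40, 360, 160],           # the K-line portion of the form
              [120, 170, 360, 220]],         # the indicator portion
    "labels": ["w_bottom_kline", "w_bottom_indicator"],
}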
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any substitution or change that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention, based on the technical solution and the inventive concept thereof, shall fall within the protection scope of the present invention.

Claims (4)

1. A K-line form image identification method based on deep learning is characterized by comprising the following steps:
1) taking a financial K-line form image to be identified, together with the coordinates corresponding to the form, as the input of a neural network containing multiple convolutional layers;
2) taking the output of the convolutional layers of step 1) as the input of a region proposal network and training the region proposal network;
3) applying region-of-interest pooling to the output of the region proposal network of step 2);
4) taking the region-of-interest pooling result of step 3) as the input of the Faster R-CNN detection network;
5) the Faster R-CNN detection network of step 4) finally generates the position information and recommendation score of each proposal box.
2. The K-line form image recognition method based on deep learning of claim 1, wherein the financial K-line form image is an image formed by combining one or more K-lines and their indicators with other financial data indicators.
3. The K-line form image recognition method based on deep learning of claim 1, wherein the specific process of training the region proposal network in step 2) is as follows: the output of the convolutional layers of step 1) is taken as the input of the region proposal network; windows of fixed size slide over the last convolutional feature map, each window outputting a feature vector of fixed dimension and performing coordinate regression and classification for 9 candidate regression boxes; to recognize objects of different sizes, the feature map is windowed at different scales; training data are generated by first judging whether an anchor box's coverage of the ground-truth label exceeds a threshold, in which case the current anchor box is labeled as containing the object; if no coverage ratio exceeds the threshold, the anchor box with the largest coverage ratio is labeled as containing the object; wherein the loss function of the region proposal network is defined as:
$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$$

where the subscript i indexes the samples in the mini-batch of training samples, $p_i$ is the predicted probability of the target, $p_i^* = 1$ if the sample is a positive example and $p_i^* = 0$ otherwise, $t_i$ is the vector of four parameters of the predicted bounding box, and $t_i^*$ is the parameter vector of the corresponding ground-truth label; the specific calculation method is as follows:

$$t_x = (x - x_a)/w_a, \quad t_y = (y - y_a)/h_a,$$
$$t_w = \log(w/w_a), \quad t_h = \log(h/h_a),$$
$$t_x^* = (x^* - x_a)/w_a, \quad t_y^* = (y^* - y_a)/h_a,$$
$$t_w^* = \log(w^*/w_a), \quad t_h^* = \log(h^*/h_a),$$

where x, y, w and h denote the center coordinates, width and height of the proposal box predicted by the proposal network; subscript a and superscript * denote the anchor box and the ground-truth box, respectively; $N_{cls}$ is the size of the mini-batch of training samples, $N_{reg}$ is the number of anchor boxes, $L_{cls}$ is the cross-entropy loss, and $L_{reg}$ is the smooth L1 loss, defined as:

$$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$
where x is the difference between the target value and the regression value.
4. The K-line form image recognition method based on deep learning of claim 1, wherein in step 3) the output of the region proposal network of step 2) is pooled as the region of interest: a list of candidate regions of interest is obtained from the region proposal network, the features of all candidates are extracted through the convolutional neural network, and the subsequent classification and regression are performed.
CN201811238452.8A 2018-10-23 2018-10-23 K-line form image identification method based on deep learning Active CN109543716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811238452.8A CN109543716B (en) 2018-10-23 2018-10-23 K-line form image identification method based on deep learning


Publications (2)

Publication Number Publication Date
CN109543716A CN109543716A (en) 2019-03-29
CN109543716B (en) 2021-12-21

Family

ID=65844367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811238452.8A Active CN109543716B (en) 2018-10-23 2018-10-23 K-line form image identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN109543716B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136126A (en) * 2019-05-17 2019-08-16 东南大学 Cloth textured flaw detection method based on full convolutional neural networks
CN110263843A (en) * 2019-06-18 2019-09-20 苏州梧桐汇智软件科技有限责任公司 Stock K line recognition methods based on deep neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150154700A1 (en) * 2013-12-03 2015-06-04 Michael Steven Hackett Method of Tracking and Displaying Stocks Information Utilizing Candlestick Charts

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701450A (en) * 2015-12-31 2016-06-22 上海银天下科技有限公司 K line form identification method and device
CN106056244A (en) * 2016-05-30 2016-10-26 重庆大学 Stock price optimization prediction method
CN106355500A (en) * 2016-11-10 2017-01-25 洪志令 Stock prediction method based on positive and negative related trend matching
CN107832897A (en) * 2017-11-30 2018-03-23 浙江工业大学 A kind of Stock Price Forecasting method based on deep learning

Also Published As

Publication number Publication date
CN109543716A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
JP6843086B2 (en) Image processing systems, methods for performing multi-label semantic edge detection in images, and non-temporary computer-readable storage media
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
US9330336B2 (en) Systems, methods, and media for on-line boosting of a classifier
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN108305260B (en) Method, device and equipment for detecting angular points in image
CN111462120A (en) Defect detection method, device, medium and equipment based on semantic segmentation model
CN110929802A (en) Information entropy-based subdivision identification model training and image identification method and device
CN110245620B (en) Non-maximization inhibition method based on attention
CN112419202B (en) Automatic wild animal image recognition system based on big data and deep learning
CN111462140B (en) Real-time image instance segmentation method based on block stitching
Cepni et al. Vehicle detection using different deep learning algorithms from image sequence
CN110827236A (en) Neural network-based brain tissue layering method and device, and computer equipment
CN113139543A (en) Training method of target object detection model, target object detection method and device
CN111738074B (en) Pedestrian attribute identification method, system and device based on weak supervision learning
CN109543716B (en) K-line form image identification method based on deep learning
Yang et al. Semantic segmentation in architectural floor plans for detecting walls and doors
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN115761834A (en) Multi-task mixed model for face recognition and face recognition method
CN115019133A (en) Method and system for detecting weak target in image based on self-training and label anti-noise
CN111583417B (en) Method and device for constructing indoor VR scene based on image semantics and scene geometry joint constraint, electronic equipment and medium
CN113378852A (en) Key point detection method and device, electronic equipment and storage medium
CN114077892A (en) Human body skeleton sequence extraction and training method, device and storage medium
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN116612382A (en) Urban remote sensing image target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant