CN108510068A - A kind of ultra-deep regression analysis learning method - Google Patents


Info

Publication number
CN108510068A
CN108510068A (application CN201710123049.XA)
Authority
CN
China
Prior art keywords
probability
scale
distribution
space
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710123049.XA
Other languages
Chinese (zh)
Inventor
顾泽苍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201710123049.XA (CN108510068A)
Priority to JP2018047267A (JP6998561B2)
Publication of CN108510068A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/088: Non-supervised learning, e.g. competitive learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/043: Architecture based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/047: Probabilistic or stochastic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Automation & Control Theory (AREA)
  • Computational Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Complex Calculations (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention is an ultra-deep regression analysis learning method. In the point-to-line distance step, for a given straight line, the distance from each data point within a given range to the line is computed. In the probability-scale self-organizing step, machine learning by probability-scale self-organization is carried out on those point-to-line distances, yielding the data distribution of maximum probability. In the regression-analysis-result step, the closest-fitting straight line is found again for the maximum-probability data distribution, and it is judged whether the regression analysis learning is complete; if not, processing returns to the probability-scale self-organizing step; if so, the maximum-probability regression analysis result is obtained. The effect of the invention is that the maximum-probability regression analysis result can be obtained, noise can be removed, the obstacle of noisy data can be passed through so that the method autonomously approaches the region of maximum probability, and the ultra-deep regression analysis learning result closely approximates the result calculated by the conventional formula.

Description

An ultra-deep regression analysis learning method
【Technical field】
The invention relates to an ultra-deep regression analysis learning method in the field of artificial intelligence.
【Background technology】
Since AlphaGo, developed with Google's investment, defeated the world's top human Go players with a brilliant record, a worldwide surge of interest in deep learning has begun again. Over the past year, the number of patent applications relating to artificial intelligence has nearly exceeded the total number of AI-related patents of all earlier years combined.
Japan's well-known Furukawa Electric has published a patent in this area, "Image processing method and image processing apparatus" (Patent Document 1), which proposes using an artificial-intelligence neural-network algorithm to select image-processing threshold values, so that high-precision contours can be extracted from an image.
Japan's famous Toyota has published a patent for a "driving-orientation estimating device" in automated driving applications (Patent Document 2). It proposes that during automated driving, for sudden situations in which the driver has no time to react, a machine learning algorithm based on a back-propagation neural network automatically selects the driving state so as to avoid accidents.
In image analysis applications, Japan's Hosei University has published a patent for a "plant disease diagnostic system, plant disease diagnostic method, and plant disease diagnostic program" (Patent Document 3), which proposes importing a deep-learning CNN model to recognize and analyze images of plant leaves in order to diagnose plant lesions.
Fuji Xerox of Japan, one of the world's largest copier manufacturers, has applied, from the angle of anti-counterfeiting recognition, for a patent on an "anti-counterfeiting device and method using minute anti-counterfeiting marks" (Patent Document 4). The patent seeks to use artificial-intelligence algorithms to construct minute anti-counterfeiting marks capable of recording information, so as to achieve anti-counterfeiting identification of goods.
【Patent document】
【Patent document 1】(special open 2013-109762)
【Patent document 2】(special open 2008-225923)
【Patent document 3】(special open 2016-168046)
【Patent document 4】(special open 2012-124957)
The above (Patent Document 1), (Patent Document 2) and (Patent Document 3) all mention using artificial-intelligence neural-network algorithms. In a neural network algorithm, the information of the objective function is carried mainly in a massive number of weight parameters; obtaining the best solution through "training" of the weight values W and threshold values T requires testing essentially all combinations of states, a total of {(W × T)^n} × P trials, where n is the number of nodes in one layer and P is the number of layers of the neural network. Such high-exponent computational complexity makes the calculation enormous and the required hardware expenditure huge. The stochastic gradient descent method (SGD) used on the loss function of deep learning yields trained values that are only a local optimum, so the "black box" problem inevitably appears. Further, the threshold values in the neural network model are artificially defined, which differs greatly from the mechanism of the human brain's neural network: the principle of the stimulus signals of cranial nerves cannot be fully reflected in the traditional neural network model, and the mechanism by which the human brain makes different judgments according to the different degrees of excitement produced by neuronal nerve signals cannot be embodied in current neural network models. Current neural network models can therefore only be regarded as a theory representing a direction, with a very large gap from the level of practical application. The current deep learning stage, compared with the traditional neural network, merely adds more hidden layers, which further increases the complexity of computation. Although some optimization algorithms have been imported into learning, it has not departed from the basis of the original neural network; the fatal problems of the traditional neural network remain unsolved, and a widely applied prospect is hard to expect.
What the above (Patent Document 4) proposes, rather forcedly, is to generate minute anti-counterfeiting marks by artificial intelligence. First, it does not state what special measures are taken through artificial intelligence, nor what outstanding achievements are obtained. The method of recording information in a minute mark by the size, color and different positions of a dot matrix is already a publicly known means of micro information recording; after importing artificial intelligence there is still no change to the technical features of the conventional method, so there is no advance. In its techniques for indicating the direction of the information dot matrix, its positions, or its fiducial marks, it uses non-information dot patterns whose image is much larger than the information lattice size, making the minute anti-counterfeiting mark easy for a forger to discover in terms of concealment; this does not conform to the composition rules of a concealed anti-counterfeiting code image. The biggest problem is that it cannot prevent a forger from copying the minute anti-counterfeiting mark with a scanner, because in reality the precision of a scanner far exceeds printing precision. Although the patent application proposes forming the minute anti-counterfeiting mark by laser processing, minute-pattern anti-counterfeiting can be realized by many methods; what attracts the most attention at present is how to realize public anti-counterfeiting recognition through the consumer's mobile phone, which this patent application does not touch at all.
The technical terms involved in the present patent application are first defined below; the content of these definitions is included in the present invention.
Probability space (Probability Space):
Based on the "probability theory founded on measure theory" of the Soviet mathematician Andrey Kolmogorov, a probability space is a measurable space whose total measure is 1.
Probability distribution (Probability Distribution):
A probability distribution is, for a probability function, the arrangement of its possible values according to the size of their probability of occurrence.
Probability scale (Probability Scale):
Any probability distribution in a probability space necessarily possesses a probability scale, which can calibrate the degree of spread of the probability distribution.
Probability density (Probability Density):
The probability value of an event over a given region.
Fuzzy event probability measure (Probability Measure of Fuzzy Event):
In a Euclidean space S that includes the probability space, if p(x) satisfies the additivity of a probability measure, and μA(x) is a membership function (Membership Function) that also satisfies the additivity of a fuzzy measure, then the probability measure P(A) of the fuzzy event set A is:
Formula 1
P(A) = ∫S μA(x)p(x)dx
Its discrete expression is:
Formula 2
P(A) = Σi μA(xi)p(xi)
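As a minimal sketch of the discrete measure in Formula 2: each sample point's probability is weighted by its fuzzy membership in A. The function name and example values are my own illustration, not part of the patent.

```python
def fuzzy_event_probability(memberships, probabilities):
    """Discrete fuzzy-event probability measure:
    P(A) = sum_i mu_A(x_i) * p(x_i),
    where the p(x_i) sum to 1 (additivity of the probability measure)
    and mu_A is a membership function with values in [0, 1]."""
    assert len(memberships) == len(probabilities)
    return sum(m * p for m, p in zip(memberships, probabilities))

# When every membership is 1, the measure reduces to the total
# probability of the sample space, i.e. 1.
```

For example, with memberships [1.0, 0.5, 0.0] and probabilities [0.2, 0.3, 0.5], the measure is 1.0 × 0.2 + 0.5 × 0.3 = 0.35.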
Intelligence system (Intelligent System):
An intelligent system is a system realized by deterministic algorithms: it performs processing toward a certain objective function according to an algorithm, and its processing results are deterministic.
Artificial intelligence (Artificial Intelligence):
What is artificial intelligence? Briefly, it is the realization of the human brain's functions by computer, i.e., reproducing by computer the effects produced by human thinking. The problems to be solved and the processing results are often uncertain, or in other words unpredictable in advance.
It may also be said that artificial intelligence is a model of effects produced through artificial intervention and formulation. More specifically, artificial intelligence is the deepest application of probabilistic models, realized through the algorithms of machine learning.
It clusters (Clustering):
Based on the scale of Euclidean space, clustering lets data migrate without a target; in the end only the data result of a given range can be obtained.
Self-organizing (Self-organization):
Based on the scale of a probability space, self-organization lets data migrate autonomously toward the direction of higher probability, finally obtaining an unpredictable objective function.
The definition of machine learning (Machine Learning):
A computer autonomously obtains a model of rules from data.
Probability scale self-organizing (Self-organizing based on a Probability Scale):
If in a probability space there is a set G with the following probability distribution, containing ζ data:
Formula 3
gf ∈ G (f = 1, 2, ..., ζ)
In this probability distribution gf (f = 1, 2, ..., ζ) of the probability space there necessarily exists a characteristic value A(G). Since a probability space is a measure space, there necessarily exists a probability scale M[G, A(G)] for the characteristic value A(G). When the following condition of probability-scale self-organization is satisfied, the set G(n) can be made to migrate toward the direction of maximum probability on the basis of the probability scale.
Formula 4
A(n) = A(G(n))
M(n) = M[G(n), A(G(n))]
G(n) = G{A(G(n-1)), M[G(n-1), A(G(n-1))]}
When n ≥ β (β being a number greater than 4), A(G(n)) can be taken as the maximum-probability characteristic value, and M[G(n), A(G(n))] as the maximum-probability scale centered on the maximum-probability characteristic value.
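A minimal one-dimensional sketch of the iteration in Formula 4, assuming the characteristic value A is the mean and the probability scale M the standard deviation of a normal-distribution-style scale (the text allows many other distributions and scales); the function and variable names are my own.

```python
import statistics

def probability_scale_self_organize(data, center0, scale0, beta=5):
    """Sketch of Formula 4: re-estimate the characteristic value A
    (here: mean) and probability scale M (here: standard deviation)
    from only the points inside the current scale window, so the set
    G(n) migrates autonomously toward the region of maximum probability."""
    center, scale = center0, scale0
    for _ in range(beta):          # beta > 4 per the text
        window = [x for x in data if abs(x - center) <= scale]
        if len(window) < 2:        # scale collapsed; keep last estimate
            break
        center = statistics.mean(window)
        scale = statistics.stdev(window)
    return center, scale
```

Starting from a deliberately coarse center and scale, the window contracts onto the dense cluster and away from outliers, which is the "migration toward the maximum probability direction" described above.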
Maximum probability (Maximum Probability):
A result beyond what traditional statistics can predict, i.e., the prediction result closest to the parent population.
Probability space distance (Probability Space Distance):
If in a probability space there are two probability distributions wj ∈ W and vj ∈ V, and each element of the vector of the probability distribution corresponding to vj ∈ V has a maximum probability scale Mj (j = 1, 2, ..., n), then the distance between the two probability distributions from wj to vj under the maximum probability scale is defined as follows:
Formula 5
Ultra-deep study (Super Deep Learning):
A neural network model composed of a sensing layer, a nerve layer and a cortex layer, in which the nodes of adjacent layers are connected by the machine learning of probability-scale self-organization; it can perform autonomous, unsupervised machine learning directly on randomly distributed data.
【Invention content】
The first object of the present invention is: to propose a multi-scale probability-scale self-organizing algorithm and construct an adversarial learning model, so that through ultra-deep learning all the probability-distribution information and fuzzy information of the probability space is used to the greatest extent, bringing the accuracy of image recognition and speech recognition to the optimum level.
The second object of the present invention is: based on multi-scale probability-scale self-organization, to provide a stricter definition of distance for the different spaces including probability space.
The third object of the present invention is: based on multi-scale probability-scale self-organization, to provide a definition of the fuzzy-event probability measure that can pass through the different spaces including probability space.
The fourth object of the present invention is: to provide an ultra-deep regression analysis learning model based on probability-scale self-organization.
The fifth object of the present invention is: to provide an ultra-deep manifold learning model of nonlinear probability-scale self-organization.
The sixth object of the present invention is: to provide an ultra-deep strong adversarial learning model for marks whose authenticity can be distinguished by the public's mobile phones.
To realize at least one of the above objects, an ultra-deep manifold learning method is proposed. The present invention proposes the following technical scheme:
An ultra-deep manifold learning method, characterized in that it is obtained by the following steps:
(1) Machine learning by probability-scale self-organization is applied to the data already approached, obtaining the maximum-probability end position of the data to be approached next;
(2) Ultra-deep regression-analysis machine learning is carried out on the data already approached together with the data covered by the above maximum-probability end position, obtaining the next approached data;
(3) The above two steps are cycled in turn, obtaining the approximation result of the entire manifold learning data.
Moreover, machine learning by probability-scale self-organization refers to: unsupervised learning with direct input of small data; machine learning that migrates autonomously toward the direction of maximum probability in different spaces.
Moreover, the probability scale refers to a scale based on any one of the following: the normal distribution; multivariate normal distribution; lognormal distribution; exponential distribution; t distribution; F distribution; χ² distribution; binomial distribution; negative binomial distribution; multinomial distribution; Poisson distribution; Erlang distribution (Erlang Distribution); hypergeometric distribution; geometric distribution; traffic distribution; Weibull distribution (Weibull Distribution); angular distribution; beta distribution (Beta Distribution); gamma distribution (Gamma Distribution); or, by extension, any probability-density characteristic set in an arbitrary probability-distribution module of Bayesian analysis (Bayesian Analysis) or Gaussian processes (Gaussian Processes).
Moreover, the different spaces refer to: Euclidean space; probability space; and spaces including Manhattan space (Manhattan Space), Chebyshev space (Chebyshev Space), Minkowski space (Minkowski Space), Mahalanobis space (Mahalanobis Space), and cosine space (Cosine Space), in arbitrary combination.
Moreover, maximum-probability information refers to: what is obtained after unsupervised learning with direct input of small data; a regression analysis result beyond what traditional statistics can obtain, i.e., the regression analysis result closest to the parent population.
Moreover, multi-probability-scale self-organization can be expressed by the following formulas:
If in a probability space there is a set G as follows, with gf ∈ G,
then in this probability distribution gf (f = 1, 2, ..., ζ) of the probability space there necessarily exists a characteristic value A(G). Since a probability space is a measure space, there necessarily exists a probability scale M[G, A(G)] for the characteristic value A(G). When the following condition of probability-scale self-organization is satisfied,
A(n) = A(G(n))
M(n) = M[G(n), A(G(n))]
the set G(n) can be made to migrate toward the direction of maximum probability on the basis of the probability scale:
G(n) = G{A(G(n-1)), M[G(n-1), A(G(n-1))]}
When n ≥ β (β being a number greater than 4), A(G(n)) can be taken as the maximum-probability characteristic value, and M[G(n), A(G(n))] as the maximum-probability scale centered on the maximum-probability characteristic value.
The ultra-deep regression analysis learning method proposed by the present invention is characterized in that it can obtain the maximum-probability regression analysis result, can remove noise, and can pass through the obstacle of noisy data, autonomously approaching the region of maximum probability; the ultra-deep regression analysis learning result shows a clear approximation effect relative to the result calculated by the conventional formula.
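The loop described in the abstract (fit a line, self-organize the point-to-line distances toward the maximum-probability region, refit) can be sketched as below. This is my own hedged reading, assuming an ordinary least-squares line fit and a mean-plus-one-scale retention rule for the distances; it is not the patented algorithm itself.

```python
import numpy as np

def ultradeep_regression(x, y, n_iter=5):
    """Iterative sketch: fit y = a*x + b, compute each point's
    perpendicular distance to the line, keep only points whose
    distance lies within one probability scale (std of the kept
    distances) of the mean distance, and refit until stable."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = np.ones(len(x), bool)
    for _ in range(n_iter):
        a, b = np.polyfit(x[keep], y[keep], 1)         # y = a*x + b
        d = np.abs(a * x - y + b) / np.hypot(a, 1.0)   # point-to-line distance
        scale = d[keep].std()                          # probability scale of distances
        new_keep = d <= d[keep].mean() + scale
        if new_keep.sum() < 3 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return a, b
```

On data lying on y = 2x + 1 with one large outlier, the first pass fits a contaminated line, the distance self-organization then drops the outlier, and the refit recovers the clean line, which matches the claimed effect of passing through the obstacle of noisy data.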
【Description of the drawings】
Fig. 1 is a schematic diagram of the multiple probability scales defined on a probability distribution
Fig. 2 is the machine learning flow chart of multi-probability-scale self-organization
Fig. 3 is a schematic diagram of the definition of distance passing through the different spaces including probability space
Fig. 4 is a schematic diagram of the pattern recognition model of ultra-deep adversarial learning
Fig. 5 is a schematic diagram of the optimal classification model of ultra-deep adversarial learning
Fig. 6 is the optimal classification flow chart of ultra-deep adversarial learning
Fig. 7 is the flow chart for obtaining, by learning, the scales of the multiple probability scales of an actual probability distribution
Fig. 8 is a schematic diagram of the ultra-deep adversarial learning neural network
Fig. 9 is the flow chart of the ultra-deep adversarial learning neural network
Fig. 10 is the ultra-deep strong adversarial learning flow chart for mobile-phone anti-counterfeiting recognition
Fig. 11 is a schematic diagram of the characteristics of the printed-image color space and the electronic-image color space
Fig. 12 is the flow chart for solving regression analysis using ultra-deep adversarial learning
Fig. 13 is a comparison schematic diagram of the two kinds of regression analysis results
Fig. 14 is a schematic diagram of the algorithm and calculation results of ultra-deep manifold learning
【Symbol description】
101 is a probability distribution of the probability space
102 is the probability-distribution central value of a certain region
103 is the first scale value
104 is the second scale value
105 is the third scale value
106 is the probability distribution value in the region corresponding to the first scale value 103
107 is the probability distribution value in the region corresponding to the second scale value 104
108 is the probability distribution value in the region corresponding to the third scale value 105
301 is a Euclidean space covering the probability space
302 is the central point wj of a probability distribution of the probability space
303 is the first scale M1 of the multiple probability scales of a probability distribution of the probability space
304 is the second scale M2 of the multiple probability scales of a probability distribution of the probability space
305 is the third scale M3 of the multiple probability scales of a probability distribution of the probability space
309 is a point vj of the Euclidean space
310 is the distance rj ∈ R (j = 1, 2, ..., n) from a point vj ∈ V of the Euclidean space to the central point wj ∈ W of an arbitrary probability distribution
4000 is the identified object vector
4001 is the characteristic element sv1 of the identified image
4002 is the characteristic element sv2 of the identified image
4003 is the characteristic element sv3 of the identified image
400e is the characteristic element sve of the identified image
4100 is the characteristic vector data FV1 after multi-probability-scale self-organization
4200 is the characteristic vector data FV2 after multi-probability-scale self-organization
4110 is the central value of fv11
4111 is the first scale of fv11 of the characteristic vector data FV1
4112 is the second scale of fv11; 4113 is the third scale of fv11
4120 is the central value of fv12; 4121 is the first scale of fv12
4122 is the second scale of fv12; 4123 is the third scale of fv12
4130 is the central value of fv13; 4131 is the first scale of fv13
4132 is the second scale of fv13; 4133 is the third scale of fv13
41e0 is the central value of fv1e; 41e1 is the first scale of fv1e
41e2 is the second scale of fv1e; 41e3 is the third scale of fv1e
4210 is the central value of fv21
4211 is the first scale of fv21 of the characteristic vector data FV2
4212 is the second scale of fv21; 4213 is the third scale of fv21
4220 is the central value of fv22; 4221 is the first scale of fv22
4222 is the second scale of fv22; 4223 is the third scale of fv22
4230 is the central value of fv23; 4231 is the first scale of fv23
4232 is the second scale of fv23; 4233 is the third scale of fv23
42e0 is the central value of fv2e; 42e1 is the first scale of fv2e
42e2 is the second scale of fv2e; 42e3 is the third scale of fv2e
500 is an arbitrary point
501 is the schematic Euclidean space covering the probability space
502 is the central value of probability distribution 520
520 is a probability distribution
503 is the first scale of probability distribution 520
504 is the second scale of probability distribution 520
505 is the third scale of probability distribution 520
506 is the first scale zone of probability distribution 520, with probability value denoted p1j(520)
507 is the second scale zone of probability distribution 520, with probability value denoted p2j(520)
508 is the third scale zone of probability distribution 520, with probability value denoted p3j(520)
510 is the central value of probability distribution 530
511 is the first scale of probability distribution 530
512 is the second scale of probability distribution 530
513 is the third scale of probability distribution 530
514 is the first scale zone of probability distribution 530, with probability value denoted p1j(530)
515 is the second scale zone of probability distribution 530, with probability value denoted p2j(530)
516 is the third scale zone of probability distribution 530, with probability value denoted p3j(530)
520 and 530 are the probability distributions of two probability spaces in the Euclidean space 501 covering the probability space
500 is the projection point, at the junction of the two probability distributions, of an arbitrary point onto the straight line connecting the centers 502 and 510 of the two probability distributions
801 is a spatial mapping of the perceived image
802 and 803 are two different mapping-space images
804 is a local region of image 801
805 is the multi-probability-scale self-organizing machine learning unit connected between the perceived object and the sensing layer
806 is the sensing layer of the neural network
807 is a node of the sensing layer of the neural network
808 is the multi-probability-scale self-organizing machine learning unit, with adversarial learning ability, connected between the sensing layer and the nerve layer
809 is the database storage unit
810 is the nerve layer of the neural network
811 is the node of the nerve layer of the neural network corresponding to each node of the sensing layer
812 is the multi-probability-scale self-organizing machine learning unit connected between the nerve layer and the cortex layer
813 is the database between the nerve layer and the cortex layer
814 is the cortex layer of the neural network
815 is a cortex-layer node
1101 is the original image color
1102 is the image color after being scanned
1301 is the regression analysis straight line calculated using the traditional formula
1302 and 1303 are the data of the given range clipped together with the regression analysis straight line
1304 is the regression analysis straight line obtained by ultra-deep adversarial learning
1305 and 1306 are, for the same given range as in Fig. 13(a), the data clipped together with the regression analysis straight line obtained by ultra-deep adversarial learning
【Specific implementation modes】
The embodiments of the present invention are further described below with reference to the drawings; the embodiments are illustrative, not restrictive.
Fig. 1 is the schematic diagram for more probability scales defined in probability distribution.
As shown in Fig. 1: for a probability distribution 101 of the probability space there necessarily exist more than one probability scale, which can express the probability-distribution situation at plural positions of the distribution. Here, when the probability-distribution central value of a certain region is 102, the first scale value can be set as 103, with corresponding in-region probability distribution value 106; the second scale value can be set as 104, with corresponding in-region probability distribution value 107; and the third scale value can be set as 105, with corresponding in-region probability distribution value 108. The benchmark by which the probability distribution values of plural regions can be quantitatively calibrated is called multiple probability scales.
The calibration of the multiple probability scales yields the different probability values of a given plural number of regions of the probability distribution, calculated according to the probability properties of at least one of the following: the normal distribution, multivariate normal distribution, lognormal distribution, exponential distribution, t distribution, F distribution, χ² distribution, binomial distribution, negative binomial distribution, multinomial distribution, Poisson distribution, Erlang distribution (Erlang Distribution), hypergeometric distribution, geometric distribution, traffic distribution, Weibull distribution (Weibull Distribution), angular distribution, beta distribution (Beta Distribution), and gamma distribution (Gamma Distribution).
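As one concrete instance of this calibration, assuming a normal distribution: the three scale values of Fig. 1 can be taken as 1, 2 and 3 standard deviations, and the corresponding in-region probability distribution values follow from the error function. The names below are my own.

```python
import math

def normal_region_probability(k):
    """Probability mass of a normal distribution within k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

# Three nested scales (1, 2, 3 sigma) and their in-region
# probability distribution values, in the spirit of Fig. 1.
scales = [1.0, 2.0, 3.0]
region_probs = [normal_region_probability(k) for k in scales]
```

This gives the familiar values of roughly 68.3%, 95.4% and 99.7% for the first, second and third scale zones.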
The correlation coefficient and correlation distance of a probability space can also serve as multiple probability scales.
For non-probability spaces, the multiple probability scales can also be extended to the Euclidean distance (Euclidean Distance) scale, Manhattan distance (Manhattan Distance) scale, Chebyshev distance (Chebyshev Distance) scale, Minkowski distance (Minkowski Distance) scale, Mahalanobis distance (Mahalanobis Distance) scale, cosine (Cosine) scale, Wasserstein distance (Wasserstein Distance) scale, Kullback-Leibler distance (Kullback-Leibler Distance) scale, and Pearson distance (Pearson Distance) scale.
The distances of other spaces can also be extended to multiple probability scales, for example: the Jaccard similarity coefficient (Jaccard Similarity Coefficient) scale, Hamming distance (Hamming Distance) scale, and information entropy (Information Entropy) scale.
Certain probabilistic algorithms, such as Bayesian analysis (Bayesian Analysis), Gaussian processes (Gaussian Processes), or benchmark algorithms constructed from Gaussian processes together with Bayesian hybrid algorithms, can also be extended into the definition of multiple probability scales.
To simplify the statement of the technical features, the specific mathematical formulas for the above probability scales are not listed one by one; any machine learning model that imitates the following processing method of multiple probability scales to calibrate the scale of data falls within the scope of the present invention.
To summarize, the construction method of the above multiple probability scales for machine learning is a measure method that divides into more than one field the probability distribution of data passing through plural spaces, such as Euclidean space and probability space.
Data passing through plural spaces refers to data that can pass through one or more of at least: Euclidean space; probability space; Manhattan space (Manhattan Space); Chebyshev space (Chebyshev Space); Minkowski space (Minkowski Space); Mahalanobis space (Mahalanobis Space); and cosine space (Cosine Space).
The fuzzy-event probability measure refers to: establishing fuzzy relations between data through distances passing through at least one of the above spaces; an algorithm that, by integrating microscopic fuzzy information with probability information, obtains stable macroscopic information and thereby establishes the closeness relation between different data.
Fig. 2 is the machine learning flow chart of multi-probability-scale self-organization.
Here, according to the above definition of multi-probability-scale self-organization, the processing flow of the machine learning of multi-probability-scale self-organization shown in Fig. 2 is as follows:
S1, initialization step: this is the initialization step of the multi-probability-scale self-organizing machine learning program. The objective function D(x, y) is first input in this step; the data may be one-dimensional, two-dimensional, or of any dimension. Next, an initial multiple probability scale M(0)' and an initial self-organizing center value, also called the initial characteristic value (x0, y0)(0), are given in advance. Multi-probability-scale self-organization can be carried out in two ways: the first starts from the largest scale of the multiple probability scales and self-organizes step by step toward smaller scales; the second starts from the smallest scale and self-organizes step by step toward larger scales. The initialization processing therefore also sets the parameters separately for the two methods.
Initialization for the first multi-probability-scale self-organizing method:
The initialized multiple probability scale M(0)' differs from the probability scale M(0) defined above in that M(0) only needs an estimate of the scale of the maximum-probability range, whereas M(0)', being a multiple probability scale, self-organizes the data of the entire probability distribution and therefore needs an estimate of the probability scale of the entire distribution, i.e. M(0)' > M(0); about 3 times M(0) is generally suitable.
Initialization for the second multi-probability-scale self-organizing method:
The initialized multiple probability scale M(0)' may simply equal the probability scale M(0) defined above, i.e. M(0)' = M(0).
The two methods are similar in how the value of M(0) is chosen; no strict setting is required. By manual estimation, the range centered on (x0, y0)(0) with radius M(0)' must contain some part of the data of the final learning result, so the initial M(0)' should be chosen as large as practical. The larger the initial probability scale M(0)', the longer the computation time; conversely, if it is too small, a correct result may not be obtained.
Regarding the other initialization settings: V is the convergence value of the self-organization, i.e. the criterion for whether a gap remains between the result of the previous organization and that of the current one, in other words whether the self-organizing result has met the requirement. If the convergence value V is too large, a correct result may not be obtained; if it is too small, the computation time grows. A reasonable setting is about 5-10% of the initial probability scale. MN, the maximum number of self-organizing iterations, can generally be set to 5-10 to prevent the self-organization from entering an endless loop. The scale count m of the multiple probability scales is set to the number of scales; for example, with three scales the probability values of three probability-distribution regions can be calibrated, so m = 3. The counter n, reflecting the current number of self-organizing passes, is initialized to n = 0.
S2, multi-probability-scale self-organizing step: in this step the n-th self-organizing pass of the m-th scale of the probability scale is carried out. For the objective function D(x, y), with (x0, y0)m(n-1) as the self-organizing center and the probability scale Mm(n-1) of the m-th scale as the radius, the objective function data dm(n)(xi, yj) (i = 1, 2, ..., k; j = 1, 2, ..., l) within the radius Mm(n-1) are taken as a probability distribution of a new probability space. The new probability distribution necessarily generates a new characteristic value (x0, y0)m(n) and a new probability scale Mm(n) of the m-th scale, where dm(n)(xi, yj) ∈ D(x, y), n = n + 1, MN = MN - 1. That is, each execution of this step migrates the objective function data dm(n)(xi, yj) one step in the direction of maximum probability.
S3, judgment step: this step judges whether the probability-scale self-organization of the m-th scale has completed, i.e. whether |Mm(n) - Mm(n-1)| ≤ V or MN = 0. If either condition is met, the probability-scale self-organization of the m-th scale of the multiple probability scales is complete and the flow proceeds to step S4; if not yet complete, it jumps back to step S2 and the multi-probability-scale self-organizing operation continues.
S4, data saving step: after the probability-scale self-organization of the m-th scale has completed, the characteristic value (x0, y0)m(n) of the m-th scale and the m-th probability scale Mm(n) are saved as a learning result, and m = m - 1. The scale of the probability scale is corrected: the data set of the objective function to be self-organized is revised to the probability distribution formed by the previous probability scale Mm(n) and characteristic value (x0, y0)m(n), i.e.
D(x, y) ← dm(n)(xi, yj) (i = 1, 2, ..., k; j = 1, 2, ..., l). In the case of the second multi-probability-scale self-organizing method, since the multiple probability scales start from the smallest scale and the scales are set within the multiple probability scales according to the normal distribution, the spacings of the subsequent probability scales are all identical to the spacing of the first, maximum-probability scale; there is thus no need to continue the multi-probability-scale self-organizing processing. The spacing of the first maximum-probability scale is taken as the interval of the other scales, m is set to 0, and the flow proceeds to the following final completion judgment step S5.
S5, judgment step: this is the final completion judgment. This step judges whether m = 0; if "yes", the multi-probability-scale self-organizing processing is complete and the flow proceeds to S6; if "no", it jumps back to S2 to continue the self-organizing processing of the next scale of the multiple probability scales.
S6, return step: on final completion, return to the main program.
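As a concrete illustration, steps S1-S6 above can be sketched as a small numerical routine. The shrink rule for the per-scale radius (twice the standard deviation of in-radius distances) and the convergence fraction are assumptions of this sketch, not prescribed by the present method:

```python
import numpy as np

def multi_scale_self_organize(data, center0, radius0, m=3, v_frac=0.05, mn=10):
    """S1-S6 sketch: for each of the m scales (largest first), re-center on the
    points inside the current radius until the change in the scale estimate
    falls below the convergence value V (here v_frac * radius0)."""
    center = np.asarray(center0, dtype=float)
    radius = float(radius0)                       # initial M(0)'
    results = []                                  # saved (characteristic value, scale)
    for _ in range(m):                            # S5: loop over the m scales
        inside = data
        for _ in range(mn):                       # S2/S3: self-organize until converged
            d = np.linalg.norm(data - center, axis=1)
            inside = data[d <= radius]            # data within radius Mm(n-1)
            if len(inside) == 0:
                break
            new_center = inside.mean(axis=0)      # new characteristic value (x0, y0)m(n)
            new_radius = 2.0 * np.linalg.norm(inside - new_center, axis=1).std()
            done = abs(new_radius - radius) <= v_frac * radius0
            center, radius = new_center, max(new_radius, 1e-9)
            if done:
                break
        results.append((center.copy(), radius))   # S4: save learning result
        data = inside                             # restrict to the current distribution
    return results
```

Each inner pass migrates the center one step toward maximum probability, as step S2 describes; each outer pass narrows the data to the distribution of the previous scale.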
Below we define, under multiple probability scales, the distance between a point of Euclidean space and the center of a probability distribution of probability space.
Fig. 3 is a schematic definition of a distance that passes through different spaces, including probability space.
As shown in Fig. 3: 301 is a Euclidean space covering a probability space; 302 is the center wj of a probability distribution of the probability space; 303 is the first scale M1 of the multiple probability scales of a probability distribution of the probability space; 304 is the second scale M2; 305 is the third scale M3; 309 is a point vj of the Euclidean space. We seek the distance from wj to vj.
In realistic pattern recognition, each characteristic value, learned a plurality of times under different conditions, forms a probability distribution of learning results, and n characteristic values constitute a feature vector, so the distance between vectors must be computed. Here, for the j-th (j = 1, 2, ..., n) element of the vector, let point 302 be wj ∈ W in set W and point 309 be vj ∈ V in set V. The scale spacing between 302 and 303 is D1j = M1j, and the probability value of belonging to the distribution of wj there is P1j(wj); the scale spacing between 303 and 304 is D2j = M2j - M1j, with probability value P2j(wj); the scale spacing between 304 and 305 is D3j = M3j - M2j, with probability value P3j(wj). As can be seen from Fig. 3, the path from vj to wj passes through 3 scale regions belonging to the distribution of wj, so mj(wj) = 3; the probability-space distance between 309 and 302 is then:
Formula 6
Here, let
Δj(wj) be, in the direction from vj to wj, the error between the distance represented in Euclidean space and the actual probability-space distance after the data enter the probability space from Euclidean space. By adjusting Δj(wj), the Euclidean distance and the probability-space distance can be unified, solving the rigorous definition of the distance relationship of machine learning data across the two spaces.
Let R be an arbitrary set between wj and vj, with a point rj ∈ R (310); the fuzziness between rj and wj (302) is:
Formula 7
The above formula is a membership function (Membership Function): the closer an arbitrary point rj ∈ R is to wj ∈ W, the closer Fj(wj) is to "1"; conversely, the farther rj ∈ R is from wj ∈ W, the closer Fj(wj) is to "0". The nearness or farness referred to here is, as seen from formula 6, a distance that crosses Euclidean space and probability space, so the fuzzy value Fj(wj) is likewise a fuzzy value of data that crosses Euclidean space and probability space.
The membership function (Membership Function) may be defined in any form; various formulas can be defined by manual intervention. Whichever definition is used, as long as it reflects the fuzzy relationship between two elements of the objective function, it falls within the scope of the present invention.
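As the passage notes, any membership function reflecting the fuzzy relation qualifies. A minimal sketch assuming a linear form (our own illustrative choice, not the specific formula 7 of the text) is:

```python
def membership(dist_r_to_w, dist_v_to_w):
    """Linear membership: 1 when r coincides with w, 0 when r is as far
    from w as v is. The distances may be the cross-space distances of
    formula 6; the linear form itself is an assumption of this sketch."""
    if dist_v_to_w <= 0:
        return 1.0
    return max(0.0, 1.0 - dist_r_to_w / dist_v_to_w)
```

A point halfway along the path thus gets membership 0.5, and the value falls monotonically with the cross-space distance, as required.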
Here, further let rpj(wj) be the probability of an arbitrary point rj ∈ R in the probability distribution of wj ∈ W. The integral of the product of the fuzzy information Fj(wj) and the probability rpj(wj) (j = 1, 2, ..., n) constitutes the following measure formula of the fuzzy event probability of set R with respect to set W.
Formula 8
Or
The effect of the fuzzy event probability measure F(W) defined above is this: microscopic, minute fuzzy information and microscopic, minute probabilistic information are fully exploited, and after integration they produce considerable stable macroscopic information, which can serve to calibrate the similarity relation between any pair of sets R and W. In information theory this is a best judgment criterion that reflects the relationship between two sets to the greatest extent. Applied to pattern recognition, set R may be regarded as the feature vector being identified and W as the feature vector of some registered pattern; the fuzzy event probability measure value of the two sets R and W can then serve as a rigorous criterion for whether the identified feature vector belongs to the registered feature vector.
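Formula 8's integral of fuzzy information times probability can be written, in a discrete form assumed here, as a sum over the vector elements:

```python
def fuzzy_event_probability_measure(fuzzy, prob):
    """Discrete reading of formula 8: F(W) = sum over j of F_j(wj) * rp_j(wj),
    combining the microscopic fuzzy value and the microscopic probability of
    each element into one macroscopic measure (the sum is assumed as the
    discrete counterpart of the integral)."""
    return sum(f * p for f, p in zip(fuzzy, prob))
```

Comparing the measure of R against several registered patterns and taking the largest value then gives the pattern that R most plausibly belongs to.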
To summarize, Fig. 3 proposes a method of obtaining a distance that passes through Euclidean space and probability space, characterized in that, given the presence of at least this probability space, on the section of the path that passes through a region of the probability space, the probability distance is related to the probability value of the region being passed through.
The above Euclidean space extends to one of: Manhattan space (Manhattan Space); Chebyshev space (Chebyshev Space); Minkowski space (Minkowski Space); Mahalanobis space (Mahalanobis Space); included-angle cosine space (Cosine Space).
Furthermore, since the distance scale of the probability space is related to the probability distribution values passed through when crossing the probability space, the distance is necessarily directional and does not satisfy the symmetry condition of an ordinary distance scale. For example, when examining the distance from vj to wj, the probability-space distance is related to the process of change of the probability of the distribution passed through on the way from the position of vj to the position of wj; this probability distribution is the distribution of wj, the final position to be reached, and is unrelated to the starting position vj. Even if vj is itself a probability distribution, the distance is unrelated to the distribution of vj.
Fig. 4 is a schematic diagram of the pattern recognition model of ultra-deep adversarial learning.
The problem of ultra-deep adversarial learning, as shown in Fig. 4: given 4100 and 4200, two feature vector data sets after multi-probability-scale self-organization, fv1j ∈ FV1 and fv2j ∈ FV2, and given the characteristic elements svj ∈ SV (j = 1, 2, ..., e) of the feature vector SV of an identified object 4000, the problem to be solved is: to which feature vector does the identified object vector SV belong? Here, 4001 is sv1, 4002 is sv2, 4003 is sv3, ..., and 400e is sve.
For feature vector data FV1: the first scale of fv11 is 4111, the second scale of fv11 is 4112, the third scale of fv11 is 4113, and the center value of fv11 is 4110. The first scale of fv12 is 4121, the second 4122, the third 4123, and the center value of fv12 is 4120. The first scale of fv13 is 4131, the second 4132, the third 4133, and the center value of fv13 is 4130. The first scale of fv1e is 41e1, the second 41e2, the third 41e3, and the center value of fv1e is 41e0.
For feature vector data FV2: the first scale of fv21 is 4211, the second 4212, the third 4213, and the center value of fv21 is 4210. The first scale of fv22 is 4221, the second 4222, the third 4223, and the center value of fv22 is 4220. The first scale of fv23 is 4231, the second 4232, the third 4233, and the center value of fv23 is 4230. The first scale of fv2e is 42e1, the second 42e2, the third 42e3, and the center value of fv2e is 42e0.
Let the probability value of the characteristic element svj ∈ SV of the identified object feature vector SV in the probability distribution of the characteristic element fv1j of the registered feature vector data FV1 be spj(fv1j) ∈ SP(FV1), and its probability value in the distribution of the characteristic element fv2j of FV2 be spj(fv2j) ∈ SP(FV2) (j = 1, 2, ..., e).
Further, let mv2j be the number of multi-probability-scale marks passed through from svj to the distribution center of fv2j; then the number of probability regions passed through from svj to the distribution center of fv2j is mj(fv2j) = mv2j + 1. Let the multi-probability-scale spacing of the distribution of fv2j be Dij, and the probability value of belonging to the distribution of fv2j on the region of spacing Dij be Pij(fv2j) (i = 1, 2, ..., mj(fv2j)).
The fuzzy event probability measure of the identified object vector SV with respect to feature vector data FV2 can then be computed according to formulas 7 and 8:
Similarly, letting mv1j be the number of multi-probability-scale marks passed through from svj ∈ SV to the distribution center of fv1j, with mj(fv1j) = mv1j + 1, the fuzzy event probability measure of SV with respect to FV1 can be computed:
The adversarial formula for whether the identified object vector SV belongs to feature vector data FV2 is as follows:
Formula 9
F=F(FV2)/F(FV1)
When F > 1, the identified object vector SV belongs to feature vector data FV2; conversely, when F < 1, the identified object vector SV belongs to feature vector data FV1.
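The vector-level contest of formula 9 can be sketched as follows; the per-element measure is the assumed discrete sum of Fj × spj, as in the sketch of formula 8 above:

```python
def measure(fuzzy, prob):
    # discrete fuzzy event probability measure (assumed discrete form of formula 8)
    return sum(f * p for f, p in zip(fuzzy, prob))

def classify_sv(fuzzy_fv1, prob_fv1, fuzzy_fv2, prob_fv2):
    """Formula 9: F = F(FV2) / F(FV1); F > 1 means SV belongs to FV2,
    otherwise SV belongs to FV1."""
    F = measure(fuzzy_fv2, prob_fv2) / measure(fuzzy_fv1, prob_fv1)
    return ("FV2" if F > 1.0 else "FV1"), F
```

The two measures "fight" each other, and the ratio F decides the winning registered feature vector.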
Fig. 5 is a schematic diagram of the optimal classification model of ultra-deep adversarial learning.
As shown in Fig. 5: 501 is a schematic Euclidean space covering a probability space. In the Euclidean space 501 there are two probability distributions of the probability space, 520 and 530. 502 is the center value of distribution 520; 503 is the first scale of distribution 520; 504 the second scale; 505 the third scale; 506 is the first scale region of distribution 520, with probability value p1j(520); 507 is the second scale region, with probability value p2j(520); 508 is the third scale region, with probability value p3j(520).
510 is the center value of distribution 530; 511 is the first scale of distribution 530; 512 the second scale; 513 the third scale; 514 is the first scale region of distribution 530, with probability value p1j(530); 515 is the second scale region, with probability value p2j(530); 516 is the third scale region, with probability value p3j(530).
One model of ultra-deep strong adversarial learning is: let the Euclidean space be 501; in the Euclidean space 501 there are two probability distributions, 520 and 530, whose centers are respectively an element wj ∈ W of set W and an element vj ∈ V of set V. Any point at the junction of the two distributions is projected, as point 500, onto the straight line connecting the centers 502 and 510 of the two distributions, and may be denoted rj ∈ R; the problem is to determine which probability distribution an arbitrary point 500, i.e. rj ∈ R, belongs to.
First let mj(wj) be the number of probability regions involved in the multiple probability scales passed through from rj to the center wj of the probability distribution; in Fig. 5, mj(wj) = 2. Then set pij(520) = pij(wj) (i = 1, 2, ..., mj(wj)) and find, by the above formula 7, the fuzzy relation between an arbitrary point 500, rj ∈ R, of set R and the point wj of set W at the center of distribution 520.
Next, let mj(vj) be the number of probability regions involved in the multiple probability scales passed through from rj to the center vj of the probability distribution; in Fig. 5, mj(vj) = 3. Then set pij(530) = pij(vj) (i = 1, 2, ..., mj(vj)) and find, by the above formula 7, the fuzzy relation between the arbitrary point 500, rj ∈ R, of set R and the point vj of set V at the center of distribution 530.
Here, when considering the measure of rj ∈ R belonging to distribution 520, the probability measure value should be the probability value Pj(Wj) of rj ∈ R in distribution 520; likewise, when considering the measure of rj ∈ R belonging to distribution 530, the probability measure value should be the probability value Pj(Vj) of rj ∈ R in distribution 530.
With reference to formulas 7, 8 and 9, the formula for which set an arbitrary point of set R belongs to is obtained:
Formula 9 '
Here, formula 9' differs from the above formula 9 in that the adversarial learning is realized between characteristic elements through microscopic fuzzy information and probabilistic information, which is sharper than macroscopic adversarial learning. As a result, the fuzzy event probability measure formed by microscopic confrontation and then integrated reflects the relationship of the two macroscopic sets more accurately. Thus, if F ≥ 100, the arbitrary point of set R belongs to set W; otherwise it belongs to set V.
Fixing the center values of the two probability distributions at the positions of w and v, set wj = w and vj = v in the above formula. For every point rj ∈ R (j = 1, 2, ..., n) in the lattice set R at the junction of the two distributions, compute by the above method the adversarial result Fj = (Fj(w)/Fj(v)) × 100 from Fj(w) and Fj(v) (j = 1, 2, ..., n); the data of the two probability distributions can thereby be classified optimally.
The difference from the ultra-deep adversarial learning introduced with Fig. 4 is: the pattern recognition model of Fig. 4 solves the optimal pattern recognition of which learned feature vector data a group of identification feature vectors belongs to, whereas Fig. 5 solves the problem of how to optimally classify the lattice points in the junction region of two probability distributions.
From the above, a fuzzy event probability measure value that passes through different spaces is constituted by microscopic fuzzy information and microscopic probabilistic information, in dependence on a distance that crosses two or more different spaces: the nearer the distance, the greater the fuzzy relation; conversely, the greater the distance, the more estranged the fuzzy relation. The microscopic probabilistic information above refers to: of two data crossing different spaces, the probability value of one datum in the probability distribution of the other.
Here the model construction of ultra-deep adversarial learning has been introduced only with two probability distributions as an example; in practical applications there may be 3, 4, 5, ..., n probability distributions, which can be optimally classified among one another by ultra-deep adversarial learning.
Fig. 6 is the optimal classification flow chart of ultra-deep adversarial learning.
As shown in Fig. 6, and with reference to Fig. 5, ultra-deep adversarial learning can be realized by the following steps.
S1 is the initialization step, which may refer to the initialization content of the multi-probability-scale self-organization of Fig. 2. First, each datum wjh ∈ Wh of the g sets Wh (h = 1, 2, ..., g) and each datum vjh ∈ Vh of the sets Vh are input; the number of divisions of the multiple probability scales is set, e.g. m = 3; the counter of probability distributions is set to PN = 1; and the data saving space and the other content to be handled in the initialization step are set.
S2 is the multi-probability-scale self-organizing step: with reference to the method of step S2 of the machine learning flow chart of multi-probability-scale self-organization of Fig. 2, probability-scale self-organizing machine learning is carried out for each datum wjh ∈ Wh of the g sets Wh and each datum vjh ∈ Vh of the sets Vh.
S3 is the judgment step: has the probability-scale self-organization of S2 completed? If "no", jump back to the multi-probability-scale self-organizing step S2; if "yes", proceed to the data saving step S4.
S4 is the data saving step: the maximum-probability center value of each element wj ∈ W and vj ∈ V obtained by the probability-scale self-organization of S2, the mark positions of the multiple probability scales, the probability value between each pair of marks, etc. are saved. After one probability-scale self-organization has completed, each datum rj ∈ R (j = 1, 2, ..., n) of the lattice position set R at the adjoining region of the two probability distributions under that scale is found and saved.
S5 is the probability distribution judgment step: has the machine learning of multi-probability-scale self-organization completed for all distributions? If "no", the counter of probability distributions is set to PN = PN + 1 and the flow jumps back to the multi-probability-scale self-organizing step S2 to continue computing a new probability distribution; if "yes", the flow proceeds to the following ultra-deep adversarial learning step S6.
S6 is the ultra-deep adversarial learning step. In this step, all lattice points in the adjoining field of each distribution, i.e. each datum rj ∈ R (j = 1, 2, ..., n) of set R, are first obtained. The centers "w" and "v" of the two adjoining probability distributions, taken from the database of S4 together with the distribution values between them, are connected by a straight line, and each arbitrary point rj ∈ R of the adjoining field of the two distributions is projected onto this line. The fuzzy relations Fj(w) and Fj(v) of rj ∈ R with the two distribution centers "w" and "v" respectively are computed according to formula 7. Then, by formulas 8 and 9, letting Pj(w) be the probability value of rj ∈ R in distribution 520 and Pj(v) its probability value in distribution 530, the microscopic fuzzy event probability measure of the ultra-deep adversarial learning of rj ∈ R between the two distributions is:
Fj = {(Fj(w) × Pj(w)) / (Fj(v) × Pj(v))} × 100
If Fj ≥ 100, rj ∈ R belongs to probability distribution 520; otherwise it belongs to the other probability distribution 530.
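The per-point rule just stated is easy to sketch. Using the example magnitudes given below in the text (fuzzy relations 58 and 76, probability values 27.2 and 4.2), the measure assigns the point to distribution 520, matching the judgment of Table 1:

```python
def point_side(f_w, p_w, f_v, p_v):
    """Micro adversarial decision for one junction point rj:
    Fj = (Fj(w) * Pj(w)) / (Fj(v) * Pj(v)) * 100;
    Fj >= 100 assigns rj to w's distribution (520), else to v's (530)."""
    Fj = (f_w * p_w) / (f_v * p_v) * 100.0
    return ("520", Fj) if Fj >= 100.0 else ("530", Fj)
```

Note that the point nearer to v in plain Euclidean distance can still belong to 520 once the probabilities of the traversed regions enter the contest, which is exactly what Table 1 illustrates.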
To demonstrate the validity of the ultra-deep adversarial learning model concisely with concrete figures, one group of data is given below: let the Euclidean distance between w and v be 70mm, the scale interval of distribution 520 be 20mm, and the scale interval of distribution 530 be 14mm; rj ∈ R is 39mm from w and 31mm from v; Pj(w) is 27.2 and Pj(v) is 4.2. The results of the four algorithms are shown in Table 1.
Table 1
Learning algorithm      Distance to w   Distance to v   Relation to w   Relation to v   Ratio        Judgment
Euclidean distance      39mm            31mm            -               -               1.25 times   belongs to 530
Probability distance    21mm            13.7mm          -               -               1.53 times   belongs to 530
Fuzzy relation          -               -               58              76              1.31 times   belongs to 530
Measure relation        -               -               39              3               13 times     belongs to 520
The validity of importing the fuzzy event probability measure theory can be proved from the results shown in Table 1.
The above measure theory of fuzzy event probability in ultra-deep adversarial learning is realized with the multiple probability scales defined on the probability distribution of the probability space. In practical applications, the scale of the probability scale can also be obtained from the actual probability distribution of each characteristic element of the feature vector data of the identified image, learned repeatedly under different conditions.
Fig. 7 is the flow chart of obtaining the marks of the multiple probability scales of the actual probability distribution by learning.
As shown in Fig. 7, the processing of obtaining the marks of the multiple probability scales of the actual probability distribution by learning takes four steps: the initialization step S1, the step S2 of recording the data of each characteristic element of the feature vector, the step S3 of computing the distribution and probability of each characteristic element, and the step S4 of returning to the main program.
In the initialization step S1, the feature vector data pointer, the data scale, the number of iterations, and the number of marks of the multiple probability scales are mainly set. Because the probability distribution is computed for real data, the marks of the multiple probability scales between two probability-distribution centers should be set as numerous as possible for a computation of higher precision.
In step S2 of recording the data of each characteristic element of the feature vector, the large number of feature vectors produced by repeated learning of the identified object are read in. The data may first be saved and processed offline, or may be processed directly online.
In step S3 of computing the distribution and probability of each characteristic element, the center value of maximum probability of the data is first computed by the probability-scale self-organizing algorithm; then, centered on this maximum-probability center value, the Euclidean distribution distance of each characteristic element of all feature vectors is computed. According to the number of marks of the multiple probability scales set at initialization, it is computed separately within which mark interval each characteristic element of the feature vector is distributed, together with the probability value within each mark interval. The ratio of the number of characteristic elements within a certain mark interval to the total number of learning samples is taken as the probability value within that mark interval, giving the probability value of each characteristic element in the mark interval. Alternatively, a given probability value may be used, and the extent covering the corresponding proportion of characteristic elements is sought.
In the return step S4, the mark interval in which each characteristic element of the feature vector is distributed, and the probability value within that mark interval, are saved for use in computing the fuzzy event probability measure, and the flow returns to the main program.
The schematic of the multiple probability scales obtained from the actual probability distribution by learning may refer to Fig. 1, except that the number of marks can be increased greatly; in this way the precision of the fuzzy event probability measure can be further improved.
To summarize the method of obtaining the marks of the multiple probability scales of the actual probability distribution by learning proposed in Fig. 7: in repeatedly learned feature vectors, the distribution position of each characteristic element datum belonging to a plurality of feature vectors is obtained; then, from the actual distribution of each characteristic element, either the ratio of data within a given region is computed, or the region covering a given ratio of the data is computed, giving the mark positions of the multiple probability scales or the probability values at those mark positions.
That is, either the ratio of the number of characteristic element data distributed within a given region to the total number of data is computed, or, under the condition of a given ratio of the number of characteristic elements to the total number of data, the region they occupy is computed.
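The second reading above (given probability value, find the region) amounts to a quantile computation; a minimal sketch for one characteristic element is given here, with the concrete probability values chosen for illustration only:

```python
import numpy as np

def learn_scale_marks(samples, center, mark_probs=(0.68, 0.95, 0.997)):
    """Derive multi-probability-scale marks from learned samples of one
    characteristic element: the mark for cumulative probability q is the
    distance radius containing fraction q of the samples (a quantile of
    the distances from the learned center)."""
    d = np.abs(np.asarray(samples, dtype=float) - center)
    marks = [float(np.quantile(d, q)) for q in mark_probs]
    # probability mass falling between consecutive marks
    zone_probs = [mark_probs[0]] + [b - a for a, b in zip(mark_probs, mark_probs[1:])]
    return marks, zone_probs
```

For normally distributed learning results this recovers marks near 1, 2, and 3 standard deviations; with many marks, as the text recommends, the empirical distribution is traced more finely.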
Fig. 8 is the schematic diagram of the ultra-deep adversarial learning neural network.
The construction principle of ultra-deep adversarial learning is introduced here only with image recognition as an example. As shown in Fig. 8: 801 is one spatial mapping of the perceived image. Ultra-deep learning relies on transforming the perceived image by various image conversions, such as the frequency-space image, the color-space image, the energy-space image, the edge-space image, and so on; 802 and 803 are two different mapping-space images. 804 is a local region of the image 801. Here, considering that recognition on a mobile phone should succeed at any of 360 degrees of rotation, the region segmentation follows an annular arrangement. The size of such a region should be determined by the needs of the application: if the range is too large, the computation is fast but the recognition precision is affected; conversely, if the range is too small, the recognition precision improves but the computation slows down. It is also advisable to keep the pixel count or area of each region consistent, so that the information content of the input is better balanced.
805 is a machine learning unit of multi-probability-scale self-organization with self-organized learning capability, connected between the perceived object and the sensing layer; this unit mainly undertakes the function of learning and analyzing the perceived target information. 806, the sensing layer of the neural network, primarily serves the input of the objective function information. Since the sensing layer is connected to a large number of machine learning units with analytic capability for the objective function information, the characteristic values of the objective function information can be extracted under the condition of maximum probability. It also bears the duty of deep data mining: by the method of spatial mapping, the deep information of a complex objective function is excavated. It further has the function of manual intervention, by which the form of the objective function can be processed according to the demands of the application. 807 is one node of the sensing layer of the neural network. The "depth" of ultra-deep adversarial learning is realized by increasing the information content of the objective function, that is, increasing the number of spatial transformations, while increasing the number of machine learning units of multi-probability-scale self-organization and the number of sensing-layer nodes.
Because the machine learning of multi-probability-scale self-organization connected between the perceived target and the sensing layer can of its own accord migrate toward the direction of maximum probability, the target can be obtained automatically. Therefore, when performing online recognition of video images, to solve the problem that serious offsets of the picture position from a fixed position would require a large amount of learning, it is specially proposed that, according to the structural features of the identified image, each machine learning unit of multi-probability-scale self-organization be allowed to capture automatically the feature of one part of the identified image, and to track that partial feature in motion even when it moves greatly. For example, if the identification object is a face, different multi-probability-scale self-organizing machine learning units can respectively track the position of the outline of the person, the positions of the two eyes, the position of the mouth, the position of the nose, and so on. In this way, no matter where the face lies in the image, face recognition can be carried out correctly, and the object to be identified can be found within a large quantity of video programs; this is an important mark of the level of machine learning.
808 is a machine learning unit of multi-probability-scale self-organizing, with confrontation learning ability, connected between the sensing layer and the nerve layer; it mainly realizes, through confrontation learning, the function of excavating the deep layer of the sensing layer's information. Because the learning of unit 808 depends on historically stored data, and its learning results are also preserved as historical data, a database storage unit 809 is provided.
810 is the nerve layer of the neural network. It holds the confrontation-result data generated after the learning of machine learning unit 808, and provides the important preliminary reference for the final judgement of the cortex 814. 811 is a node of the nerve layer of the neural network corresponding to each node of the sensing layer.
812 is a machine learning unit of multi-probability-scale self-organizing connected between the nerve layer 810 and the cortex 814; it mainly provides the final learning data for the final decision of the cortex 814. Its critical function is to judge, by machine learning, the value possessed by the large number of characteristic-element information items input by the sensing layer toward achieving the objective function, so that the cortex can make the final correct judgement. The maximum probability scale produced by the machine learning of unit 812, or the proportional numerical value of the confrontation result, has the character of an overall threshold of the nerve layer, and serves as the judgement signal of the final cortex 814. 813 is the database storage unit for the machine learning results of 812 between the nerve layer and the cortex; it undertakes the provision and storage of the data required by machine learning.
To summarize the construction method of the new artificial-intelligence neural network of Fig. 8: it is constituted by a sensing layer, a nerve layer and a cortex. The sensing layer is connected to the perceived object by a plurality of unsupervised machine learning units into which small data are directly input; it carries out maximum-probability-information learning on the input perceptive object. Between the nerve layer and the sensing layer, a plurality of unsupervised machine learning units with direct small-data input are connected; they carry out the confrontation learning of the fuzzy event probability measure. The cortex is connected to the nerve layer by unsupervised machine learning units with direct small-data input; they carry out the learning of the confrontation decision of maximum reliability.
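The three-layer dataflow described above can be sketched as follows. This is a minimal illustration under our own assumptions: the function names, the mean-brightness "learning unit", and the similarity and ratio test are stand-ins for the patent's multi-probability-scale units, not its actual implementation.

```python
# Sketch of the Fig. 8 dataflow: sensing layer <- unsupervised units,
# nerve layer scores against logged vectors, cortex makes the final call.

def sensing_layer(regions, learn_unit):
    """Apply one unsupervised learning unit per image region."""
    return [learn_unit(r) for r in regions]

def nerve_layer(features, logged_vectors, measure):
    """Score the sensed feature vector against every logged vector."""
    return [measure(features, fv) for fv in logged_vectors]

def cortex(scores, threshold):
    """Accept the best-matching logged vector only if it clearly wins."""
    ranked = sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)
    best, second = ranked[0], ranked[1]
    if scores[second] == 0 or scores[best] / scores[second] > threshold:
        return best
    return None

# toy demo: mean brightness stands in for the learned characteristic value
regions = [[10, 12, 11], [50, 52, 48]]
feats = sensing_layer(regions, lambda r: sum(r) / len(r))
logged = [[11.0, 50.0], [30.0, 30.0]]
sim = lambda f, v: 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(f, v)))
decision = cortex(nerve_layer(feats, logged, sim), threshold=1.5)
```

The ratio test in `cortex` mirrors the confrontation idea: a match is accepted only when the best candidate dominates the runner-up, not merely when it is closest.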
The perceptive object here, after space mapping, includes images, face information, sound, emotion information, text information, data from industrial sites, agricultural scenes or business fields, Internet-of-Things information, finance and economic forecasting information, and any digital information that needs artificial-intelligence processing.
A complex perceptive object is mapped into a plurality of spaces that can carry its information; through the mapped spatial information, the purpose of obtaining the deep information of the complex object is achieved.
The above maximum probability information is obtained after the unsupervised learning with direct small-data input; it is information about the perceptive object that goes beyond what traditional statistics can obtain, i.e. the numerical information of the perceptive object closest to the parent population.
The confrontation learning of the above fuzzy event probability measure is: for one perception data item, with respect to the two closest probability-distribution data, the learning of the relation measured by the fuzzy event probability of belonging to one probability-distribution data, and at the same time the confrontation learning of the relation measured by the fuzzy event probability of not belonging to that probability-distribution data; from these, the learning method obtains which probability-distribution data the item belongs to.
The above confrontation learning is built on the basis of the characteristic elements possessing the maximum probability reliability.
Adding the machine learning of multi-probability-scale self-organizing between the cranial nerve layer and the cortex is to improve the precision of ultra-deep confrontation learning: the actual recognition success probability of each characteristic element of the feature vector can be obtained, as a kind of important information, through this machine learning, so that at recognition time the logged characteristic-element information of maximum reliability is used and the data of high-reliability characteristic elements do the work. In this way ultra-deep confrontation learning can achieve the best recognition effect.
Fig. 9 is the flow chart of the ultra-deep confrontation learning neural network.
Referring to the image-recognition example of Fig. 4 and Fig. 8, the flow of the ultra-deep confrontation learning neural network is now introduced concretely. As shown in Fig. 9, the flow is realized by the following 16 steps:
Input perceived image step S1: the identified image is input in this step, but the input is not limited to image information; it can be acoustic information, data to be predicted, Fast-Fourier-Transform data, and all kinds of data that need artificial-intelligence perception.
Space mapping step S2: the identified image is transformed, by image processing, into images that can carry different information of the original image, such as an a-colour-space image, a b-colour-space image, an edge-space image, an energy-space image, a frequency-space image, a Hough-space image, etc. The images after these transformations are image formats or data forms that can reflect certain implicit information features of the identified image, and can serve as specific information carriers of the identified image added to the recognition object. Other kinds of data can follow this method for space-mapping processing.
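Step S2 can be illustrated with a few standard stand-in transforms. The functions below (names such as `ab_space` are ours, not the patent's) show the idea that each mapping is one extra information carrier derived from the same image; a real implementation would use proper colour-space and frequency transforms.

```python
# Hedged sketch of space mapping: several simple views of the same data.

def ab_space(rgb_pixels):
    # crude opponent-colour channels: a ~ red-green, b ~ yellow-blue
    return [(r - g, (r + g) / 2 - b) for r, g, b in rgb_pixels]

def edge_space(row):
    # horizontal gradient magnitude as a 1-D "edge image"
    return [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]

def energy_space(row):
    # local energy: squared intensity
    return [v * v for v in row]

image_row = [10, 10, 40, 40]
mappings = {
    "ab": ab_space([(200, 50, 30)]),
    "edge": edge_space(image_row),
    "energy": energy_space(image_row),
}
```

Each entry of `mappings` would feed its own group of self-organizing units, which is why the text below recommends more mappings rather than more regions.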
Image region segmentation step S3: in this step the image can be divided directly into several rectangular regions by the traditional method; alternatively, in order that correct recognition can be realized at any angle over 360 degrees when photographing by mobile phone, the identified image can adopt the annular dividing method shown in Fig. 8, where each region is an annular region, so that the direction of the shot image need not be considered. As for the number of segmentation regions and the number of space-mapped images: since multi-probability-scale self-organizing has the characteristic that the maximum-probability characteristic value can be extracted from a large range of the image, the present invention suggests increasing the number of space mappings as much as possible, for example performing space mapping in the range of 4 to 10, and reducing the number of image-segmentation regions as much as possible, for example segmenting the image into 9 to 100 regions; in this way a better recognition effect can be obtained with the same number of sensing-layer nodes and the same processing time.
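The annular segmentation can be sketched directly: pixels are binned by their distance from the image centre, so the partition is unchanged when the photograph is rotated. Ring count and sizing below are illustrative choices, not the patent's parameters.

```python
# Sketch of step S3 annular segmentation: rotation-invariant ring regions.
import math

def ring_index(x, y, cx, cy, ring_width):
    """Which annular region a pixel at (x, y) falls into."""
    return int(math.hypot(x - cx, y - cy) // ring_width)

def segment_rings(width, height, n_rings):
    cx, cy = (width - 1) / 2, (height - 1) / 2
    max_r = math.hypot(cx, cy)
    ring_width = max_r / n_rings + 1e-9
    regions = [[] for _ in range(n_rings)]
    for y in range(height):
        for x in range(width):
            r = min(ring_index(x, y, cx, cy, ring_width), n_rings - 1)
            regions[r].append((x, y))
    return regions

regions = segment_rings(5, 5, 3)
```

A 90-degree rotation maps pixel (0, 2) to (2, 0); both land in the same ring, which is the property that makes the shooting direction irrelevant.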
Read the image of one region step S4: with reference to the neural network diagram of ultra-deep confrontation learning of Fig. 8, the following steps can take two forms, longitudinal processing and lateral processing. The longitudinal form is: starting from one image region, after the processing of the content of one sensing-layer node, the processing of one nerve-layer node is carried out, until all image regions are completed. The lateral form is: first the processing of the contents between all image regions and all sensing-layer nodes is completed, then the processing of the contents between all sensing-layer nodes and all nerve-layer nodes is carried out. Here, taking the longitudinal form as the example, the model processing flow of the ultra-deep confrontation learning neural network is introduced. In this step the processing of one image region is first selected.
Multi-probability-scale self-organizing step S5: this step performs machine learning directly on the input image information; for one image region it can eliminate the false and retain the true to obtain the characteristic value of maximum probability. After all characteristic values are obtained, they constitute the feature vector that can carry the global information of the identified image; the purpose is to use the combination of the structural information of the image to express its global information. Since only the characteristic value of one maximum probability is solved for, the common probability-scale self-organizing algorithm can be adopted in this step.
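One plausible reading of "eliminate the false and retain the true" by probability-scale self-organizing is sketched below. This is our own assumption about the algorithm, not the patent's exact procedure: the estimate repeatedly re-centres on the data falling inside the current probability scale, so it migrates toward the maximum-probability value and outliers are discarded.

```python
# Hedged sketch of probability-scale self-organizing for one region.

def probability_scale_self_organize(values, iterations=10):
    kept = list(values)
    for _ in range(iterations):
        centre = sum(kept) / len(kept)
        # probability scale here: root-mean-square deviation about the centre
        scale = (sum((v - centre) ** 2 for v in kept) / len(kept)) ** 0.5
        if scale == 0:
            break
        inside = [v for v in kept if abs(v - centre) <= scale]
        if len(inside) == len(kept) or not inside:
            break
        kept = inside
    return sum(kept) / len(kept)

# dense cluster near 10 with one gross outlier
characteristic = probability_scale_self_organize([9.0, 10.0, 11.0, 10.0, 60.0])
```

The outlier 60.0 falls outside the first scale and is eliminated, so the returned characteristic value reflects the dense cluster rather than the arithmetic mean of all samples.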
Input one sensing-layer node step S6: the maximum-probability characteristic value obtained in the above multi-probability-scale self-organizing step S5 is input into one node of the sensing layer.
Read historical data step S7: ultra-deep confrontation learning uses the form of online learning, i.e. image learning and image recognition are completed in one system. When the data in the database 809 of Fig. 8 are fewer than a set value z, the system is in the learning state: each input image is learnt once and the recognition-result data are stored in the database, until the data reach z. Since the data of images identified in ordinary recognition also have "value" for improving recognition precision later, such online learning data can likewise be used to improve the feature-vector data logged in the database. In this way, in face recognition for example, when age or climate changes, or the body changes, for instance when glasses have been put on, the image recognition of the face can still proceed adaptively.
For the convenience of the description below, the feature vectors involved in the image-recognition example of Fig. 4 that have been logged in the database 809 of Fig. 8 can be expressed by the following formula:
Formula 10
FVk = (fvk1, fvk2, ..., fvke)  k = 1, 2, ..., z
The scales of the multi-probability-scale representation of the probability-distribution information of each characteristic element in the feature vector of formula 10 can be expressed by the following formula:
Formula 11
M = (M1kj, M2kj, ..., Mmkj)  k = 1, 2, ..., z; j = 1, 2, ..., e
The Euclidean-distance spacings between the scales of the multi-probability-scale representation of the probability-distribution information of each characteristic element in the feature vector of formula 10 can be expressed by the following formula:
Formula 12
D = (D1kj, D2kj, ..., Dmkj)  k = 1, 2, ..., z; j = 1, 2, ..., e
The probability values corresponding to the Euclidean-distance spacings between the scales of the multi-probability-scale representation of the probability-distribution information of each characteristic element in the feature vector of formula 10 can be expressed by the following formula:
Formula 13
P = (P1kj, P2kj, ..., Pmkj)  k = 1, 2, ..., z; j = 1, 2, ..., e
The identified object vector:
Formula 14
SV = (sv1, sv2, ..., sve)
The data of the above formula 10, formula 11, formula 12, formula 13 and formula 14 are stored in the database 809 of Fig. 8.
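The logged tables of formulas 11-13 can be sketched for one characteristic element as follows. The equal-count (quantile) placement of the m scale marks is our own assumption about how the scales are laid out; the patent only specifies that M, D and P are stored per element.

```python
# Hedged sketch: build the scales M, spacings D, and interval
# probabilities P (formulas 11-13) from repeated learning samples.

def scales_from_samples(samples, m):
    ordered = sorted(samples)
    n = len(ordered)
    # m scale marks placed at evenly spaced quantiles (formula 11)
    M = [ordered[min(n - 1, round(i * (n - 1) / (m - 1)))] for i in range(m)]
    # spacings between adjacent scales (formula 12)
    D = [M[i + 1] - M[i] for i in range(m - 1)]
    # probability of a sample landing in each spacing (formula 13)
    P = []
    for i in range(m - 1):
        lo, hi = M[i], M[i + 1]
        count = sum(1 for v in ordered
                    if lo <= v < hi or (i == m - 2 and v == hi))
        P.append(count / n)
    return M, D, P

M, D, P = scales_from_samples([1, 2, 2, 3, 3, 3, 4, 4, 5, 9], m=3)
```

The interval probabilities sum to one over the sample range, which is what later lets an observed value be assigned the probability sp of the scale interval it falls into.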
Multi-probability-scale self-organizing step S8: this is the most important machine learning processing, carried out between the sensing layer and the nerve layer, and it runs through the core theory of ultra-deep confrontation learning. Traditional image recognition is founded only on the distance scale of Euclidean space: the Euclidean distances between the feature vector of the identified image computed by formula 14 and each logged feature vector of formula 10 are calculated, and the image corresponding to the feature vector with the smallest Euclidean distance is taken as the recognition result. However, in image recognition, since the image data shot by a camera are greatly influenced by variations of environmental factors, the obtained feature-vector data are randomly distributed; therefore a method using only the distance of Euclidean space is subject to a certain limitation in the improvement of recognition precision.
Ultra-deep confrontation learning, through multi-probability-scale self-organizing applied to the probability distribution of the characteristic value of each characteristic element of the feature vectors obtained from repeated recognition-learning of the same image, obtains the explicit probability values on the different scale intervals shown in formulas 11 to 13. At the same time, through a strict definition of probability-space distance, the distance values through Euclidean space and probability space between the characteristic value svj (j = 1, 2, ..., e) of each characteristic element of the feature vector of the identification object image of formula 14 and the characteristic values of the characteristic elements of each logged feature vector of formula 10 are determined.
Fuzzy event probability measure calculation step S9: in this step, using the strict definition of the distance through Euclidean space and probability space, the fuzzy relation between the characteristic values of the identification object image of formula 14 and the characteristic values of each logged feature vector of formula 10 can be found:
Formula 15
Here Vmax is the maximum distance through Euclidean space and probability space between each characteristic value svj of the feature vector of the identification object image of formula 14 and the characteristic values of each feature vector of formula 10. m is the number of scales of the multi-probability scales passed through between the j-th characteristic value svj of the feature vector of the identification object image of formula 14 and the j-th characteristic value of a certain feature vector of formula 10. Dikj and Pikj are, respectively, each multi-probability-scale spacing passed through between svj and the j-th characteristic value of that feature vector, and the probability value on that spacing.
Then, according to the definition of the fuzzy event probability measure, from the probability value spj of the scale interval in which each characteristic value svj of the identified image falls within the probability distribution of the characteristic elements of each feature vector of formula 10, and the fuzzy-matrix product with the characteristic elements of each logged feature vector, one microscopic measure value of the characteristic element of some logged feature vector for the identified image can be obtained:
Formula 16
smkj(fvkj) = sfkj(fvkj) · spj  k = 1, 2, ..., z; j = 1, 2, ..., e
In fact, by integrating the measures between all characteristic elements svj (j = 1, 2, ..., e) of the feature vector of the identified image and the characteristic elements of each feature vector in the FV determinant of formula 10, the obtained fuzzy event probability measure value can serve as the closeness relation weighing the feature vector of some identified image against each logged feature vector. By selecting the logged image corresponding to the feature vector whose fuzzy event probability measure relation with the identified feature vector is closest in the FV determinant, a high-precision result for the identified image can be obtained. This both considers the probability-distribution information of each characteristic element in all feature vectors and the strict distance relations of the different spaces including probability space, and at the same time takes the fuzzy event probability measure into account, integrating all kinds of microscopic fuzzy information and probability information to obtain stable, valuable information macroscopically; then, through the confrontation between the two listed candidate feature vectors, the recognition result is finally obtained. This is a machine learning model of best image recognition established on fuzzy event probability measure theory; the image corresponding to the logged feature vector closest to the identification object's feature vector in fuzzy event probability relation is supplied, as the pre-selection result between the sensing layer and the nerve layer of the neural network, to the cortex for the last decision.
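The matching rule of formulas 15-16 can be sketched under simplifying assumptions of our own: the fuzzy membership sf is taken as 1 - d/Vmax over the logged vectors (one plausible form, since the patent's formula 15 is not reproduced), the probability sp comes from the scale interval the observed value falls into, and the logged image with the largest integrated measure wins.

```python
# Hedged sketch of the fuzzy-event-probability-measure matching of step S9.

def fuzzy_measure_match(sv, logged, interval_prob):
    """Index of the closest logged vector by the integrated measure."""
    dists = [sum(abs(a - b) for a, b in zip(sv, fv)) for fv in logged]
    v_max = max(dists) or 1.0
    scores = []
    for k in range(len(logged)):
        sf = 1.0 - dists[k] / v_max      # fuzzy relation (assumed form of formula 15)
        sp = interval_prob(sv, logged[k]) # probability of the observed scale interval
        scores.append(sf * sp)            # microscopic measure (formula 16)
    return max(range(len(logged)), key=lambda k: scores[k])

logged = [[10.0, 20.0], [40.0, 5.0]]
# interval_prob fixed at 1.0 here purely to keep the demo small
winner = fuzzy_measure_match([11.0, 19.0], logged, interval_prob=lambda sv, fv: 1.0)
```

A real interval_prob would look up the P table of formula 13 for the interval containing each svj; the constant used here only isolates the distance part of the measure.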
Combining this with the theory of biological brain-nerve units: because the brain is a probabilistic model, a threshold of nerve triggering is needed, and using the probability scale of maximum probability as that threshold satisfies the mechanism of the brain. Since the fuzzy event probability measure is a phase-space quantity, it cannot serve as the last judgement result; in this step the present invention proposes a new neural network in which the pre-selection result between the sensing layer and the nerve layer undergoes one more round of last decision.
The specific practice is: the characteristic values fvkj of the characteristic elements obtained between the pre-selected characteristic elements svj of the identification object and the logged feature vector FVk closest in fuzzy event probability relation, together with the corresponding microscopic fuzzy event probability measure values smkj(fvkj), serve as the final decision basis of the multi-probability-scale self-organizing machine learning between the nerve layer and the cortex.
In the multi-probability-scale self-organizing unit 808 of Fig. 8, the machine learning of multi-probability-scale self-organizing also undertakes, for the characteristic elements svj' of the previous identification object image and the logged feature vector fvkj' corresponding to the result image after recognition, a renewed machine learning that finds the new maximum-probability central value and the scale values of the multi-probability scales, and logs these results again.
Input database step S10: this refers to the login step of the database 809 of Fig. 8. The login and processing of three kinds of data are carried out in this step. First, the results of the online probability-scale self-organizing machine learning, i.e. formulas 10, 11, 12 and 13, are logged in.
The second kind of data are the characteristic values fvkj of each characteristic element obtained in the previous step between the characteristic elements svj of the identified image and the logged feature vector closest in fuzzy event probability relation, together with the corresponding microscopic fuzzy event probability measure values smkj(fvkj) of formula 16, and the total macroscopic measure value:
Formula 17
SMk = Σj=1..e smkj(fvkj)
And the corresponding image is logged in.
Whether-the-entire-area-is-completed judgement step S12: if the entire area is not yet completed, the region pointer, sensing-layer pointer and nerve-layer pointer are modified, and the flow jumps back to the read-the-image-of-one-region step S4; if the entire area is completed, the flow proceeds to the next step.
Multi-probability-scale self-organizing step S13: this is the machine learning unit between the nerve layer and the cortex; its machine learning functions mainly cover the following aspects:
1. Evaluation of the reliability of the measure characteristic values of the characteristic elements.
It is judged whether the measure characteristic values smkj of the characteristic elements of each feature vector belonging to FV of formula 10, input by the nerve layer, possess reliability, so that the cortex can judge on trustworthy data, reducing the risk of incorrect judgement obtained from untrustworthy data.
The specific method of finding the probability value of the reliability is: according to the machine learning repeatedly carried out in ordinary use, let the success rate at actual recognition of the j-th characteristic element of the feature vector of the k-th logged image corresponding to the identified image be CDkj, and let the false recognition rate when it is not that image be EDkj; the trust value can then be expressed by the following formula 18:
Formula 18
Rkj = CDkj / (CDkj + EDkj)
2. Acquisition of the measure characteristic values of the characteristic elements of maximum reliability.
Through the multi-probability-scale self-organizing machine learning of this step, self-organized learning is carried out on the reliability vector Rkj of the k-th feature vector, and the maximum-probability reliability vector Rkl (l = 1, 2, ..., a) belonging to the k-th feature vector can be found within the scale of the maximum probability scale; the measured characteristic elements smkl(fvkl) (l = 1, 2, ..., a) corresponding to k and l are exactly the measured characteristic elements of maximum-probability reliability.
The reliability of maximum probability can also be obtained according to whether the probability distribution of the j-th characteristic element of the feature vector of the k-th logged image is concentrated or dispersed: multi-probability-scale self-organized learning can be carried out on the value range of the first scale of all characteristic elements to obtain the first scale value of maximum probability, and the characteristic-element values belonging to the largest first scale serve as the measured characteristic elements with maximum reliability.
3. Composition of the confrontation vector.
According to the fuzzy event probability measure values between the identified vector SV of formula 17 and each of the k logged feature vectors FVk, the two closest feature vectors FVmax1 and FVmax2 are found; the fuzzy event probability measures corresponding to these two vectors and SV are respectively SMmax1 and SMmax2, and each of their elements smmax1l ∈ SMmax1 and smmax2l ∈ SMmax2 (l = 1, 2, ..., a) serves as the final decision basis.
Store and read data step S14: in this step, the trust values Rkj of the measure characteristic values of the characteristic elements are first logged in; then the maximum-probability reliability vector Rkl (l = 1, 2, ..., a), and then smmax1l ∈ SMmax1 and smmax2l ∈ SMmax2 (l = 1, 2, ..., a), are logged in.
Input cortex decision step S15: this step takes the measured characteristic elements of maximum-probability reliability of the identified image obtained by the nerve layer, smmax1l ∈ SMmax1 and smmax2l ∈ SMmax2 (l = 1, 2, ..., a), and in the cortex the decision can be carried out according to the following formula of ultra-deep confrontation learning:
Formula 19
AL = Σl smmax1l / Σl smmax2l
Here a threshold T > 1 is set. According to the result of formula 19, when AL > T the nerve layer is excited and stimulates the cortex to judge that the identified image belongs to the image corresponding to the fuzzy event probability measure value SMmax1; otherwise this recognition result is refused.
Return step S16: this identification process ends here, and control returns to the main program.
Up to this point, ultra-deep confrontation learning has been presented as a machine learning model carried out for each characteristic element of the feature vector; below, a model of ultra-deep strong confrontation learning directed at whole feature vectors is given.
Here, only the imaging of an optical marking in space identified by a mobile phone is used: the spatial image is transformed into a code, ITC (Image To Code), through the ultra-deep confrontation learning of multi-probability-scale self-organizing. Since the light source of a scanner is directional, the image of the optical marking at a certain angle in space can be replicated. It is therefore necessary to solve, by ultra-deep strong confrontation learning, how to separate, among the various spatial images of the optical marking, the set of feature vectors of images that a scanner can replicate from the set of feature vectors of images that cannot be replicated, and to find, within the non-replicable image feature-vector set, the group of feature vector values whose fuzzy event probability measure relation with the replicable image feature-vector set is smallest, as the anti-counterfeit verification code for mass mobile-phone identification; in this way the application purpose of public authenticity identification by mobile phone can be realized.
Figure 10 is the flow chart of ultra-deep strong confrontation learning for mobile-phone anti-counterfeit recognition.
As shown in Figure 10, obtaining the feature-vector set of the images that a scanner can replicate requires four steps: the read-scan-image step S1, the ultra-deep confrontation learning step S2, the save-replicable-image-feature-vector-data step S3, and the judge-whether-scan-images-are-completed step S4:
In the read-scan-image step S1, the optical marking is scanned by the scanner from different directions; the spatial image of the optical marking in one direction is read, and the flow moves to the next operation.
In the ultra-deep confrontation learning step S2, mainly for the image read in the previous step, ultra-deep confrontation learning is carried out with reference to the contents of Fig. 4, Fig. 8 or Fig. 9, obtaining one maximum-probability feature vector of the scanned image, and the scale values of the multi-probability scales assigned to each characteristic element of the feature vector.
In the save-replicable-image-feature-vector-data step S3, the result of the previous step is logged into the database.
In the judge-whether-scan-images-are-completed step S4, it is judged whether a scan image needs to be input again; the criterion here is whether the images scanned so far will still produce new feature vectors. If it is judged that scan images still need to be input, the flow is transferred to step S1, the direction of the optical marking is changed, and scanning continues to obtain a new scan image. If it is judged that no further scan images are needed, the flow is transferred to the next step.
Obtaining the feature-vector set of mobile-phone-identified images requires four steps: the read-mobile-phone-identified-image step S5, the ultra-deep confrontation learning step S6, the save-mobile-phone-identified-image-feature-vector-data step S7, and the judge-whether-mobile-phone-identified-images-are-completed step S8:
In the read-mobile-phone-identified-image step S5, the optical marking is read by mobile phone from an arbitrary spatial angle, and the flow moves to the next step.
In the ultra-deep confrontation learning step S6, mainly for the image read by the mobile phone in the previous step, ultra-deep confrontation learning is carried out with reference to the contents of Fig. 4, Fig. 8 or Fig. 9, obtaining one maximum-probability feature vector of the mobile-phone-identified image, and the scale values of the multi-probability scales assigned to its characteristic elements.
In the save-mobile-phone-identified-image-feature-vector-data step S7, the maximum-probability feature vector of the mobile-phone-identified image of the previous step, and the scale values of the multi-probability scales assigned to its characteristic elements, are logged into the database.
In the judge-whether-mobile-phone-identified-images-are-completed step S8, it is judged whether mobile-phone identification of images needs to be carried out again; the criterion here is whether the mobile-phone-identified images obtained so far will still produce new feature vectors. If it is judged that mobile-phone identification images still need to be input, the flow is transferred to step S5, and the mobile-phone image-identification operation obtains a new identified image. If it is judged that no further images need to be scanned, the flow is transferred to the next step.
Obtaining the public mobile-phone-identification anti-counterfeit verification code requires three steps: the ultra-deep strong confrontation learning step S9, the obtain-anti-counterfeit-verification-code step S10, and the return step S11:
In the ultra-deep strong confrontation learning step S9, with reference to the ultra-deep confrontation learning methods of Fig. 4, Fig. 8 or Fig. 9, it is judged through the fuzzy event probability measure relation whether each feature vector stored from the mobile-phone-identified images belongs to the saved set of replicable-image feature vectors; whenever a feature vector is judged to belong to any member of the replicable set, it is rejected from the set of feature vectors stored from the mobile-phone-identified images, until no feature vector belonging to the replicable-image feature-vector set remains.
In the obtain-anti-counterfeit-verification-code step S10, mobile-phone identification of the optical marking is performed again, with reference to Fig. 4, Fig. 8 or Fig. 9, through ultra-deep confrontation learning; the characteristic values of the mobile-phone-identified image are compared with the stored mobile-phone-identified image feature vectors, and the feature vector of the spatial image with the smallest mobile-phone inclination angle is found from among them as the mobile-phone-identification anti-counterfeit code.
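The set-separation of step S9 can be sketched as follows, under our own simplifications: "belongs to the replicable set" is approximated by a distance threshold, and "least replicable" by the largest distance to the replicable set, standing in for the smallest fuzzy-event-probability measure relation.

```python
# Hedged sketch of steps S9-S10: reject phone-read feature vectors that the
# scanner can replicate, then pick the least-replicable survivor as the code.

def anti_counterfeit_code(phone_vectors, replicable, same_threshold=1.0):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # S9: reject any vector that matches a replicable one
    survivors = [v for v in phone_vectors
                 if all(dist(v, r) > same_threshold for r in replicable)]
    # S10 stand-in: least replicable = farthest from the replicable set
    return max(survivors, key=lambda v: min(dist(v, r) for r in replicable))

replicable = [[1.0, 1.0], [2.0, 2.0]]
phone = [[1.1, 1.0], [5.0, 5.0], [9.0, 9.0]]
code = anti_counterfeit_code(phone, replicable)
```

The vector close to a scanner-replicable one is discarded, and the code is chosen from what only the phone's viewing geometry can produce, which is the property the verification code relies on.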
In the return step S11, the above steps having obtained the anti-counterfeit code for public mobile-phone identification, control returns to the main program.
Figure 11 is a schematic diagram of the characteristics of the printed-image colour space and the electronic-image colour space.
As shown in Fig. 11: the printing colour space is constituted by CMYK, and the electronic image is the colour space constituted by RGB. The colour space of the electronic image overlaps for the most part with that of the printed image, but there are also mutually non-overlapping parts. Because of this characteristic, as shown in Fig. 11, 1101 is the colour of the original image, which becomes the image colour 1102 after scanning; therefore a printed image replicated through a scanner will necessarily differ from the original image.
Furthermore, for small dots, such as dots of 0.04 mm or below, part of the dot matrix will necessarily be lost after scanner scanning; that is to say, the grey level of each colour of the printed image suffers a loss of varying degree after scanning.
However, a replicated printed image can, after some compensation, come relatively close to the original printed image. In particular, it is difficult for the naked eye to distinguish which is the original image and which is the replicated one; this is precisely the unsolved problem commonly encountered in present-day commodity anti-counterfeiting.
For this problem, ultra-deep strong confrontation learning can be imported: by machine-learning the original image and the replicated image separately, the original image and the replicated image can be distinguished, and a public-mobile-phone anti-counterfeit recognition code that distinguishes original image from replicated image with high precision can be obtained, thereby solving the problem of public mobile-phone anti-counterfeit recognition.
The specific method can refer to Fig. 10. In steps S1-S4, the printed image replicated by different scanning methods is scanned, and the set of feature vectors and multi-probability-scale scale values of the replicated printed image is obtained. In steps S5-S8, the original printed image is shot by mobile phone under different environments, and the set of feature vectors and multi-probability-scale scale values of the mobile-phone-shot original image is obtained. Then, in steps S9-S11, through ultra-deep strong confrontation learning on the original printed image shot by mobile phone under various circumstances, the feature vectors in the mobile-phone set that do not belong to the set of feature vectors and multi-probability-scale scale values of the replicated printed image are obtained and logged as the anti-counterfeit code, while the feature vectors belonging to the replicated-printed-image set are logged as the replication code. Applying the above method to the processing of commodity markings realizes the effect of public mobile-phone authentication of commodities.
Similarly, a small anti-counterfeiting recognition region can be set on the commodity label; an optical mark or an anti-counterfeit printing region as shown in Figure 11 can be placed in that region, and public mobile-phone authentication of the commodity is realized according to the method described above.
The principle of public mobile-phone authentication of the optical mark in Figure 10 can also be extended to the application of OVI optically variable films. Optically variable films and color-shifting inks are films or inks made by vacuum coating, in which the thickness of the film is controlled to produce an interference effect on a certain part of the spectrum; such a film or ink produces different color changes when the viewing angle changes by 30-60 degrees, that is, it produces images of different colors in space. With a method similar to that of the optical mark in Figure 10, an anti-counterfeiting structure for public mobile-phone authentication of commodity labels can be realized.
The method of Figure 10 can also be applied to micro-lens-array films to realize public mobile-phone authentication. Specifically, a micro-lens array can project a given image at a certain angle; this angle is determined by the diffraction-angle design of the micro-lens array, and the image projected into space is likewise determined by the printed image underlying the micro-lens array. Compared with optical marks and optically variable films, this approach therefore offers controllability and stability, and it has inherent advantages for designing an anti-counterfeiting structure for public mobile-phone authentication of commodity labels. Following the processing method of Figure 10 achieves the effect of public mobile-phone authentication of commodities.
Whether regression analysis can be realized by the above ultra-deep adversarial learning for a given group of discrete data is an important means of testing the performance of machine learning, and verification with real data shows that ultra-deep adversarial learning can obtain results that surpass the conventional calculation. A method of solving regression analysis with ultra-deep adversarial learning is introduced concretely below.
Figure 12 is the flow chart of solving regression analysis using ultra-deep adversarial learning.
As shown in Figure 12, solving regression analysis with ultra-deep adversarial learning is achieved by the following steps:
Initialization step S1: In this step an initial regression line must first be given. For the given regression-analysis data set RD with positions (xi, yi) ∈ RD (i = 1, 2, …, n), the point at the minimum position (xmin, ymin) and the point at the maximum position (xmax, ymax) can be connected into a straight line and used as the initial regression line. Because probability-scale self-organizing automatically finds the densest region of the data distribution, with this method the final result is not affected even if the initial regression line lies far from the data distribution. In this step the initial value of the probability scale must also be given in advance, which can refer to the initialization requirements of the multi-probability-scale self-organizing described above; the maximum number of regression-analysis iterations and so on must also be given.
Point-to-line distance step S2: For each (xi, yi) ∈ RD (i = 1, 2, …, n) in the given regression-analysis data set RD, find the distance from each point to the regression line according to the following formula:
dᵢ = |A·xᵢ + B·yᵢ + C| / √(A² + B²)        (Formula 20)
Here, the regression line is taken to be: Ax + By + C = 0
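As a minimal sketch (the function name is ours, not the patent's), Formula 20 can be computed directly:

```python
import math

def point_line_distance(x, y, A, B, C):
    """Distance from point (x, y) to the line A*x + B*y + C = 0 (Formula 20)."""
    return abs(A * x + B * y + C) / math.hypot(A, B)

# Distance from (3, 4) to the line x - y = 0, i.e. |3 - 4| / sqrt(2).
d = point_line_distance(3.0, 4.0, 1.0, -1.0, 0.0)
```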
Probability-scale self-organizing step S3: Following the flow chart of multi-probability-scale self-organizing in Fig. 2 above, for the distances di (i = 1, 2, …, n) from each datum (xi, yi) ∈ RD (i = 1, 2, …, n) of the regression-analysis data set to the regression line obtained in the previous step, carry out probability-scale self-organizing: data within the probability scale are retained, and data outside the probability scale are rejected.
Obtain regression result step S4: Following the initialization method, find the central value (x0, y0)max of y in the retained dot-array data near the maximum x datum (xmax), and likewise find the central value (x0, y0)min of y near the minimum x datum (xmin); these two points form the new regression line. Alternatively, the center corresponding to the probability scale of the point-to-line distances can serve as the new regression line. Repeating this self-organizing process yields the maximum-probability regression line and the maximum-probability range of point-to-line distances. The difference from the result calculated by the traditional regression formula is this: because the machine learning is carried out by probability-scale self-organizing, it can remove noise and migrate toward the maximum-probability data across the obstacle of small-probability data.
Judge whether learning should continue, step S5: According to the completion-judgment method of multi-probability-scale self-organizing, judge whether the result of probability-scale self-organizing is stable, or whether the maximum number of self-organizing iterations has been exceeded. If "yes", learning ends and the next step is entered; if "no", jump back to the point-to-line distance step S2 and continue the machine learning of regression analysis.
Return step S6: Machine learning is completed; return to the main program.
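Steps S1-S6 above can be sketched as follows. This is a simplified illustration under two assumptions of ours: the probability scale is stood in for by the mean ± k standard deviations of the point-to-line distances, and the refitted line of step S4 simply joins the extreme retained points.

```python
import math

def fit_line_endpoints(pts):
    """Step S1: initial line through the points of minimum and maximum x,
    as coefficients (A, B, C) of A*x + B*y + C = 0."""
    x1, y1 = min(pts, key=lambda p: p[0])
    x2, y2 = max(pts, key=lambda p: p[0])
    A, B = y2 - y1, x1 - x2
    return A, B, -(A * x1 + B * y1)

def ultra_deep_regression(pts, k=2.0, max_iter=50):
    """Steps S2-S5: compute point-to-line distances (Formula 20), keep the
    data inside the current scale, refit, and repeat until stable."""
    A, B, C = fit_line_endpoints(pts)
    kept = list(pts)
    for _ in range(max_iter):
        d = [abs(A * x + B * y + C) / math.hypot(A, B) for x, y in kept]
        mean = sum(d) / len(d)
        scale = k * math.sqrt(sum((v - mean) ** 2 for v in d) / len(d))
        new = [p for p, v in zip(kept, d) if abs(v - mean) <= scale]
        if len(new) < 3 or len(new) == len(kept):
            break  # step S5: result is stable, learning ends
        kept = new
        A, B, C = fit_line_endpoints(kept)
    return (A, B, C), kept

# Points on y = 2x plus one far outlier; the outlier is rejected.
pts = [(float(x), 2.0 * x) for x in range(10)] + [(5.0, 100.0)]
(A, B, C), kept = ultra_deep_regression(pts)
```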
In order to verify the effect of the regression analysis obtained by ultra-deep adversarial learning, Table 2 gives an actual example of the statistics of iced-coffee sales as related to outdoor temperature.
Table 2
Outdoor temperature 22 23 23 24 24 25 25 26 26
Sales volume 300 310 320 330 320 330 310 320 310
Outdoor temperature 27 27 28 29 32 28 24 31 31
Sales volume 340 360 350 360 400 370 310 360 390
Outdoor temperature 33 34 34 35 35 20 20 21
Sales volume 400 450 460 440 480 350 300 380
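For comparison, the traditional least-squares regression for the data of Table 2 (the conventional calculation behind Figure 13(a)) can be reproduced as follows; this is a sketch and does not assert the exact lines drawn in the figure:

```python
# Data of Table 2: outdoor temperature and sales volume of iced coffee.
temps = [22, 23, 23, 24, 24, 25, 25, 26, 26,
         27, 27, 28, 29, 32, 28, 24, 31, 31,
         33, 34, 34, 35, 35, 20, 20, 21]
sales = [300, 310, 320, 330, 320, 330, 310, 320, 310,
         340, 360, 350, 360, 400, 370, 310, 360, 390,
         400, 450, 460, 440, 480, 350, 300, 380]

n = len(temps)
mx, my = sum(temps) / n, sum(sales) / n
# Ordinary least squares: slope = cov(x, y) / var(x).
slope = (sum((x - mx) * (y - my) for x, y in zip(temps, sales))
         / sum((x - mx) ** 2 for x in temps))
intercept = my - slope * mx
```

On these data the least-squares slope comes out at roughly 9.3 units of sales per degree.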
Figure 13 is a comparison schematic diagram of the two regression-analysis results.
As shown in Figure 13: Figure 13(a) is the regression result calculated for the data of Table 2 by the formula of traditional regression analysis; 1301 is the regression line calculated by the traditional formula, and 1302 and 1303 are the boundaries of a given range that, together with the regression line, clips the data. Only 17 data points fall within this range.
Figure 13(b) is the regression result realized by ultra-deep adversarial learning; 1304 is the regression line realized by ultra-deep adversarial learning, and 1305 and 1306 are the boundaries of the same range as in Figure 13(a), clipping the data together with the line obtained by ultra-deep adversarial learning. Here 21 data points fall within this range.
The comparison shows that, because the regression result realized by ultra-deep adversarial learning is a maximum-probability regression result, it can remove noise and therefore, within an identical area, find the region of maximum data-distribution density; it can thus be called the maximum-probability regression result.
Figure 14 is a schematic diagram of the algorithm and calculation result of ultra-deep manifold learning.
As shown in Figure 14: for the given manifold-learning data, importing the algorithm of ultra-deep manifold learning obtains the result of Figure 14 through 47 steps, described separately below:
First step S1: Start from any randomly chosen position of the manifold-learning data, and first set the step length. The minimum step length is twice the width of the maximum probability scale of the probability-scale self-organizing; in that case the calculation can adapt to any data and reach the highest accuracy, but the computation time is long. In order to test the ability of ultra-deep manifold learning, a long interval is deliberately used here; however, according to the sampling theorem, the interval cannot exceed half of the distance between data points that have no direct relation, otherwise errors on the path will appear.
In step S1, the machine-learning algorithm of multi-probability-scale self-organizing described above is first applied at both ends to obtain the positions of the central values of the maximum probability scales at the two ends; these two positions are connected into a straight line, and with the regression-analysis algorithm described above, the straight line closest to the data distributed between these two sections is obtained. This straight line is the result of approximating the manifold-learning curve between the two sections.
Second step S2: Either end of the result of step S1 can be selected to continue the calculation; here processing continues from the left side. From the result of S1, an extension line of width equal to the interval, shown as a dashed line, is drawn to the left. Here it falls directly on the manifold-learning data, so the central value of a maximum probability scale is obtained directly according to the processing above. This central value is connected into a straight line as in the previous step, regression analysis continues, and the best approximation of the manifold-learning data in this section is finally obtained for this step. Because the regression result obtained by ultra-deep adversarial learning can remove noise and obtain the regression result closest to the parent population, the manifold-learning approximation likewise reaches the noise-free, maximum-probability approximation closest to the parent population.
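The dashed extension line of width equal to the interval can be sketched as plain vector geometry (the function name is ours):

```python
import math

def extend_segment(p1, p2, length):
    """Extend the fitted segment p1 -> p2 past p2 by `length`, giving the
    far end of the dashed extension line used to probe for the next
    multi-probability-scale self-organizing region."""
    (x1, y1), (x2, y2) = p1, p2
    norm = math.hypot(x2 - x1, y2 - y1)
    ux, uy = (x2 - x1) / norm, (y2 - y1) / norm  # unit direction of segment
    return (x2 + length * ux, y2 + length * uy)
```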
Third step S3: In this step the extension line clearly departs from the manifold-learning data, but it still falls within the processing range of the multi-probability-scale self-organizing, so the self-organizing can migrate autonomously toward the direction of maximum probability and finally still finds the central value of maximum probability.
Fourth step S4: In this step the extension line departs significantly from the manifold-learning data; however, as shown in Figure 14, one datum still falls within the processing range of the multi-probability-scale self-organizing. Guided by this datum, the autonomous migration of the multi-probability-scale self-organizing toward maximum probability can pass through this small-probability datum and migrate directly to the position of the data with higher probability density.
Fifth step S5: In this step the extension line departs seriously from the manifold-learning data. Within the given multi-probability-scale self-organizing range there are two scattered data on the left and higher-density data on the right; whatever result the multi-probability-scale self-organizing yields, carrying out regression analysis within this range is certain to approach the high-density data on the right and reject the low-density data on the left.
Sixth step S6: A special case occurs in step S6, because the extension line has approached manifold-learning data that are not directly related; in this situation the original track of the manifold learning could be departed from. The reason this problem appears is that, as emphasized in the first step, to satisfy the sampling theorem the processing interval of the manifold learning cannot exceed half of the span between unrelated data. Therefore it can be assumed here that the length of the extension line satisfies the above sampling theorem, which guarantees that the problem of departing from the manifold-learning track does not occur.
Seventh step S7: Here, when no data exist within the given multi-probability-scale range, the initial multi-probability-scale range can be increased.
Eighth to twelfth steps S8-S12: The curve is relatively smooth throughout, and the extension line directly reaches the end point of the manifold learning.
Thirteenth step S13: Return to the first step S1: starting the extension line from the right end, the manifold-learning data are found through the migration of the multi-probability-scale self-organizing, regression analysis is carried out, and the maximum-probability straight line that best approaches the manifold-learning data is obtained.
Fourteenth step S14: The main characteristic of this step is that, when the end of the extension line substantially deviates from the manifold data, the initial size of the multi-probability scale can be increased until it covers the manifold-learning data.
Fifteenth to sixteenth steps S15-S16: S15 has no special processing requirement; S16 needs the multi-probability-scale size to be increased twice, until it covers the manifold-learning data.
Seventeenth to nineteenth steps S17-S19: S17 and S18 have no special processing requirement; S19 is identical to S6, where the problem of an over-long extension line occurs, so the size of the extension line must likewise be shortened according to the sampling theorem.
Twentieth to twenty-third steps S20-S23: S20 and S21 have no special processing requirement; S22 and S23 need to pass through manifold-learning data that have already been approached, so the already-approached manifold-learning data must be marked. These data cannot participate again in the self-organized learning of the probability scale, but the nodes of the already-approached straight line can be used a second time.
Twenty-eighth to twenty-ninth steps S28-S29: S28 has no special processing requirement; S29 needs, according to the trend of the extension line, to connect with an endpoint of S7.
Thirtieth to thirty-eighth steps S30-S38: S30, S31, S32, S33, S34 and S35 have no special processing requirement. From S36 onward, the manifold learning traces a broken line in a very narrow space; for the data of this manifold learning, according to the sampling theorem, the processing interval likewise cannot be too large, otherwise the 180-degree turning problem of S36, S37 and S38 cannot be solved, so three steps are used here to carry out the 180-degree turn.
Thirty-ninth to forty-fourth steps S39-S44: S39, S40, S41, S42 and S43 have no special processing requirement. With the guidance of the extension line, the self-disciplined ability of the multi-probability self-organizing to migrate toward the direction of high probability, and the noise-removing, maximum-probability approximation of the regression analysis of ultra-deep adversarial learning, the best approximation of the manifold-learning data can be realized smoothly. Step S44 needs, according to the guidance of the extension line, to pass through a node of already-processed manifold-learning data.
Forty-fifth to forty-seventh steps S45-S47: S45, S46 and S47 have no special processing requirement, and the approximation can proceed directly to the starting point of the manifold learning.
Ultra-deep manifold learning thus obtains a list of straight lines that best approach the manifold-learning data; using a spline function, the straight lines can be connected into a smooth continuous function that can even be differentiated.
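The patent does not fix the spline type, so as one concrete, self-contained choice the node list can be connected with a Catmull-Rom interpolating spline, which passes through every node:

```python
def catmull_rom(points, samples_per_seg=10):
    """Interpolate a smooth curve through the 2-D polyline nodes produced
    by the manifold learning, using a Catmull-Rom spline (one concrete
    choice; the patent only says 'spline function')."""
    # Pad the node list so every segment has four control points.
    p = [points[0]] + list(points) + [points[-1]]
    curve = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        for s in range(samples_per_seg):
            t = s / samples_per_seg
            # Standard Catmull-Rom basis; at t = 0 the curve hits node p1.
            curve.append(tuple(
                0.5 * ((2 * p1[k]) + (-p0[k] + p2[k]) * t
                       + (2 * p0[k] - 5 * p1[k] + 4 * p2[k] - p3[k]) * t * t
                       + (-p0[k] + 3 * p1[k] - 3 * p2[k] + p3[k]) * t ** 3)
                for k in range(2)))
    curve.append(tuple(points[-1]))
    return curve
```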
The above ultra-deep manifold learning, by means of the extension lines of the straight lines that have already approached the manifold-learning data, the characteristic of multi-probability-scale self-organizing of migrating toward the high-probability direction, the characteristic of ultra-deep regression analysis of removing noise and achieving the best approximation, and the rule that the initial diameter of the multi-purpose probability scale can be increased as needed, achieves the best approximation of the manifold-learning data. The processing interval of ultra-deep manifold learning can range from two times to several times the diameter of the maximum probability scale after multi-probability-scale self-organizing.
Finally, it should be noted that ultra-deep manifold learning is suitable for processing manifold-learning big data with a certain probability distribution; its result can be fitted by a spline function into a continuous function, and the derivative can even be made continuous.
The features of the present invention:
The main points of the present invention and the features of the description above are summarized as follows:
1. A definition method of multi-probability scales realized through probability density characteristics is proposed.
This method can calibrate the regions of a probability distribution and meet machine learning's demand for the probability-distribution information of the object function. By using the characteristics of the probability distribution, no separate learning is needed for each datum, nor for each probability-distribution region, so a very high learning efficiency is obtained in application; and because it is the scale of maximum probability, it meets the scale demand of self-organizing machine learning.
2. A definition method of multi-probability scales obtained from the actual probability distribution through learning is also proposed.
This method can use a very simple machine-learning method to realize, for data with a probability distribution, the learning result of the multi-probability distribution. The method is simple, has small hardware overhead when imported, and has high computational efficiency, making it suitable for pattern-recognition applications.
3. An unsupervised machine-learning algorithm of multi-probability-scale self-organizing that realizes autonomous migration toward maximum probability is proposed. Within the probability distribution it can cross the obstacle of noise data, migrating autonomously toward the direction of maximum probability until the region of maximum-probability distribution is obtained, together with the scale information of every region of the entire probability distribution in the probability space.
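This autonomous migration can be illustrated with a one-dimensional sketch; as our own simplification, a fixed-width window stands in for the probability scale:

```python
def self_organizing_migration(data, center, width, max_iter=50):
    """1-D sketch of probability-scale self-organizing as autonomous
    migration: the window (standing in for the probability scale)
    repeatedly re-centers on the mean of the data it covers, so the
    center crosses sparse noise data and settles on the region of
    maximum probability density."""
    for _ in range(max_iter):
        inside = [x for x in data if abs(x - center) <= width]
        if not inside:
            break
        new_center = sum(inside) / len(inside)
        if abs(new_center - center) < 1e-9:
            break  # stable: migration has converged
        center = new_center
    return center

# Three small-probability (noise) points near 4, a dense cluster near 10.
data = [3.8, 4.0, 4.2, 9.5, 9.8, 10.0, 10.1, 10.2, 10.4, 10.5]
center = self_organizing_migration(data, center=6.0, width=4.0)
# The center migrates past the noise points to the dense cluster (~10.07).
```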
4. A strict definition of the distance through Euclidean space and probability space is proposed, providing a theoretical foundation for a strict measure of the proximity between different probability distributions. It discloses that the distance of the probability space is related to the probability value of the interval of the probability distribution; that probability spaces exist within Euclidean space; that one distance in Euclidean space can contain an unlimited number of probability spaces; and that the probability space is asymmetric and directional. This provides a theoretical foundation for improving the precision of pattern recognition and of machine-learning results, and for improving the efficiency of machine learning.
5. A determination method of the fuzzy-event probability measure for data passing through the different spaces, including probability space, is proposed. Between the position of any point in space and the position of a certain probability distribution, microscopic fuzzy information and probability information can be fully utilized; after integration, considerably stable fuzzy information and probability information are obtained, and the distance through Euclidean space and probability space, and the proximity between two data, are defined more strictly, providing a theoretical foundation for further improving the precision of machine learning and further improving its efficiency.
6. Several methods of adversarial learning are proposed: for any position in the different spaces including probability space, adversarial learning is carried out on the strict fuzzy-event probability-measure relation established between one datum and two probability distributions, obtaining higher classification precision and bringing the precision of pattern recognition to the best level; this provides unsupervised machine learning with a higher-precision, higher-effect learning algorithm. Strong adversarial learning lets traditionally unclassifiable data be separated into different feature vectors by the ultra-deep strong-adversarial algorithm, which can solve the currently unsolved worldwide problem of realizing public anti-counterfeiting recognition by mobile phone.
7. A neural-network model that best fits the brain mechanism is proposed. Compared with traditional neural networks, it avoids carrying the object-function information through massive parameters; it avoids the problem encountered by deep learning of obtaining only a locally optimal solution when screening data in the combination space of massive data; it solves the black-box problem of traditional neural networks; it solves the problem that traditional neural networks cannot realize unsupervised learning; and it solves the problem that traditional neural networks need huge hardware support and cannot be applied in single chips or wireless terminals. The ultra-deep adversarial-learning neural network proposed by the present invention has the advantages of high learning precision and high learning efficiency; it also supports parallel learning and the configuration of master-module groups, letting users build the framework of the neural network themselves according to application needs, and it is also suitable for composing hardware machine-learning modules on a chip.
8. Linear and nonlinear probability-distribution data can be best approached; the maximum-probability regression result and the best approximation of manifold-learning data are obtained; noise can be removed, and the region of maximum probability can be approached autonomously across the obstacle of noise data. The learning result of ultra-deep regression analysis shows an evident approximation to the result calculated by the conventional formula, proving that ultra-deep adversarial learning can contend with traditional deep-learning algorithms and has advantages over traditional algorithms.

Claims (6)

1. An ultra-deep regression-analysis learning method, composed of a point-to-line distance step, a probability-scale self-organizing step and an obtain-regression-result step, characterized in that:
(1) Point-to-line distance step: for a straight line in the data distribution, find the distance of every datum within a given range to the straight line;
(2) Probability-scale self-organizing step: for the point-to-line distances found above, carry out the machine learning of probability-scale self-organizing to obtain the data distribution of maximum probability;
(3) Obtain-regression-result step: for the maximum-probability data distribution found above, find again the closest straight line; judge whether the processing of the probability-scale self-organizing machine learning is complete; if not, continue with the probability-scale self-organizing step; if so, the maximum-probability regression result is obtained.
2. The ultra-deep regression-analysis learning method according to claim 1, characterized in that the machine learning of probability-scale self-organizing refers to: unsupervised learning into which small data are directly input; machine learning that migrates autonomously toward the direction of maximum probability in different spaces.
3. The ultra-deep regression-analysis learning method according to claim 1, characterized in that the probability scale refers to: a module set by the probability density characteristics of any one of, including, the normal distribution; the multivariate normal distribution; the log-normal distribution; the exponential distribution; the t distribution; the F distribution; the χ² distribution; the binomial distribution; the negative binomial distribution; the multinomial distribution; the Poisson distribution; the Erlang distribution (Erlang Distribution); the hypergeometric distribution; the geometric distribution; the traffic distribution; the Weibull distribution (Weibull Distribution); the angular distribution; the beta distribution (Beta Distribution); the gamma distribution (Gamma Distribution); or extended to any one of the arbitrary probability distributions in Bayesian methods (Bayesian Analysis) and Gaussian processes (Gaussian Processes).
4. The ultra-deep regression-analysis learning method according to claim 1 or 2, characterized in that the different spaces refer to: Euclidean space; and probability space, including any spatial coupling among Manhattan space (Manhattan Space); Chebyshev space (Chebyshev Space); Minkowski space (Minkowski Space); Mahalanobis space (Mahalanobis Space); and included-angle cosine space (Cosine Space).
5. The ultra-deep regression-analysis learning method according to claim 1, characterized in that the maximum-probability information refers to: the regression result obtained after the unsupervised learning into which small data are directly input, a regression result going beyond traditional statistics, i.e., the regression result closest to the parent population.
6. The ultra-deep regression-analysis learning method according to claim 1, characterized in that the multi-probability-scale self-organizing can be expressed by the following formulas:
Suppose that for a probability space there is the following set G, with gf ∈ G.

For each probability distribution gf (f = 1, 2, …, ζ) of this probability space there certainly exists a characteristic value A(G); since the probability space is a measure space, there certainly exists a probability scale M[G, A(G)] for the characteristic value A(G). When the following probability-scale self-organizing conditions are satisfied:

A^(n) = A(G^(n))

M^(n) = M[G^(n), A(G^(n))]

the set G^(n) can be made to migrate toward the direction of maximum probability on the basis of the probability scale:

G^(n) = G{A(G^(n-1)), M[G^(n-1), A(G^(n-1))]}

When n ≥ β (β being a value greater than 4), A(G^(n)) can be taken as the maximum-probability characteristic value, and M[G^(n), A(G^(n))] as the maximum-probability scale centered on the maximum-probability characteristic value.
CN201710123049.XA 2017-02-27 2017-02-27 A kind of ultra-deep regression analysis learning method Pending CN108510068A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710123049.XA CN108510068A (en) 2017-02-27 2017-02-27 A kind of ultra-deep regression analysis learning method
JP2018047267A JP6998561B2 (en) 2017-02-27 2018-02-27 A method for constructing a machine learning model for ultra-deep regression analysis, its device, its program, and a general-purpose mobile terminal device equipped with the program.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710123049.XA CN108510068A (en) 2017-02-27 2017-02-27 A kind of ultra-deep regression analysis learning method

Publications (1)

Publication Number Publication Date
CN108510068A true CN108510068A (en) 2018-09-07

Family

ID=63373316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710123049.XA Pending CN108510068A (en) 2017-02-27 2017-02-27 A kind of ultra-deep regression analysis learning method

Country Status (2)

Country Link
JP (1) JP6998561B2 (en)
CN (1) CN108510068A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523353A (en) * 2019-02-02 2020-08-11 顾泽苍 Method for processing machine understanding radar data

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111045422A (en) * 2018-10-11 2020-04-21 顾泽苍 Control method for automatically driving and importing 'machine intelligence acquisition' model
CN111126612A (en) * 2018-10-11 2020-05-08 顾泽苍 Automatic machine learning composition method
CN111858928B (en) * 2020-06-17 2022-11-18 北京邮电大学 Social media rumor detection method and device based on graph structure counterstudy
CN114818839B (en) * 2022-07-01 2022-09-16 之江实验室 Deep learning-based optical fiber sensing underwater acoustic signal identification method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9069725B2 (en) 2011-08-19 2015-06-30 Hartford Steam Boiler Inspection & Insurance Company Dynamic outlier bias reduction system and method
JP5822411B2 (en) 2013-08-12 2015-11-24 株式会社アポロジャパン Image information code conversion apparatus, image information code conversion method, image related information providing system using image code, image information code conversion program, and recording medium recording the program


Also Published As

Publication number Publication date
JP2018142325A (en) 2018-09-13
JP6998561B2 (en) 2022-01-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination