CN108415568A - Intelligent robot mind-control method based on a modal migration complex network - Google Patents
Intelligent robot mind-control method based on a modal migration complex network
- Publication number
- CN108415568A (application CN201810168227.5A; granted as CN108415568B)
- Authority
- CN
- China
- Prior art keywords
- mode
- robot
- picture
- network
- EEG signals
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
Abstract
An intelligent robot mind-control method based on a modal migration complex network. Each robot carries an image-acquisition device that captures surrounding-environment information, giving the robots a target-recognition capability. Four-electrode SSVEP EEG signals, evoked while a subject gazes at a flickering picture in a visual-stimulus interface, are acquired and transmitted wirelessly to a host computer over WiFi. The acquired 4-electrode SSVEP EEG signals are processed by multivariate empirical mode decomposition and analyzed with modal migration complex-network theory, yielding accurate classification of the SSVEP EEG signals; the stimulus picture the subject gazed at is inferred from the classification result, a robot-formation control command is generated, and mind control of the intelligent robots is thereby achieved. The invention enriches the set of direction targets the robots can select, provides strong signal-analysis capability, and achieves high recognition and control accuracy.
Description
Technical field
The present invention relates to mind-control methods, and in particular to an intelligent robot mind-control method based on a modal migration complex network.
Background art
A brain-computer interface (Brain-Computer Interface, BCI) is a communication system that connects the brain directly to computers and external devices without relying on the output pathways formed by peripheral nerves and muscles. BCI systems offer the advantages of non-invasive acquisition, easy operation, and uniquely high temporal resolution. A BCI system typically comprises four modules: EEG signal acquisition, EEG feature extraction, EEG feature classification, and peripheral device control. The feature-extraction and classification modules are the core of the interface: it is these two modules that convert EEG signals into control signals that external devices can recognize. EEG signals are weak, complex, and non-stationary, characteristics that place high demands on paradigm design, feature extraction, and classification. Finding effective feature-extraction and classification methods is therefore one of the key techniques for improving recognition accuracy. Common feature-extraction methods include power-spectrum analysis, wavelet transforms, and common spatial patterns, but these algorithms still have limitations. The multivariate empirical mode decomposition (MEMD) method used in this patent not only effectively addresses mode mixing and scale alignment, but also offers better adaptivity and time-frequency localization.
Since the first industrial robot was born in the 1950s, robotics, which integrates communications, computing, sensing, electronics, control, and artificial intelligence, has developed greatly under the pull of market demand. Augmented reality presents real-world information while simultaneously overlaying virtual information, so that the two kinds of information complement each other. Applying augmented reality to intelligent-robot mind control can effectively improve robot operability and adaptability, and lays an important foundation for robot applications in more fields.
Summary of the invention
The technical problem solved by the invention is to provide an intelligent robot mind-control method based on a modal migration complex network. The robots acquire large amounts of surrounding-environment information and send the data to a host computer, where augmented reality and target recognition are realized through deep learning. In parallel, an EEG acquisition system collects 4-electrode SSVEP EEG signals; the collected SSVEP signals undergo multivariate empirical mode decomposition, are analyzed with modal migration complex-network theory, and are classified with an SVM. The master control system issues control commands based on the classification result, realizing intelligent mind control.
The technical solution adopted by the invention is an intelligent robot mind-control method based on a modal migration complex network, comprising the following steps:
1) Acquire surrounding-environment information with the image-acquisition device carried by the robot and, combined with deep learning, give the robot a target-recognition capability.
2) With EEG acquisition equipment, collect the 4-electrode SSVEP EEG signals evoked while the subject gazes at a flickering picture in the visual-stimulus interface; filter and denoise the collected signals, then transmit them wirelessly to the host computer over WiFi.
3) Process the acquired 4-electrode SSVEP EEG signals with multivariate empirical mode decomposition combined with modal migration complex-network analysis to classify the SSVEP EEG signals accurately, infer the stimulus picture the subject gazed at, and generate a robot control command to move toward the target recognized in step 1), realizing intelligent robot mind control.
Step 1) comprises:
(1) The robot acquires pictures of the surrounding environment from different viewing angles; the robot may be a single robot or a formation of robots capable of cooperative operation.
(2) The collected pictures are uploaded to a server over WiFi.
(3) A CNN deep-learning model is built and trained on the environment pictures for multi-target recognition and localization; the recognizable target types are unrestricted.
(4) Environment pictures acquired by the robot in real time are fed into the trained CNN model to recognize 12 targets (when more than 12 targets are present, 12 are selected uniformly by spatial position). Each selected target is labeled with its name and assigned a designation A1~A12; the designations and numbers are displayed on the user interface as the tasks to be performed.
The SSVEP visual-stimulus interface of step 2) displays 12 pictures flickering at different frequencies, numbered consecutively B1~B12. The 12 frequencies lie between 8 and 13.5 Hz with a minimum spacing of 0.5 Hz, distributed in the order 8 Hz, 10 Hz, 12 Hz, 11.5 Hz, 13.5 Hz, 9.5 Hz, 9 Hz, 11 Hz, 13 Hz, 10.5 Hz, 12.5 Hz, 8.5 Hz.
In step 2) the electrodes are placed according to the 10-20 international lead system: the four occipital-region electrodes O1, O2, Oz and Pz are used, with Cz as the reference electrode and GND as the ground electrode.
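The SSVEP principle behind step 2) can be illustrated with a minimal Python sketch (not part of the patent): the gazed picture is identified by finding which of the 12 flicker frequencies dominates the spectrum of an EEG epoch. The sampling rate (250 Hz) and 4 s epoch length are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

# The 12 flicker frequencies, in the order given for pictures B1..B12.
FLICKER_HZ = [8.0, 10.0, 12.0, 11.5, 13.5, 9.5, 9.0, 11.0, 13.0, 10.5, 12.5, 8.5]

def detect_flicker(epoch, fs):
    """Return the picture number (1..12) whose flicker frequency has the
    largest spectral amplitude in the epoch."""
    spectrum = np.abs(np.fft.rfft(epoch))
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    # Amplitude at the bin nearest each candidate flicker frequency.
    amps = [spectrum[np.argmin(np.abs(freqs - f))] for f in FLICKER_HZ]
    return int(np.argmax(amps)) + 1  # 1-based picture number

fs = 250                              # assumed EEG sampling rate
t = np.arange(0, 4.0, 1.0 / fs)       # 4 s epoch
epoch = np.sin(2 * np.pi * 11.5 * t)  # idealized response while gazing at B4
print(detect_flicker(epoch, fs))      # → 4
```

A real SSVEP response would be noisy and mixed with background EEG, which is why the patent applies MEMD and complex-network analysis rather than a raw spectral peak.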
Step 3) comprises:
(3.1) Process the 4-electrode SSVEP EEG signals with multivariate empirical mode decomposition, obtaining 4 groups of intrinsic mode functions, one group per electrode. Within each group, the components related to the picture flicker frequencies of the SSVEP visual-stimulus interface are merged, producing 4 new intrinsic mode functions, i.e. 4 new sequences that replace the original 4-electrode SSVEP EEG signals for subsequent analysis.
(3.2) From the 4 new sequences, construct the modal migration complex network, realizing data fusion.
(3.3) For the modal migration complex network, compute the network indices: average node degree, average node betweenness, average clustering coefficient, global clustering coefficient, clustering-coefficient entropy, and average shortest path length. Feed these indices into a support vector machine for multi-target classification, and infer from the classification result which stimulus picture the subject gazed at.
(3.4) Associate the picture numbers B1~B12 of the stimulus pictures with the recognized target designations A1~A12. The association is: when the subject gazes at stimulus picture Bυ, the corresponding SSVEP EEG signal is evoked; once steps (3.1) to (3.3) have identified that picture, the brain-computer interface commands the robot to move toward target Aυ, where υ ∈ {1, 2, ..., 12}.
The modal migration complex network of step (3.2) is constructed as follows:
(3.21) Given the multichannel sequence x_p = {x_{p,1}, ..., x_{p,N}}, p = 1, 2, 3, 4, comprising 4 channels of length N with x_{p,i} the i-th element of channel p, split the data by a sliding window of size α > 10 and step τ, obtaining a series of multichannel segments X_h of length α, where h is a positive integer satisfying α + (h − 1)τ ≤ N.
(3.22) For any window segment X_h, select two of its channels p1 and p2 and compute the correlation coefficient between the two sequences: r = Σ_i (x_{p1,i} − x̄_{p1})(x_{p2,i} − x̄_{p2}) / sqrt(Σ_i (x_{p1,i} − x̄_{p1})² · Σ_i (x_{p2,i} − x̄_{p2})²), where x̄_p denotes the mean of channel p over the window.
(3.23) Following the method of (3.22), compute for any window segment X_h the correlation coefficient between every pair of channel sequences: r12, r13, r14, r23, r24, r34, where 1, 2, 3, 4 are the channel numbers corresponding to p = 1, 2, 3, 4 of step (3.21).
(3.24) For any window segment X_h, sort r12, r13, r14, r23, r24, r34 in ascending order; the resulting ordering constitutes a mode. There are 6! = 720 possible orderings, hence 720 modes.
(3.25) Take the modes as network nodes. Edges between nodes are determined as follows: segment X_h determines mode γ_h and segment X_{h+1} determines mode γ_{h+1}; a directed edge is drawn from γ_h to γ_{h+1}, except that when γ_h and γ_{h+1} are the same mode the edge is ignored. This establishes the modal migration complex network with 720 network nodes.
The intelligent robot mind-control method based on a modal migration complex network of the present invention has the following advantages:
(1) Information is collected by robots, so multiple groups of environmental information can be obtained at the same moment and the detection site can be varied to cover different regions, enriching the direction targets the robots can select.
(2) Combining deep learning with complex networks gives strong signal-analysis capability and high recognition and control accuracy.
(3) Introducing the modal migration complex network makes it possible to better mine the latent regularities in SSVEP EEG signals, helping to raise classification accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of the intelligent robot mind-control method based on a modal migration complex network of the present invention;
Fig. 2 is the SSVEP multi-class stimulus interface;
Fig. 3 is a schematic diagram of the 10-20 international lead system.
Detailed description of embodiments
The intelligent robot mind-control method based on a modal migration complex network of the present invention is described in detail below with reference to embodiments and the accompanying drawings.
The method of the present invention, whose flow chart is shown in Fig. 1, comprises the following steps:
1) Acquire surrounding-environment information with the image-acquisition device carried by the robot and, combined with deep learning, give the robot a target-recognition capability. This includes:
(1) The robot acquires pictures of the surrounding environment from different viewing angles; the robot may be a single robot or a formation of robots capable of cooperative operation. In a concrete embodiment, for example, a group of four robots moves cooperatively in a diamond formation, each acquiring environment pictures in one of the four directions (front, rear, left, right).
(2) The collected pictures are uploaded to a server over WiFi.
(3) A CNN deep-learning model is built and trained on the environment pictures for multi-target recognition and localization; the recognizable target types are unrestricted. The training process of the CNN model comprises the following steps:
(3.1) Convolution: each input sample is convolved with a trainable kernel, a bias is added, and an activation function is applied to obtain the convolutional layer.
(3.2) Down-sampling: within each sub-region, the maximum pixel value is selected to replace that region; traversing the whole sample and combining the resulting pixels yields the feature map of the layer's input.
(3.3) Fully connected output: information propagates layer by layer from the input layer to the output layer; at each layer the input is multiplied by the layer's weight matrix to obtain the output.
(3.4) Error back-propagation: the minimum of the error E(w) of the parameter vector w on the training set χ is found by gradient descent. When E(w) is a differentiable function of the variable vector, the gradient vector of partial derivatives is ∇E(w) = [∂E/∂w_1, ∂E/∂w_2, ..., ∂E/∂w_d]. Starting from a random vector w, E is minimized by repeatedly updating w in the direction opposite to the gradient: w ← w − η∇E(w), where η is the learning rate. When E reaches its minimum the derivative equals 0 and the process terminates.
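The gradient-descent update of step (3.4), w ← w − η∇E(w), can be sketched on a toy quadratic error E(w) = ||w − w*||². This is purely illustrative; the patent applies the update to the CNN's weight matrices.

```python
import numpy as np

def gradient_descent(grad, w0, eta=0.1, steps=200):
    """Repeatedly step against the gradient: w <- w - eta * grad(w)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - eta * grad(w)
    return w

w_star = np.array([1.0, -2.0])
grad_E = lambda w: 2.0 * (w - w_star)   # gradient of E(w) = ||w - w*||^2
w = gradient_descent(grad_E, [0.0, 0.0])
print(w)                                # converges to w* = [1, -2]
```

For this convex E the iteration contracts the error by a factor (1 − 2η) per step, so it converges to the unique minimum where ∇E = 0, matching the termination condition in the text.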
(4) Environment pictures acquired by the robot in real time are fed into the trained CNN model to recognize 12 targets (when more than 12 targets are present, 12 are selected uniformly by spatial position). Each selected target is labeled with its name and assigned a designation A1~A12; the designations and numbers are displayed on the user interface as the tasks to be performed.
2) With EEG acquisition equipment, collect the 4-electrode SSVEP EEG signals evoked while the subject gazes at a flickering picture in the visual-stimulus interface; filter and denoise the collected signals, then transmit them wirelessly to the host computer over WiFi. As shown in Fig. 2, the SSVEP visual-stimulus interface displays 12 pictures flickering at different frequencies, numbered consecutively B1~B12. The 12 frequencies lie between 8 and 13.5 Hz with a minimum spacing of 0.5 Hz, distributed in the order 8 Hz, 10 Hz, 12 Hz, 11.5 Hz, 13.5 Hz, 9.5 Hz, 9 Hz, 11 Hz, 13 Hz, 10.5 Hz, 12.5 Hz, 8.5 Hz.
The electrodes are placed according to the 10-20 international lead system: the four occipital-region electrodes O1, O2, Oz and Pz are used, with Cz as the reference electrode and GND as the ground electrode; the specific electrode placement is shown in Fig. 3.
3) Process the acquired 4-electrode SSVEP EEG signals with the multivariate empirical mode decomposition (Multivariate Empirical Mode Decomposition, MEMD) method combined with modal migration complex-network analysis to classify the SSVEP EEG signals accurately, infer the stimulus picture the subject gazed at, and generate a robot control command to move toward the target recognized in step 1), realizing intelligent robot mind control. This includes:
(3.1) Process the 4-electrode SSVEP EEG signals with MEMD, obtaining 4 groups of intrinsic mode functions (IMF), one group per electrode. Within each group, the components related to the picture flicker frequencies of the SSVEP visual-stimulus interface are merged, producing 4 new intrinsic mode functions, i.e. 4 new sequences that replace the original 4-electrode SSVEP EEG signals for subsequent analysis.
The multivariate empirical mode decomposition proceeds as follows:
(3.11) Represent the 4-electrode SSVEP EEG signals by the 4-dimensional vector sequence v(t) = [v1(t), v2(t), v3(t), v4(t)] of length N. Let x^{θk} denote the direction vector on the 3-sphere corresponding to the angle set θk; K direction vectors are established on the 3-sphere, k = 1, 2, 3, ..., K.
(3.12) Using Hammersley sequence sampling, obtain the K 4-dimensional direction vectors on the 3-sphere.
(3.13) With the 4-dimensional vector sequence v(t) as the input signal, compute its projection onto each direction vector: p^{θk}(t) = v(t) · x^{θk}, the inner product of the signal vector with the (unit-norm) direction vector.
(3.14) For the projection set {p^{θk}(t)}, k = 1, 2, 3, ..., K, determine the time instants t^{θk}_u corresponding to the extrema, where u denotes the extreme-point position and u ∈ [1, N].
(3.15) Interpolate the extreme-point coordinates [t^{θk}_u, v(t^{θk}_u)] with a multivariate spline interpolation function to obtain the K multivariate envelopes e^{θk}(t), k = 1, 2, 3, ..., K.
(3.16) Average the K multivariate envelopes on the 3-sphere: m(t) = (1/K) Σ_{k=1}^{K} e^{θk}(t).
(3.17) Extract the detail d(t) = v(t) − m(t) and repeat the sifting on d(t).
(3.18) Continue until d(t) satisfies the multivariate intrinsic-mode-function criterion, whereupon the original signal v(t) has been decomposed into a series of intrinsic mode functions h_f(t) plus a residual r(t): v(t) = Σ_{f=1}^{q} h_f(t) + r(t), where q is the number of intrinsic mode functions. The intrinsic mode functions corresponding to each variable of the 4-electrode SSVEP EEG signals are aligned across the 4 electrodes by frequency scale, forming the multivariate intrinsic mode functions.
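The projection step (3.13) can be sketched in Python. As an assumption of this sketch, random unit vectors stand in for the low-discrepancy Hammersley directions of step (3.12); the extrema detection, spline interpolation, and sifting loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_directions(K, dim=4):
    """K approximately uniform unit vectors on the (dim-1)-sphere.
    Random normals stand in for Hammersley sampling here (an assumption)."""
    d = rng.standard_normal((K, dim))
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def project(v, directions):
    """v: (N, 4) multichannel signal. Returns the (K, N) projections
    p^{theta_k}(t) = v(t) . x^{theta_k} for every direction vector."""
    return directions @ v.T

v = rng.standard_normal((1000, 4))      # stand-in for the 4-electrode signal
dirs = unit_directions(K=64)
p = project(v, dirs)
print(p.shape)                          # (64, 1000): one projection per direction
```

Each row of `p` is the 1-D signal on which MEMD locates extrema and builds one envelope e^{θk}(t); averaging the K envelopes then gives m(t) as in step (3.16).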
(3.2) From the 4 new sequences generated in step (3.1), construct the modal migration complex network, realizing data fusion. The construction is as follows:
(3.21) Given the multichannel sequence x_p = {x_{p,1}, ..., x_{p,N}}, p = 1, 2, 3, 4, comprising 4 channels of length N with x_{p,i} the i-th element of channel p, split the data by a sliding window of size α > 10 and step τ, obtaining a series of multichannel segments X_h of length α, where h is a positive integer satisfying α + (h − 1)τ ≤ N.
(3.22) For any window segment X_h, select two of its channels p1 and p2 and compute the correlation coefficient between the two sequences: r = Σ_i (x_{p1,i} − x̄_{p1})(x_{p2,i} − x̄_{p2}) / sqrt(Σ_i (x_{p1,i} − x̄_{p1})² · Σ_i (x_{p2,i} − x̄_{p2})²), where x̄_p denotes the mean of channel p over the window.
(3.23) Following the method of (3.22), compute for any window segment X_h the correlation coefficient between every pair of channel sequences: r12, r13, r14, r23, r24, r34, where 1, 2, 3, 4 are the channel numbers corresponding to p = 1, 2, 3, 4 of step (3.21).
(3.24) For any window segment X_h, sort r12, r13, r14, r23, r24, r34 in ascending order; the resulting ordering constitutes a mode. There are 6! = 720 possible orderings, hence 720 modes.
(3.25) Take the modes as network nodes. Edges between nodes are determined as follows: segment X_h determines mode γ_h and segment X_{h+1} determines mode γ_{h+1}; a directed edge is drawn from γ_h to γ_{h+1}, except that when γ_h and γ_{h+1} are the same mode the edge is ignored. This establishes the modal migration complex network with 720 network nodes.
(3.3) For the modal migration complex network, compute the network indices of average node strength, average node betweenness, average clustering coefficient, global clustering coefficient, clustering-coefficient entropy, and average shortest path length; feed them into a support vector machine (SVM) for multi-target classification, and infer from the classification result which stimulus picture the subject gazed at. The indices are computed as follows:
(3.31) Network average node strength O: the strength S_i of an arbitrary node i is the sum of the weights of all edges connected to that node, S_i = Σ_j w_ij; then O = ⟨S_i⟩, where ⟨·⟩ denotes averaging over the l nodes of the network.
(3.32) Network average node betweenness ⟨b_i⟩: the betweenness of node i is b_i = Σ_{λ≠μ≠i} σ_λμ(i)/σ_λμ, where σ_λμ is the number of shortest paths connecting node λ and node μ, and σ_λμ(i) is the number of those paths that pass through node i.
(3.33) Network average clustering coefficient ⟨C_i⟩: the clustering coefficient of node i is C_i = Δ_i/Λ_i, where Δ_i is the number of closed triangles in the complex network containing node i, and Λ_i is the number of triangles in the complex network with at least two edges issuing from node i.
(3.34) Network global clustering coefficient: C_g = 3 × (number of closed triangles in the network) / (number of connected triples of nodes).
(3.35) Network clustering-coefficient entropy: H = − Σ_i p_i ln p_i, with p_i = C_i / Σ_j C_j.
(3.36) Network average shortest path length: L = (2/(l(l − 1))) Σ_{λ≠μ} U_λμ, where node λ and node μ are distinct and U_λμ denotes the shortest path length between node λ and node μ.
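Two of the indices of step (3.3) can be sketched on a toy weighted graph in plain Python: the average node strength ⟨S_i⟩ of (3.31), and the average shortest path length of (3.36) in its unweighted hop-count form (a simplification of the patent's U_λμ). The graph and its weights are arbitrary illustration data.

```python
from collections import deque

# Toy undirected weighted graph: a triangle a-b-c.
edges = {("a", "b"): 2.0, ("b", "c"): 1.0, ("a", "c"): 3.0}
nodes = sorted({n for e in edges for n in e})

# (3.31) node strength = sum of incident edge weights; average over nodes.
strength = {n: 0.0 for n in nodes}
for (u, v), w in edges.items():
    strength[u] += w
    strength[v] += w
avg_strength = sum(strength.values()) / len(nodes)

# (3.36) hop-count shortest paths via breadth-first search from each node.
def hops(src):
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for (a, b) in edges:
            for nxt in ((b,) if a == u else (a,) if b == u else ()):
                if nxt not in dist:
                    dist[nxt] = dist[u] + 1
                    q.append(nxt)
    return dist

pair_lengths = [hops(s)[t] for i, s in enumerate(nodes) for t in nodes[i + 1:]]
avg_path = sum(pair_lengths) / len(pair_lengths)
print(avg_strength, avg_path)  # 4.0 1.0
```

On the 720-node modal migration network the same quantities, together with betweenness, clustering coefficients, and clustering entropy, form the feature vector handed to the SVM.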
(3.4) Associate the picture numbers B1~B12 of the stimulus pictures with the recognized target designations A1~A12. The association is: when the subject gazes at stimulus picture Bυ, the corresponding SSVEP EEG signal is evoked; once the methods of (3.1) to (3.3) have identified that picture, the brain-computer interface commands the robot to move toward target Aυ, where υ ∈ {1, 2, ..., 12}.
The above description of the invention and its embodiments is not limiting; what is described in the embodiments is only one implementation of the invention. Any structure or embodiment that is similar to this technical solution and designed without inventive effort, without departing from the spirit of the invention, falls within the protection scope of the invention.
Claims (6)
1. An intelligent robot mind-control method based on a modal migration complex network, characterized by comprising the following steps:
1) acquiring surrounding-environment information with an image-acquisition device carried by a robot and, combined with deep learning, giving the robot a target-recognition capability;
2) collecting, with EEG acquisition equipment, the 4-electrode SSVEP EEG signals evoked while a subject gazes at a flickering picture in a visual-stimulus interface; filtering and denoising the collected 4-electrode SSVEP EEG signals, then transmitting them wirelessly to a host computer over WiFi;
3) processing the acquired 4-electrode SSVEP EEG signals with multivariate empirical mode decomposition combined with modal migration complex-network analysis to classify the SSVEP EEG signals accurately, inferring the stimulus picture the subject gazed at, and generating a robot control command to move toward the target recognized in step 1), realizing intelligent robot mind control.
2. The intelligent robot mind-control method based on a modal migration complex network according to claim 1, characterized in that step 1) comprises:
(1) the robot acquires pictures of the surrounding environment from different viewing angles, the robot being a single robot or a formation of robots capable of cooperative operation;
(2) the collected pictures are uploaded to a server over WiFi;
(3) a CNN deep-learning model is built and trained on the environment pictures for multi-target recognition and localization, the recognizable target types being unrestricted;
(4) environment pictures acquired by the robot in real time are fed into the trained CNN model to recognize 12 targets, 12 targets being selected uniformly by spatial position when more than 12 are present; each selected target is labeled with its name and assigned a designation A1~A12, and the designations and numbers are displayed on the user interface as the tasks to be performed.
3. The intelligent robot mind-control method based on a modal migration complex network according to claim 1, characterized in that the SSVEP visual-stimulus interface of step 2) displays 12 pictures flickering at different frequencies, numbered consecutively B1~B12, wherein the 12 frequencies lie between 8 and 13.5 Hz with a minimum spacing of 0.5 Hz, distributed in the order 8 Hz, 10 Hz, 12 Hz, 11.5 Hz, 13.5 Hz, 9.5 Hz, 9 Hz, 11 Hz, 13 Hz, 10.5 Hz, 12.5 Hz, 8.5 Hz.
4. The intelligent robot mind-control method based on a modal migration complex network according to claim 1, characterized in that in step 2) the electrodes are placed according to the 10-20 international lead system, using the four occipital-region electrodes O1, O2, Oz and Pz, with Cz as the reference electrode and GND as the ground electrode.
5. The intelligent robot mind-control method based on a modal migration complex network according to claim 1, characterized in that step 3) comprises:
(3.1) processing the 4-electrode SSVEP EEG signals with multivariate empirical mode decomposition to obtain 4 groups of intrinsic mode functions, one group per electrode; within each group, merging the components related to the picture flicker frequencies of the SSVEP visual-stimulus interface to produce 4 new intrinsic mode functions, i.e. 4 new sequences that replace the original 4-electrode SSVEP EEG signals for subsequent analysis;
(3.2) constructing the modal migration complex network from the 4 new sequences, realizing data fusion;
(3.3) computing for the modal migration complex network the network indices of average node degree, average node betweenness, average clustering coefficient, global clustering coefficient, clustering-coefficient entropy, and average shortest path length; feeding the indices into a support vector machine for multi-target classification; and inferring from the classification result the stimulus picture the subject gazed at;
(3.4) associating the picture numbers B1~B12 of the stimulus pictures with the recognized target designations A1~A12, the association being that when the subject gazes at stimulus picture Bυ, the corresponding SSVEP EEG signal is evoked; once steps (3.1) to (3.3) have identified that picture, the brain-computer interface commands the robot to move toward target Aυ, where υ ∈ {1, 2, ..., 12}.
6. The intelligent robot mind-control method based on a modal migration complex network according to claim 5, characterized in that the construction of the modal migration complex network in step (3.2) is specifically:
(3.21) given the multichannel sequence x_p = {x_{p,1}, ..., x_{p,N}}, p = 1, 2, 3, 4, comprising 4 channels of length N with x_{p,i} the i-th element of channel p, splitting the data by a sliding window of size α > 10 and step τ to obtain a series of multichannel segments X_h of length α, where h is a positive integer satisfying α + (h − 1)τ ≤ N;
(3.22) for any window segment X_h, selecting two of its channels p1 and p2 and computing the correlation coefficient between the two sequences: r = Σ_i (x_{p1,i} − x̄_{p1})(x_{p2,i} − x̄_{p2}) / sqrt(Σ_i (x_{p1,i} − x̄_{p1})² · Σ_i (x_{p2,i} − x̄_{p2})²), where x̄_p denotes the mean of channel p over the window;
(3.23) following the method of (3.22), computing for any window segment X_h the correlation coefficient between every pair of channel sequences: r12, r13, r14, r23, r24, r34, where 1, 2, 3, 4 are the channel numbers corresponding to p = 1, 2, 3, 4 of step (3.21);
(3.24) for any window segment X_h, sorting r12, r13, r14, r23, r24, r34 in ascending order, the resulting ordering constituting a mode, there being 6! = 720 possible orderings and hence 720 modes;
(3.25) taking the modes as network nodes, with edges between nodes determined as follows: segment X_h determines mode γ_h and segment X_{h+1} determines mode γ_{h+1}; a directed edge is drawn from γ_h to γ_{h+1}, except that when γ_h and γ_{h+1} are the same mode the edge is ignored; thereby establishing the modal migration complex network with 720 network nodes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810168227.5A CN108415568B (en) | 2018-02-28 | 2018-02-28 | Robot intelligent idea control method based on modal migration complex network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108415568A true CN108415568A (en) | 2018-08-17 |
CN108415568B CN108415568B (en) | 2020-12-29 |
Family
ID=63129457
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810168227.5A Active CN108415568B (en) | 2018-02-28 | 2018-02-28 | Robot intelligent idea control method based on modal migration complex network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108415568B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109318207A (en) * | 2018-11-07 | 2019-02-12 | 西安交通大学 | A kind of lower extremity movement readiness potential detection system and method using myoelectricity timing |
CN109558004A (en) * | 2018-10-31 | 2019-04-02 | 杭州程天科技发展有限公司 | A kind of control method and device of human body auxiliary robot |
CN111007725A (en) * | 2019-12-23 | 2020-04-14 | 昆明理工大学 | Method for controlling intelligent robot based on electroencephalogram neural feedback |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030171688A1 (en) * | 2002-03-06 | 2003-09-11 | Yoo Jae Sup | Mind controller |
CN106292705A (en) * | 2016-09-14 | 2017-01-04 | 东南大学 | Many rotor wing unmanned aerial vehicles idea remote control system based on Bluetooth brain wave earphone and operational approach |
CN106650709A (en) * | 2017-01-22 | 2017-05-10 | 深圳市唯特视科技有限公司 | Sensor data-based deep learning step detection method |
CN107357311A (en) * | 2017-07-28 | 2017-11-17 | 南京航空航天大学 | A kind of reconnaissance system with unmanned plane based on mixing control technology |
Non-Patent Citations (1)
Title |
---|
Zhang Kun et al.: "Intelligent Wheelchair Control System Based on Brainwave Sensors", Measurement & Control Technology (《测控技术》) * |
Also Published As
Publication number | Publication date |
---|---|
CN108415568B (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | Review of image fusion based on pulse-coupled neural network | |
Huang et al. | Retracted: Jointly network image processing: Multi‐task image semantic segmentation of indoor scene based on CNN | |
WO2015034759A1 (en) | Pattern recognition system | |
CN108415568A (en) | The intelligent robot idea control method of complex network is migrated based on mode | |
JP2013164696A (en) | Image processing device, image processing method and program | |
CN109903299A (en) | A kind of conditional generates the heterologous remote sensing image registration method and device of confrontation network | |
CN108198044A (en) | Methods of exhibiting, device, medium and the electronic equipment of merchandise news | |
CN110428470B (en) | Augmented reality glasses eye movement interaction self-calibration method based on electroencephalogram assistance | |
Huang et al. | A multiscale urban complexity index based on 3D wavelet transform for spectral–spatial feature extraction and classification: an evaluation on the 8-channel WorldView-2 imagery | |
CN111124902A (en) | Object operating method and device, computer-readable storage medium and electronic device | |
Ruget et al. | Pixels2pose: Super-resolution time-of-flight imaging for 3d pose estimation | |
Yu et al. | A radar-based human activity recognition using a novel 3-D point cloud classifier | |
Islam et al. | Applied human action recognition network based on SNSP features | |
CN114066984A (en) | Three-dimensional posture classification method based on two-dimensional key points and related device | |
Wang et al. | Design and implementation of image fusion system | |
Zhang et al. | Cross-domain gesture recognition via learning spatiotemporal features in Wi-Fi sensing | |
CN115496911A (en) | Target point detection method, device, equipment and storage medium | |
Liu et al. | SFusion: Self-attention Based N-to-One Multimodal Fusion Block | |
Gang et al. | Skeleton-based action recognition with low-level features of adaptive graph convolutional networks | |
Workman et al. | Augmenting depth estimation with geospatial context | |
CN114638744A (en) | Human body posture migration method and device | |
Li et al. | Human standing posture recognition based on CNN and pressure floor | |
CN107025433A (en) | Video Events class people's concept learning method and device | |
CN201782759U (en) | Four-dimensional electrocardiogram diagnostic apparatus | |
Yang et al. | Leg Posture Correction System for Physical Education Students Based on Multimodal Information Processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||