CN112214105A - Task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method - Google Patents
- Publication number
- CN112214105A (application CN202010925782.5A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Abstract
The application relates to a task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method, which comprises the following steps: receiving control interaction data through a traditional touch-control device or an intelligent non-contact device; pre-storing an operation data set, task-related system parameters and situation-related system parameters; fusing the control interaction data, the operation data set, the task-related system parameters and the situation-related system parameters into an intention prediction model based on a hybrid nonlinear support vector machine algorithm; extracting control interaction characteristics from the intention prediction model through trained self-learning cluster analysis; searching the operation data set for the corresponding information to be displayed according to the control interaction characteristics; and displaying the information to be displayed.
Description
Technical Field
The invention belongs to the field of unmanned aerial vehicle command control application, and particularly relates to a task-driven multi-channel 3D real-sensing unmanned aerial vehicle control interaction method.
Background
As unmanned aerial vehicles are applied ever more deeply in the military field, new combat demands typified by wide-area coordination and swarm control keep emerging. The touch-based interaction mode of traditional control devices can no longer fully meet the operational environment of future multi-UAV control and its massive information data.
The interaction modes of current unmanned aerial vehicle command-and-control systems have three main shortcomings. First, the interaction mode is single: human-machine interaction basically relies only on traditional control devices, i.e., instructions are issued through the presses and stick displacements of the control stick, throttle lever, pedals, link-antenna control lever and control panel, while natural multi-channel interaction modes such as voice and gesture are lacking. Second, interactive feedback is lacking: the operator receives essentially no interactive feedback, and after an instruction is sent its result can only be checked in the monitoring software, which invisibly increases the operator's monitoring cost; feedback forms such as 3D voice alarms and vibration are absent. Third, the level of intelligence is low: intelligent assistance for operators is lacking, and emerging interaction techniques such as artificial intelligence, big-data multi-modal interaction and intelligent push display are basically unused on existing unmanned aerial vehicle ground stations, so interactive assistance remains weak and the operator's load is not effectively reduced. How to effectively integrate massive information, reduce the operator's workload, and support rapid response and decision-making by unmanned aerial vehicle operators has become the key to the development of the unmanned aerial vehicle ground station.
Disclosure of Invention
The invention provides a task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method. The technique innovatively designs a multi-stage unmanned aerial vehicle ground station human-computer interaction architecture that reduces the workload of operators and supports the rapid response and decision-making of unmanned aerial vehicle operators, which have become the key to ground station development.
The application relates to a task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method, which comprises the following steps:
receiving control interaction data through traditional touch control equipment or intelligent non-contact equipment;
pre-storing an operation data set, task related system parameters and situation related system parameters;
fusing the control interaction data, the operation data set, the task related system parameters and the situation related system parameters into an intention prediction model based on a hybrid nonlinear support vector machine algorithm;
extracting control interaction characteristics in the intention prediction model through training self-learning cluster analysis;
searching corresponding information to be displayed in the operation data set according to the control interaction characteristics;
and displaying the information to be displayed.
Specifically, the conventional touch device includes an active device, a passive device, and an active/passive device;
the intelligent non-contact device comprises touch control, gesture control, voice recognition and eye movement detection.
Specifically, displaying the information to be displayed includes:
and displaying the information to be displayed in an information enhancement mode and a step-down interface switching mode.
Specifically, the operation data set comprises basic pilot information, age and flight duration;
the control interaction data comprises real-time operation data of a pilot, image data, video data, voice data and system parameters detected by in-station and out-station sensors;
the task related system parameters comprise airplane position, height, air route and task load starting state parameters in different flight task stages;
the situation-related system parameters comprise plan change parameters acquired by different sensors, severe regional climate conditions and equipment failure conditions.
Specifically, the system parameters include a height parameter, a speed parameter, and a coordinate parameter.
Specifically, the flight mission phase comprises preparation before takeoff, takeoff and departure, monitoring mission execution, attack mission execution, return journey landing and after-flight processing.
Specifically, the information enhancement mode is as follows: the information to be displayed is made prominent on the interface through highlighting, flashing or animation effects.
Specifically, the step-down interface switching mode is as follows: based on the current task and the operator operation information from multi-source channels, irrelevant display content is reduced so that the information to be displayed is highlighted on the interface according to the different operating requirements of the unmanned aerial vehicle.
In conclusion, the invention provides a task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction technique that greatly improves the human-machine interaction capability of existing unmanned aerial vehicle ground stations. Its specific advantages are as follows:
1. The technique innovatively designs a generalized multi-stage unmanned aerial vehicle ground station human-computer interaction architecture that clarifies the information of the input layer, model layer and output layer of a future interaction mode; the architecture is independent of the ground station hardware environment and can be flexibly applied to existing ground stations.
2. The technique effectively integrates massive information: by formulating information-enhanced display and step-down interface switching logic, adaptive switching and information pushing reconcile massive information with a limited display interface.
3. The technique raises the intelligence level of ground station interaction: by designing multi-channel interaction and feedback capabilities covering both traditional touch-control types (active, passive, active/passive) and intelligent non-contact types, the intelligent interaction capability of the unmanned ground station is greatly improved.
Drawings
Fig. 1 is a schematic flow chart of an implementation of a task-driven multi-channel 3D real-sensing unmanned aerial vehicle control interaction technology provided by the present application;
fig. 2 is a schematic diagram of a man-machine interaction mode of a ground station of a conventional contact-type unmanned aerial vehicle provided by the present application;
fig. 3 is a multi-channel human-computer interaction diagram of a converged intelligent non-contact ground station provided by the present application;
fig. 4 is a diagram illustrating a human-computer interaction architecture of a generalized multi-stage ground station for an unmanned aerial vehicle according to the present application;
fig. 5 is a task-driven multi-channel drone interface switching logic diagram provided by the present application.
Detailed Description
The invention is explained in further detail below with reference to the drawings.
The flow of the embodiment of the invention is as follows:
A task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method comprises the following steps:
s10: the system comprises traditional touch control type (active, passive, active/passive) and intelligent non-contact type multi-channel interaction and feedback capabilities;
Traditional touch-control interaction mainly refers to completing human-machine interaction through the forces exchanged once a person's limbs are in contact with a device. The mainstream contact-type unmanned aerial vehicle ground station human-machine interaction devices can currently be divided into three types: active interaction devices operated by force applied by the person, passive interaction devices that feed back to the operator through vibration or other forms, and active/passive interaction devices that provide both active operation and feedback. The human-machine interaction modes of a traditional touch-control unmanned aerial vehicle ground station are shown in Table 1 and Fig. 2:
TABLE 1 typical touch-control type human-computer interaction mode table
Intelligent non-contact interaction is a means of controlling instruction issuance through the recognition of information such as images, gestures and voice. The technique integrates a multi-modal interactive design covering the eyes, ears, mouth, hands and so on, which can greatly raise the human-machine interaction level of the unmanned aerial vehicle. The intelligent non-contact interaction flow is shown in Fig. 3.
TABLE 2 typical intelligent non-contact interactive mode table
S20: designing a human-computer interaction architecture of a universal multistage unmanned aerial vehicle ground station;
Specifically, the design of the generalized multi-stage unmanned aerial vehicle ground station human-computer interaction architecture goes beyond the human-machine interaction mode of existing ground stations and introduces multidimensional sensors; as shown in Fig. 4, it mainly comprises the following parts:
s201: the input layer is a multi-element input and comprises an operation data set and implicit feedback (touch control type and intelligent non-contact type data).
The architecture is based on the task structure in the theory of human-machine intelligent cognitive information architecture and uses multivariate parameters as input, including the operation data set and the control interaction data. The operation data set mainly comprises basic pilot information such as age and flight duration; the control interaction data comprises the pilot's real-time operation data, data detected by sensors inside and outside the station (images, video, voice, etc.) and system parameters (such as altitude, speed and coordinates).
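The multivariate input just described can be sketched as simple container types. This is only an illustration of how the operation data set and control interaction data might be organized; the field names (pilot_name, flight_hours, sensor_frames, and so on) are assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OperationDataSet:
    """Pre-stored operator profile: basic pilot information."""
    pilot_name: str
    age: int
    flight_hours: float  # cumulative flight duration

@dataclass
class ControlInteractionData:
    """One frame of multi-channel input from touch-control and non-contact devices."""
    realtime_ops: List[str] = field(default_factory=list)    # e.g. stick/throttle events
    sensor_frames: List[bytes] = field(default_factory=list) # image/video/voice payloads
    altitude: float = 0.0
    speed: float = 0.0
    coordinates: Tuple[float, float] = (0.0, 0.0)

# Example frame of input-layer data:
profile = OperationDataSet(pilot_name="P-01", age=34, flight_hours=1200.5)
frame = ControlInteractionData(realtime_ops=["throttle_up"], altitude=3000.0, speed=180.0)
```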
S202: the model layer is based on a fusion algorithm before self-learning, the scene, the task, the display feedback and the implicit feedback are fused into one model, and the characteristics in the data are extracted through the training and self-learning cluster analysis to form a uniform pushing result.
The establishment of the model layer depends on the operation parameters generated by the first-stage interaction mode (traditional contact) and the task- and situation-related system parameters generated by the second-stage interaction mode (intelligent non-contact). The task-related system parameters are judged from the aircraft's position, altitude, route, payload activation state and the like, in combination with the current flight-mission phase (pre-takeoff preparation, takeoff and departure, surveillance mission execution, attack mission execution, return and landing, post-flight processing). The situation-related system parameters comprise plan changes, severe regional climate, failure of certain equipment and the like, and are obtained from the sensors for judgment.
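The judgment of the flight-mission phase from task-related system parameters could, for illustration, be a simple rule table over altitude, speed, payload state and return status. The thresholds and rule order below are invented for this sketch and are not the patent's actual logic.

```python
def judge_phase(altitude_m, ground_speed_mps, payload_on, returning):
    """Illustrative rule-based flight-mission-phase judgment; phase names
    follow the description, thresholds are assumptions."""
    if altitude_m < 10 and ground_speed_mps < 1:
        return "pre-takeoff preparation"
    if returning:
        return "return and landing"
    if altitude_m < 500:
        return "takeoff and departure"
    if payload_on:
        return "attack mission execution"
    return "surveillance mission execution"

phase = judge_phase(altitude_m=3000, ground_speed_mps=150,
                    payload_on=False, returning=False)
```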
S203: the output layer is an operational intent recognition formed by combining operator state representation (display feedback, implicit feedback) and contextual task representation.
The output layer combines the operator state representation (display feedback, implicit feedback) and the operation specific intention recognition formed by the scene task representation as the output layer, and forms the input for the information enhancement display and the step-down interface switching logic.
Based on the idea of task- and situation-driven intention understanding, the intention-understanding process can be simplified and real-time prediction of operator intention realized. Through operator intention understanding based on the task state and the situation state, a huge set of possible operator intentions can be reduced, from the bottom up, to a small set consistent with the given task and situation conditions; the current operator intention can then be further identified within that set through operator intention understanding based on the specific expressions of the operator's monitoring behavior.
Taking eye-movement analysis and operator operation data as examples, the specific expression of the operator's monitoring behavior is captured by two methods, real-time eye-movement analysis and real-time operation-data analysis, on whose basis the operator's intention during monitoring operations is effectively recognized. The defined eye-movement indexes comprise explicit indexes (indexes that directly reflect the operator's processing of visual information, such as the fixation point and fixation time) and implicit indexes (indexes that indirectly reflect the operator's physiological state, such as pupil size, which can reflect the operator's tension, and blink frequency and eye-corner opening angle, which can reflect the operator's fatigue).
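The explicit and implicit eye-movement indexes listed above can be illustrated with a small routine over a hypothetical sample stream of (timestamp, gaze point, pupil diameter, eye-open flag) tuples. The 100-pixel fixation radius and the sample format are assumptions made for the sketch.

```python
def eye_indexes(samples):
    """Compute illustrative eye-movement indexes from
    (timestamp_s, (gaze_x, gaze_y), pupil_mm, eye_open) samples."""
    if not samples:
        return {}
    fixation_time = 0.0
    blinks = 0
    prev_t, prev_xy, _, prev_open = samples[0]
    for t, xy, pupil, is_open in samples[1:]:
        dist2 = (xy[0] - prev_xy[0]) ** 2 + (xy[1] - prev_xy[1]) ** 2
        if dist2 <= 100 ** 2:            # explicit index: dwell near the same point
            fixation_time += t - prev_t
        if prev_open and not is_open:    # implicit index: eye just closed (a blink)
            blinks += 1
        prev_t, prev_xy, prev_open = t, xy, is_open
    duration = samples[-1][0] - samples[0][0]
    mean_pupil = sum(s[2] for s in samples) / len(samples)  # implicit index: tension
    return {"fixation_s": fixation_time,
            "blink_rate_hz": blinks / duration if duration else 0.0,
            "mean_pupil_mm": mean_pupil}

stream = [(0.0, (500, 300), 3.1, True), (0.5, (505, 302), 3.2, True),
          (1.0, (510, 301), 3.4, False), (1.5, (900, 100), 3.0, True)]
idx = eye_indexes(stream)
```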
After the data are obtained, the final intention analysis is completed by a procedural hybrid nonlinear support vector machine algorithm to obtain the result, with the task-scene analysis, the operator-behavior analysis and the operator intention in one-to-one correspondence. The calculation first maps the collected data samples (eye movement, operation data) into a multidimensional space, and then, by introducing a kernel function, establishes an optimal classification surface in that space. The specific principle is as follows: since all calculations are inner-product operations, assume a nonlinear function φ: R^n → H that converts the samples into a multidimensional space H, in which the optimal classification surface is established using the dot-product operation φ(x_i)·φ(x_j). If a function K can be found such that K(x_i, x_j) = φ(x_i)·φ(x_j), the dot-product operation can be carried out without computing the mapping explicitly. According to statistical learning theory, if the function K(x_i, x_j) satisfies the Mercer condition, it corresponds one-to-one with an inner product in the space. The decision function then becomes:

f(x) = sgn( Σ_i α_i·y_i·K(x_i, x) + b )
the input of the decision function and the nodes of the middle layers form a linear combination similar to a neural network. After the mapping of the inner product of single support vector (such as eye movement and behavior data) input, an intermediate node, namely a support vector network, is formed. An intention understanding model is established through a support vector machine algorithm, so that the recognition of the intention of an operator is realized, and a foundation is laid for driving a dynamic time domain human-computer interaction interface in the next step.
S30: formulate the information-enhanced display and step-down interface switching logic to support the realization of the advanced interaction technique; the implementation flow is shown in Fig. 1.
Under limited display resources, the time-domain human-machine system page is driven to switch adaptively, fully displaying, according to the operator's state, the cognitive information that matches the operator's current monitoring intention, so as to reduce workload and enhance situation awareness.
By combining the current unmanned aerial vehicle task scene with the operator's actual operations arriving over multiple channels, and integrating the settings of display elements (such as font size and color), two types of switching logic, information-enhanced display and step-down interface switching, are designed, realizing the task-driven multi-channel unmanned aerial vehicle interface switching logic shown in Fig. 5.
(1) Information-enhanced display makes certain information prominent on the interface through effects such as highlighting, flashing and animation. The task-driven multi-channel interaction technique can capture task-phase information and operator operation information from multi-source channels, and different information-enhancement display logics are designed for the different operating requirements of the unmanned aerial vehicle.
For example, when the system detects that the operator has fixated on a certain piece of information for a long time, it automatically pushes that information out from the right side of the interface and enhances its display on the left side of the interface.
(2) Step-down interface switching follows the goal of reducing the operator's monitoring pressure: irrelevant display content is reduced, and different step-down switching logics are designed for the different operating requirements of the unmanned aerial vehicle on the basis of the current task and the operator operation information from multi-source channels.
For example, when the system detects operator fatigue, it automatically switches the interface to reduce the display of normal-status information (green information on the interface), thereby lightening the operator's burden.
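The two switching logics above can be summarized as a small dispatch over the operator state and the focused item produced by the model layer. The state names ("fatigued") and item levels ("normal"/"warning") are illustrative assumptions, not terms defined by the patent.

```python
def switch_interface(operator_state, focused_item, page_items):
    """Choose between information-enhanced display and step-down switching.
    page_items: list of dicts like {"id": ..., "level": "normal"|"warning"}."""
    if operator_state == "fatigued":
        # step-down: drop normal (green) items, keep warnings, to reduce load
        return {"mode": "step-down",
                "show": [i for i in page_items if i["level"] != "normal"]}
    if focused_item is not None:
        # information enhancement: highlight the item the operator dwells on
        enhanced = [dict(i, highlight=(i["id"] == focused_item)) for i in page_items]
        return {"mode": "enhance", "show": enhanced}
    return {"mode": "default", "show": page_items}

items = [{"id": "alt", "level": "normal"}, {"id": "fuel", "level": "warning"}]
out = switch_interface("fatigued", None, items)
```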
Claims (8)
1. A task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method, characterized by comprising the following steps:
receiving control interaction data through traditional touch control equipment or intelligent non-contact equipment;
pre-storing an operation data set, task related system parameters and situation related system parameters;
based on a hybrid nonlinear support vector machine algorithm, fusing the control interaction data, the operation data set, the task related system parameters and the situation related system parameters into an intention prediction model;
extracting control interaction characteristics in the intention prediction model through training self-learning cluster analysis;
searching corresponding information to be displayed in the operation data set according to the control interaction characteristics;
and displaying the information to be displayed.
2. The method of claim 1,
the traditional touch control type equipment comprises active equipment, passive equipment and active/passive equipment;
the intelligent non-contact device comprises touch control, gesture control, voice recognition and eye movement detection.
3. The method according to claim 1, wherein displaying the information to be displayed specifically comprises:
and displaying the information to be displayed in an information enhancement mode and a voltage reduction type interface switching mode.
4. The method of claim 1,
the operational data set comprises basic pilot information, age and flight duration;
the control interaction data comprises real-time operation data of a pilot, image data, video data, voice data and system parameters detected by in-station and out-station sensors;
the task related system parameters comprise airplane position, height, air route and task load starting state parameters in different flight task stages;
the situation-related system parameters comprise plan change parameters acquired by different sensors, severe regional climate conditions and equipment fault conditions.
5. The method of claim 4, wherein the system parameters include an altitude parameter, a speed parameter, and a coordinate parameter.
6. The method of claim 1, wherein the mission phase comprises pre-takeoff preparation, takeoff and departure, surveillance mission execution, attack mission execution, return and landing, and post-flight processing.
7. The method according to claim 3, wherein the information enhancement mode is specifically: the information to be displayed is made prominent on the interface through highlighting, flashing or animation effects.
8. The method according to claim 3, wherein the step-down interface switching mode is specifically: based on the current task and the operator operation information from multi-source channels, irrelevant display content is reduced so that the information to be displayed is highlighted on the interface according to the different operating requirements of the unmanned aerial vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010925782.5A CN112214105A (en) | 2020-09-04 | 2020-09-04 | Task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112214105A true CN112214105A (en) | 2021-01-12 |
Family
ID=74049395
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010925782.5A Pending CN112214105A (en) | 2020-09-04 | 2020-09-04 | Task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112214105A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105912288A (en) * | 2016-04-12 | 2016-08-31 | 上海易天无人飞行器科技有限公司 | Method and system for comprehensive processing display capable of monitoring flight state of unmanned aerial vehicle |
CN110232918A (en) * | 2019-05-22 | 2019-09-13 | 成都飞机工业(集团)有限责任公司 | A kind of UAV ground control station's speech control system and control method |
CN110597382A (en) * | 2019-08-08 | 2019-12-20 | 中广核工程有限公司 | Nuclear power station control room multi-channel fusion man-machine interaction method and system |
CN110764521A (en) * | 2019-10-15 | 2020-02-07 | 中国航空无线电电子研究所 | Ground station task flight integrated monitoring system and method for multiple unmanned aerial vehicles |
CN111214227A (en) * | 2020-01-21 | 2020-06-02 | 中国人民解放军空军工程大学 | Method for identifying user operation intention and cognitive state in man-machine interaction |
CN111610850A (en) * | 2019-02-22 | 2020-09-01 | 东喜和仪(珠海市)数据科技有限公司 | Method for man-machine interaction based on unmanned aerial vehicle |
Non-Patent Citations (1)
Title |
---|
吴慧垚等 (Wu Huiyao et al.): "基于认知架构的无人机操作员意图预测技术研究" [Research on UAV operator intention prediction technology based on cognitive architecture], Proceedings of the 7th China Command and Control Conference, 30 September 2019 (2019-09-30), pages 361-365 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114385099A (en) * | 2021-11-26 | 2022-04-22 | 中国航空无线电电子研究所 | Multi-unmanned aerial vehicle dynamic monitoring interface display method and device based on active push display |
CN114385099B (en) * | 2021-11-26 | 2023-12-12 | 中国航空无线电电子研究所 | Multi-unmanned aerial vehicle dynamic monitoring interface display method and device based on active push display |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108197580B (en) | A kind of gesture identification method based on 3d convolutional neural networks | |
CN106598226A (en) | UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning | |
US10692390B2 (en) | Tabletop system for intuitive guidance in augmented reality remote video communication environment | |
CN108983636B (en) | Man-machine intelligent symbiotic platform system | |
US20210192856A1 (en) | Xr device and method for controlling the same | |
CN202754148U (en) | Universal land station capable of being allocated with unmanned aerial vehicles | |
CN106681354B (en) | The flight control method and device of unmanned plane | |
JPH06214711A (en) | Management system of interactive system | |
CN109933272A (en) | The multi-modal airborne cockpit man-machine interaction method of depth integration | |
CN104571823A (en) | Non-contact virtual human-computer interaction method based on smart television set | |
CN114372341A (en) | Steel hot rolling pipe control system and method based on digital twinning | |
Zhou et al. | Vision language models in autonomous driving and intelligent transportation systems | |
US20190371002A1 (en) | Artificial intelligence device capable of being controlled according to user's gaze and method of operating the same | |
CN106377228A (en) | Monitoring and hierarchical-control method for state of unmanned aerial vehicle operator based on Kinect | |
CN108227926B (en) | Intelligent channel switching system and method for multi-channel cooperative intelligent interaction | |
CN112214105A (en) | Task-driven multi-channel 3D real-sense unmanned aerial vehicle control interaction method | |
US11863627B2 (en) | Smart home device and method | |
CN102866833A (en) | Icon interactive system based on social network and method thereof | |
KR102537381B1 (en) | Pedestrian trajectory prediction apparatus | |
CN107538492A (en) | Intelligent control system, method and the intelligence learning method of mobile robot | |
Castillo et al. | The aircraft of the future: towards the tangible cockpit | |
CN107765963A (en) | A kind of UAV system multi-mode composite Reconnaissance system integrates display control device | |
CN106933122A (en) | Train display intelligent interactive method and system | |
CN108228285A (en) | A kind of human-computer interaction instruction identification method multi-modal end to end | |
CN115716278A (en) | Robot target searching method based on active sensing and interactive operation cooperation and robot simulation platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||