CN111694425A - Target identification method and system based on AR-SSVEP - Google Patents

Target identification method and system based on AR-SSVEP

Info

Publication number
CN111694425A
CN111694425A
Authority
CN
China
Prior art keywords
target
stimulation
glasses
electroencephalogram
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010341613.7A
Other languages
Chinese (zh)
Inventor
马留洋
王宁
胡怡芳
胡争争
蔡玉宝
徐聪
刘当
冯少康
于洋
李德峰
Current Assignee
CETC 27 Research Institute
Original Assignee
CETC 27 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 27 Research Institute filed Critical CETC 27 Research Institute
Priority to CN202010341613.7A priority Critical patent/CN111694425A/en
Publication of CN111694425A publication Critical patent/CN111694425A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/163 Wearable computers, e.g. on a belt
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a target identification method that combines target recognition, augmented reality, and a brain-computer interface system. Remote video information is transmitted to AR glasses; target recognition frames the targets of interest against a complex background, offering the subject a choice, and the AR glasses present an SSVEP-paradigm stimulation interface to the subject, enabling electroencephalogram (EEG) selection of a specific target. The invention also discloses a target identification system comprising a remote camera, a wireless transmission module, a signal receiving module, AR glasses, a target identification module, a communication interface module, EEG acquisition equipment, and an upper computer. The method improves the efficiency and accuracy of identifying a specific target among similar or slightly different targets of interest in the target recognition process, and markedly improves decision-making precision in moving-target detection.

Description

Target identification method and system based on AR-SSVEP
Technical Field
The invention relates to the technical field of target identification, in particular to a target identification method and a target identification system based on AR-SSVEP.
Background
A brain-computer interface (BCI) is a new communication and control technology established between the human brain and a computer or other electronic device that does not depend on the brain's conventional information-output channels. An SSVEP-based BCI presents the user with several periodic visual stimuli of different frequencies and phases; when the user focuses on the stimulus at a particular frequency, an EEG signal with matching characteristics, the steady-state visual evoked potential (SSVEP) signal, is induced in the primary visual cortex. To record a person's EEG under SSVEP stimulation and study the relationship between the EEG and the stimulus, prior-art SSVEP systems typically include a computer display dedicated to presenting the SSVEP stimulation paradigm: the subject evokes the SSVEP response by watching the display, an EEG acquisition device records the subject's EEG, and the recorded signal is analyzed to obtain a recognition result, which can be used further to study the influence of the stimulation on the human brain.
However, prior-art SSVEP-based brain-computer interface systems include bulky equipment such as a computer, and the subject must watch the stimulation interface on the computer display and cannot perform other actions while receiving stimulation. The existing SSVEP stimulation paradigm therefore suits only laboratory conditions, is inconvenient to apply in outdoor environments, and has very limited practicability in everyday life and production. Target recognition refers to the process of distinguishing a particular object (or a particular class of objects) from other objects (or other classes of objects); it covers both distinguishing two very similar objects and distinguishing one class of objects from another.
At present, once a target recognition system has been trained it can recognize only one class of targets of interest in an image or video stream, yet targets of interest often show slight individual differences, such as blue vehicles of different models, or twins; existing target recognition methods therefore identify a specific target among similar or slightly different targets of interest with low accuracy.
Disclosure of Invention
The invention aims to provide an AR-SSVEP-based target identification method and system that improve the efficiency and accuracy of identifying a specific target among similar or slightly different targets of interest in the target recognition process, make it possible to apply the SSVEP stimulation paradigm outdoors, and improve its practicability in everyday life and production.
The technical scheme adopted by the invention is as follows:
An AR-SSVEP-based target identification method comprises the following steps:
Step one: the subject wears AR glasses and EEG acquisition equipment;
Step two: the camera captures a video stream of the target scene and transmits it to the AR glasses and to the target identification module;
Step three: the target identification module identifies the targets of interest in the video stream in real time, computes their target information, and transmits it to the AR glasses; the target information comprises the number n of targets of interest, the position of each target of interest, and the number assigned to each target of interest; a target position comprises the height and width of the bounding box framing the target and the coordinates of the box's center point;
the n targets of interest are labeled target of interest 1, target of interest 2, …, target of interest i, …, target of interest n, with corresponding numbers 1, 2, …, i, …, n, where 1 ≤ i ≤ n;
Step four: on receiving the target information, the AR glasses draw a bounding box around each of the n targets of interest and generate n stimulation blocks in one-to-one correspondence with them;
the n stimulation blocks are denoted stimulation block 1, stimulation block 2, …, stimulation block i, …, stimulation block n, stimulation block i corresponding to target of interest i; each stimulation block flickers at its own specific frequency, different from the flicker frequencies of the other n-1 blocks;
Step five: the AR glasses draw the number i beside both stimulation block i and target of interest i in real time;
Step six: the AR glasses display the stimulation interface to the subject in real time, and the subject selects a specific target by EEG; the stimulation interface comprises the n framed targets of interest and the n stimulation blocks; the subject chooses a specific target from the n targets of interest according to their states in the stimulation interface and gazes at the stimulation block corresponding to that target, producing an SSVEP EEG signal; the EEG acquisition equipment records the subject's SSVEP signal and determines which stimulation block the subject is watching, completing the EEG selection of the specific target;
Step seven: the position information of the specific target is output according to the result of the EEG selection and the target information of the targets of interest.
In step three, the target identification module identifies the targets of interest in the video stream using the YOLO algorithm and computes their target information.
The stimulation blocks can be laid out in two modes: in the first, each stimulation block sits inside or beside its target's bounding box and follows it; in the second, the stimulation blocks are arranged around the screen frame. One of the two modes is selected when generating the stimulation blocks.
An AR-SSVEP-based target identification system comprises a remote camera, a wireless transmission module, a signal receiving module, AR glasses, a target identification module, a communication interface module, electroencephalogram acquisition equipment and an upper computer;
the remote camera is used for acquiring remote scene video stream information;
the wireless transmission module is used for transmitting the acquired remote scene video stream information to the signal receiving module;
the signal receiving module transmits the received remote scene video stream information to the AR glasses and the target identification module respectively;
the AR glasses are used for receiving and projecting a video stream transmitted back by the front camera, superposing the display stimulation blocks and displaying a stimulation interface to the testee;
the target identification module is used for identifying an attention target in the video stream in real time and solving target information of the attention target;
the communication interface module is used for connecting AR glasses and electroencephalogram acquisition equipment;
the electroencephalogram acquisition device is used for acquiring SSVEP electroencephalogram signals output by a subject, analyzing stimulation blocks watched by the subject and completing electroencephalogram selection of a specific target;
the upper computer is used for acquiring target position information of a specific target according to the result of electroencephalogram selection and target information of the concerned target;
the remote camera is connected with the signal receiving module through the wireless transmission module, a first output end of the signal receiving module is connected with a first input end of the AR glasses, a second output end of the signal receiving module is connected with an input end of the target identification module, an output end of the AR glasses is connected with the electroencephalogram acquisition equipment through the communication interface module, and a first output end of the target identification module is connected with a second input end of the AR glasses; the first output end of the target identification module and the output end of the electroencephalogram acquisition equipment are both connected with an upper computer.
The AR glasses are a Microsoft HoloLens AR device.
The wireless transmission module is an RTC6705 5.8 GHz wireless video transmission chip.
The target identification method of the invention combines target recognition, augmented reality, and a brain-computer interface system: remote video information is transmitted to the AR glasses, target recognition frames the targets of interest against a complex background, an SSVEP-paradigm stimulation interface is presented to the subject, and EEG selection of a specific target is realized, reducing the subject's difficulty in identifying the target and shortening the selection time. On the brain-computer interface side, using AR glasses to present the SSVEP stimulation interface places the display close to the eyes and reduces interference from natural light, improving the accuracy of SSVEP-evoked EEG and hence of target identification; it opens the possibility of outdoor use for the SSVEP stimulation paradigm and the brain-computer interface system and improves the paradigm's practicability in everyday life and production. Performing the target recognition computation on an external target identification module solves the problem that the YOLO target recognition algorithm cannot run on the AR glasses when combining target recognition with augmented reality, achieving the recognition and display effects.
the target identification system is provided with the AR glasses and the electroencephalogram acquisition equipment, combines the target identification technology, the augmented reality technology and the brain-computer interface system, develops the possibility of being applied outdoors for the SSVEP stimulation paradigm and the brain-computer interface system, improves the practicability of the SSVEP stimulation paradigm in actual life production, improves the identification efficiency and the accuracy rate of specific targets in similar concerned targets in the target identification process, and obviously improves the moving target detection decision-making accuracy.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic illustration of a stimulation interface of an embodiment of the present invention;
FIG. 3 is a schematic diagram of an electroencephalogram signal channel of the present invention;
FIG. 4 is a functional block diagram of a target recognition system of the present invention;
1. stimulation block; 2. target frame; 3. target of interest; 4. screen frame; 5. number.
Detailed Description
As shown in fig. 1, the method of the invention comprises the following steps:
Step one: the subject wears AR glasses and EEG acquisition equipment;
Step two: the camera captures a video stream of the target scene and transmits it to the AR glasses and to the target identification module;
Step three: the target identification module identifies the targets of interest in the video stream in real time, computes their target information, and transmits it to the AR glasses; in this embodiment, the target identification module identifies the targets of interest in the video stream using the YOLO algorithm and computes their target information.
The target information comprises the number n of targets of interest, the position of each target of interest, and the number assigned to each target of interest; a target position comprises the height and width of the bounding box framing the target and the coordinates of the box's center point.
The n targets of interest are labeled target of interest 1, target of interest 2, …, target of interest i, …, target of interest n, with corresponding numbers 1, 2, …, i, …, n, where 1 ≤ i ≤ n.
The target position, comprising the height and width of the bounding box and the coordinates of its center point, is computed by the YOLO algorithm; the number of targets of interest is obtained by counting the center-point coordinates. The numbers are the default labels of the classification result, assigned 1 to n in order from top to bottom and left to right by the targets' center-point coordinates; they have no special meaning and merely mark the identified targets of interest.
As shown in fig. 2, in this embodiment the camera captures road video, the targets of interest are vehicles, and the target identification module identifies 4 targets of interest.
In fig. 2, the 1, 2, 3, 4 on the targets of interest in the picture and on the stimulation blocks are the numbers according to the invention; the 1, 2, 3, 4, 5 on the leader lines are reference numerals.
Step four: on receiving the target information, the AR glasses draw a matching bounding box around each of the n targets of interest and generate n stimulation blocks in one-to-one correspondence with them.
The n stimulation blocks are denoted stimulation block 1, stimulation block 2, …, stimulation block i, …, stimulation block n, stimulation block i corresponding to target of interest i; each stimulation block flickers at its own specific frequency, different from the flicker frequencies of the other n-1 blocks.
The stimulation blocks can be laid out in two modes: in the first, each stimulation block sits inside or beside its target's bounding box and follows it; in the second, the stimulation blocks are arranged around the screen frame. One of the two modes is selected when generating the stimulation blocks.
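The second layout mode, stimulation blocks arranged around the screen frame, can be sketched as anchor points walked clockwise along the frame perimeter. Equal spacing from the top-left corner is an assumption for illustration; the patent says only that the blocks sit around the screen frame:

```python
def perimeter_positions(n, width, height):
    """Anchor points for n stimulation blocks at equal arc-length
    spacing around a width x height screen frame, walking clockwise
    from the top-left corner (an illustrative placement rule)."""
    per = 2 * (width + height)
    pts = []
    for i in range(n):
        d = i * per / n  # distance travelled along the perimeter
        if d < width:
            pts.append((d, 0))                              # top edge
        elif d < width + height:
            pts.append((width, d - width))                  # right edge
        elif d < 2 * width + height:
            pts.append((2 * width + height - d, height))    # bottom edge
        else:
            pts.append((0, per - d))                        # left edge
    return pts
```

With n = 4 on a 1920x1080 frame this yields one anchor per edge region, matching the four-block embodiment.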
The target identification module and the AR glasses share data over a network: the module transmits the identified target information to the glasses in real time, and the glasses' drawing program renders the corresponding bounding boxes and numbers from the received target information, which reduces the demand on the glasses' computing capacity.
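The patent specifies what is shared (target count, box geometry, numbers) but not the wire encoding on the module-to-glasses link; a minimal sketch using a hypothetical JSON message format:

```python
import json

def encode_targets(targets):
    """Serialize one frame's target information for the module -> glasses
    link. The field names and the use of JSON are assumptions; the patent
    only requires that n, the box geometry, and the numbers be shared."""
    return json.dumps({
        "n": len(targets),
        "targets": [
            {"number": i + 1, "cx": cx, "cy": cy, "w": w, "h": h}
            for i, (cx, cy, w, h) in enumerate(targets)
        ],
    })
```

The glasses-side drawing program would parse this message each frame and redraw the boxes and numbers, keeping all detection computation off the headset.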
The AR glasses generate n stimulation blocks according to the identified target number n. In this embodiment the second mode is selected, and the stimulation blocks are arranged around the screen frame in a fixed order. Each stimulation block flickers at a specific frequency given by

f_c = f_s / N,

where f_c denotes the stimulation frequency (the flicker frequency of the stimulation block), f_s denotes the refresh frequency of the device, and N is a positive integer.
For example, when the refresh frequency of the AR glasses display is 60 Hz, the values N = 4, 5, 6, 7, 8, 9, 10 give suitable stimulation frequencies of 15 Hz, 12 Hz, 10 Hz, 8.57 Hz, 7.5 Hz, 6.67 Hz, and 6 Hz respectively.
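The frequency rule above, each flicker period spanning a whole number of display frames, can be checked with a short script:

```python
def ssvep_frequencies(refresh_hz=60, n_values=range(4, 11)):
    """Candidate flicker frequencies f_c = f_s / N for a display with
    refresh rate refresh_hz: each flicker period then covers exactly N
    frames, so the stimulus can be rendered without timing drift."""
    return [round(refresh_hz / n, 2) for n in n_values]
```

Running it for a 60 Hz display reproduces the frequency list quoted in the embodiment.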
Preferably, the stimulation paradigm of each stimulation block is a red-and-white flashing color-block pattern.
In this embodiment, four stimulation blocks are disposed around the screen frame with frequencies of 7.5 Hz, 8.5 Hz, 10 Hz, and 12 Hz respectively, although other arrangements and frequencies may also be used.
Step five: the AR glasses draw the number i beside both stimulation block i and target of interest i in real time.
Step six: the AR glasses display the stimulation interface to the subject in real time, and the subject selects a specific target by EEG. The stimulation interface comprises the n framed targets of interest and the n stimulation blocks. The subject chooses a specific target from the n targets of interest according to their states in the stimulation interface and gazes at the stimulation block corresponding to that target, producing an SSVEP EEG signal; the EEG acquisition equipment records the subject's SSVEP signal and determines which stimulation block the subject is watching, completing the EEG selection of the specific target. In this embodiment, as shown in fig. 3, the channels used to acquire the EEG signals are T5, T6, P3, P4, O1 and O2.
Step seven: the position information of the specific target is output according to the result of the EEG selection and the target information of the targets of interest.
The target identification method of the invention combines target recognition, augmented reality, and a brain-computer interface system: remote video information is transmitted to the AR glasses, target recognition frames the targets of interest against a complex background, an SSVEP-paradigm stimulation interface is presented to the subject, and EEG selection of a specific target is realized, reducing the subject's difficulty in identifying the target and shortening the selection time.
The method of the invention also brings marked benefits to SSVEP-BCI technology. Because AR glasses present the SSVEP stimulation interface to the subject, the display sits close to the eyes and interference from natural light is small, improving the accuracy of SSVEP-evoked EEG; this opens the possibility of outdoor use for the SSVEP stimulation paradigm and the brain-computer interface system and improves the paradigm's practicability in everyday life and production.
The AR glasses have limited computing capability; performing the target recognition computation on the external target identification module solves the problem that the YOLO target recognition algorithm cannot run on the AR glasses when combining target recognition with augmented reality, achieving the recognition and display effects.
The target identification method of the invention is suitable for decision selection in unmanned-vehicle driving, where it mainly addresses the difficulty of tracking and locating remote moving targets.
In searching for a hit-and-run vehicle, conventional target recognition can only pick out vehicles of the same class; it cannot accurately determine whether a given vehicle is the hit-and-run vehicle or whether its driver is the offender.
The AR-SSVEP-based target identification system comprises a remote camera, a wireless transmission module, a signal receiving module, AR glasses, a target identification module, a communication interface module, electroencephalogram acquisition equipment and an upper computer;
the remote camera is used for acquiring remote scene video stream information;
the wireless transmission module is used for transmitting the acquired remote scene video stream information to the signal receiving module;
the signal receiving module transmits the received remote scene video stream information to the AR glasses and the target identification module respectively;
the AR glasses are used for receiving and projecting a video stream transmitted back by the front camera, superposing the display stimulation blocks and displaying a stimulation interface to the testee;
the target identification module is used for identifying an attention target in the video stream in real time and solving target information of the attention target;
the communication interface module is used for connecting AR glasses and electroencephalogram acquisition equipment;
the electroencephalogram acquisition device is used for acquiring SSVEP electroencephalogram signals output by a subject, analyzing stimulation blocks watched by the subject and completing electroencephalogram selection of a specific target;
the upper computer is used for acquiring target position information of a specific target according to the result of electroencephalogram selection and target information of the concerned target;
the remote camera is connected with the signal receiving module through the wireless transmission module, a first output end of the signal receiving module is connected with a first input end of the AR glasses, a second output end of the signal receiving module is connected with an input end of the target identification module, an output end of the AR glasses is connected with the electroencephalogram acquisition equipment through the communication interface module, and a first output end of the target identification module is connected with a second input end of the AR glasses; the first output end of the target identification module and the output end of the electroencephalogram acquisition equipment are both connected with an upper computer.
The AR glasses are a Microsoft HoloLens AR device.
The wireless transmission module is an RTC6705 5.8 GHz wireless video transmission chip.
The target identification system of the invention, equipped with AR glasses and EEG acquisition equipment, combines target recognition, augmented reality, and a brain-computer interface system: remote video information is transmitted to the AR glasses, target recognition frames the targets of interest against a complex background, an SSVEP-paradigm stimulation interface is presented to the subject, and EEG selection of a specific target is realized. This reduces the subject's difficulty in identifying the target and shortens the selection time; adding the EEG-selection step to target recognition makes identification of a specific target among similar or slightly different targets of interest more accurate, improves the efficiency and accuracy of specific-target identification in the target recognition process, and markedly improves decision-making precision in moving-target detection.
On the brain-computer interface side, AR glasses present the SSVEP stimulation interface to the subject; the display sits close to the eyes and interference from natural light is small, improving the accuracy of SSVEP-evoked EEG and hence of target identification. This opens the possibility of outdoor use for the SSVEP stimulation paradigm and the brain-computer interface system and improves the paradigm's practicability in everyday life and production.
The target identification system performs the target recognition computation on an external target identification module, which solves the problem that the YOLO target recognition algorithm cannot run on the AR glasses when combining target recognition with augmented reality, achieving the recognition and display effects.

Claims (6)

1. An AR-SSVEP-based target identification method, characterized in that the method comprises the following steps:
step one: the subject wears AR glasses and electroencephalogram acquisition equipment;
step two: a camera collects video stream information of the target scene and transmits the collected video stream information to the AR glasses and to the target identification module respectively;
step three: the target identification module identifies the attention targets in the video stream in real time, calculates the target information of each attention target and transmits the target information to the AR glasses; the target information comprises the number n of attention targets, the target position of each attention target and the number of each attention target; the target position comprises the height of the target frame enclosing an attention target, the width of the target frame and the position coordinates of the centre point of the target frame;
the n attention targets are labelled attention target 1, attention target 2, …, attention target i, …, attention target n; the numbers corresponding to attention target 1, attention target 2, …, attention target i, …, attention target n are respectively 1, 2, …, i, …, n, with 1 ≤ i ≤ n;
step four: after receiving the target information, the AR glasses draw a target frame around each of the n attention targets to frame-select them, and generate n stimulation blocks in one-to-one correspondence with the n attention targets;
the n stimulation blocks are labelled stimulation block 1, stimulation block 2, …, stimulation block i, …, stimulation block n; stimulation block i corresponds to attention target i; each stimulation block flickers at its own specific frequency, and the flicker frequency of any one stimulation block differs from the flicker frequencies of the other n−1 stimulation blocks;
step five: the AR glasses draw the number i at stimulation block i and at attention target i simultaneously in real time;
step six: the AR glasses display the stimulation interface to the subject in real time, and the subject selects a specific target by electroencephalography; the stimulation interface comprises the n frame-selected attention targets and the n stimulation blocks; the subject chooses a specific target from the n attention targets according to their states in the stimulation interface, and gazes at the stimulation block corresponding to the chosen target so as to output an SSVEP electroencephalogram signal; the electroencephalogram acquisition equipment collects the SSVEP electroencephalogram signal output by the subject, resolves which stimulation block the subject is gazing at, and completes the electroencephalogram selection of the specific target;
step seven: the target position information of the specific target is output according to the result of the electroencephalogram selection and the target information of the attention targets.
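Step six depends on decoding which flickering stimulation block the subject is gazing at from the recorded EEG. The claim does not name a decoding algorithm; canonical correlation analysis (CCA) against sine/cosine reference templates at each candidate flicker frequency is a standard SSVEP approach and is assumed in this sketch:

```python
# Sketch of SSVEP decoding for step six (algorithm assumed, not specified
# by the claim): score each candidate flicker frequency by the largest
# canonical correlation between the EEG segment and sin/cos references.
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

def classify_ssvep(eeg, freqs, fs, n_harmonics=2):
    """Return the index of the flicker frequency best matching `eeg`.

    eeg   : (samples, channels) EEG segment
    freqs : candidate flicker frequencies, one per stimulation block
    fs    : sampling rate in Hz
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # Reference: sin/cos at the fundamental and its harmonics.
        ref = np.column_stack(
            [g(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for g in (np.sin, np.cos)])
        scores.append(cca_corr(eeg, ref))
    return int(np.argmax(scores))
```

The winning index i then identifies stimulation block i and, through the one-to-one correspondence of step four, attention target i.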
2. The AR-SSVEP-based target recognition method of claim 1, wherein: in step three, the target identification module identifies the attention targets in the video stream information by means of a YOLO algorithm and calculates the target information of each attention target.
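The target information of step three (count n, per-target number, and a box given as height, width and centre point) can be built directly from YOLO-style detections. A minimal sketch, assuming the recognition module returns corner-format boxes (x1, y1, x2, y2); the field names are illustrative, not from the patent:

```python
# Sketch: convert YOLO-style corner boxes into the numbered target
# information of step three (numbers 1..n, frame height/width, centre).
from dataclasses import dataclass

@dataclass
class TargetInfo:
    number: int        # label i drawn at both the target frame and its stimulation block
    height: float      # height of the target frame
    width: float       # width of the target frame
    center: tuple      # (x, y) coordinates of the frame's centre point

def to_target_info(boxes):
    """Number the detections 1..n and derive each target position."""
    targets = []
    for i, (x1, y1, x2, y2) in enumerate(boxes, start=1):
        targets.append(TargetInfo(
            number=i,
            height=float(y2 - y1),
            width=float(x2 - x1),
            center=((x1 + x2) / 2.0, (y1 + y2) / 2.0)))
    return targets
```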
3. The AR-SSVEP-based target recognition method of claim 1, wherein: the stimulation blocks are distributed in one of two modes: in the first mode, each stimulation block is located in or around its target frame and follows the target frame; in the second mode, the stimulation blocks are arranged around the screen border; one of the two modes is selected to generate the stimulation blocks.
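The two layouts of claim 3 amount to two placement rules for the same set of blocks. A minimal sketch, with illustrative geometry (block size, top-of-frame placement in mode one, top-edge placement in mode two) that the patent does not prescribe:

```python
# Sketch of claim 3's two stimulation-block layouts. Mode 1: each block
# sits just above its target frame, so it follows the frame. Mode 2:
# blocks are spread evenly along the screen border (top edge here).
def place_blocks(targets, mode, screen_w=1280, screen_h=720, block=60):
    """Return one (x, y) top-left position per target's stimulation block.

    targets: list of (cx, cy, w, h) target frames (centre, width, height)
    """
    if mode == 1:
        # In/around the target frame: centred horizontally, above the frame.
        return [(cx - block / 2.0, cy - h / 2.0 - block)
                for (cx, cy, w, h) in targets]
    elif mode == 2:
        # Around the screen border: evenly spaced along the top edge.
        n = len(targets)
        gap = screen_w / (n + 1)
        return [((i + 1) * gap - block / 2.0, 0.0) for i in range(n)]
    raise ValueError("mode must be 1 or 2")
```

Mode one keeps the flicker spatially bound to the object of interest; mode two keeps the flicker sources stationary, which can simplify the SSVEP stimulus timing when targets move quickly.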
4. An AR-SSVEP-based target identification system, characterized in that it comprises a remote camera, a wireless transmission module, a signal receiving module, AR glasses, a target identification module, a communication interface module, electroencephalogram acquisition equipment and an upper computer;
the remote camera is used for acquiring remote scene video stream information;
the wireless transmission module is used for transmitting the acquired remote scene video stream information to the signal receiving module;
the signal receiving module transmits the received remote scene video stream information to the AR glasses and to the target identification module respectively;
the AR glasses are used for receiving and projecting the video stream transmitted back by the remote camera, superimposing the stimulation blocks on the display, and presenting the stimulation interface to the subject;
the target identification module is used for identifying the attention targets in the video stream in real time and computing the target information of each attention target;
the communication interface module is used for connecting the AR glasses and the electroencephalogram acquisition equipment;
the electroencephalogram acquisition equipment is used for collecting the SSVEP electroencephalogram signal output by the subject, resolving which stimulation block the subject is gazing at, and completing the electroencephalogram selection of a specific target;
the upper computer is used for obtaining the target position information of the specific target according to the result of the electroencephalogram selection and the target information of the attention targets;
the remote camera is connected to the signal receiving module through the wireless transmission module; a first output end of the signal receiving module is connected to a first input end of the AR glasses, and a second output end of the signal receiving module is connected to the input end of the target identification module; the output end of the AR glasses is connected to the electroencephalogram acquisition equipment through the communication interface module; a first output end of the target identification module is connected to a second input end of the AR glasses; the first output end of the target identification module and the output end of the electroencephalogram acquisition equipment are both connected to the upper computer.
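The AR glasses in this system must render each stimulation block flickering at its own frequency, pairwise distinct per claim 1. A minimal sketch of frame-synchronous flicker, assuming a 60 Hz display, a 50 % duty cycle, and an illustrative 8 Hz base with 0.5 Hz spacing (none of which the patent fixes):

```python
# Sketch of the flicker rendering performed by the AR glasses: each
# stimulation block i toggles as a square wave at its own frequency.
# Display refresh rate, duty cycle and frequency spacing are assumptions.
def assign_frequencies(n, base=8.0, step=0.5):
    """Give each of n stimulation blocks a distinct flicker frequency (Hz)."""
    return [base + i * step for i in range(n)]

def block_is_on(freq_hz, frame_idx, refresh_hz=60.0):
    """True when the block should be lit on display frame `frame_idx`."""
    phase = (freq_hz * frame_idx / refresh_hz) % 1.0
    return phase < 0.5  # 50% duty cycle square wave
```

Per-frame, the renderer queries `block_is_on` for every block and draws only the lit ones over the video stream, together with the target frames and numbers of steps four and five.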
5. The AR-SSVEP-based target recognition system of claim 4, wherein: the AR glasses adopt a Microsoft HoloLens AR device.
6. The AR-SSVEP-based target recognition system of claim 4, wherein: the wireless transmission module is an RTC6705 5.8 GHz wireless video transmission chip.
CN202010341613.7A 2020-04-27 2020-04-27 Target identification method and system based on AR-SSVEP Pending CN111694425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010341613.7A CN111694425A (en) 2020-04-27 2020-04-27 Target identification method and system based on AR-SSVEP

Publications (1)

Publication Number Publication Date
CN111694425A true CN111694425A (en) 2020-09-22

Family

ID=72476665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010341613.7A Pending CN111694425A (en) 2020-04-27 2020-04-27 Target identification method and system based on AR-SSVEP

Country Status (1)

Country Link
CN (1) CN111694425A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113138668A (en) * 2021-04-25 2021-07-20 清华大学 Method, device and system for selecting destination of automatic wheelchair driving
CN113377212A (en) * 2021-08-16 2021-09-10 南京中谷芯信息科技有限公司 Eye movement tracking AR interface navigation system and method based on electroencephalogram detection
CN114138108A (en) * 2021-10-19 2022-03-04 杭州回车电子科技有限公司 Brain-computer interaction device, system and method
CN114138107A (en) * 2021-10-19 2022-03-04 杭州回车电子科技有限公司 Brain-computer interaction device, system and method
GR1010460B (en) * 2022-05-30 2023-05-16 Ιδρυμα Τεχνολογιας Και Ερευνας, A mobility system and a related controller, method, software and computer-readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339091A (en) * 2016-08-31 2017-01-18 博睿康科技(常州)股份有限公司 Augmented reality interaction method based on brain-computer interface wearing system
CN106859645A (en) * 2017-03-06 2017-06-20 广东工业大学 Wearable device and eeg collection system based on VR technologies and SSVEP
CN107346179A (en) * 2017-09-11 2017-11-14 中国人民解放军国防科技大学 Multi-moving-target selection method based on evoked brain-computer interface
CN107748622A (en) * 2017-11-08 2018-03-02 中国医学科学院生物医学工程研究所 A kind of Steady State Visual Evoked Potential brain-machine interface method based on face perception
US20180104482A1 (en) * 2016-10-14 2018-04-19 Boston Scientific Neuromodulation Corporation Systems and methods for determining orientation of an implanted lead
US20190369727A1 (en) * 2017-06-29 2019-12-05 South China University Of Technology Human-machine interaction method based on visual stimulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, Yaru: "Research on Multi-Moving-Target Selection Technology Based on Brain-Computer Interface", China Masters' Theses Full-text Database, Information Science and Technology Series *

Similar Documents

Publication Publication Date Title
CN111694425A (en) Target identification method and system based on AR-SSVEP
CN109284737A (en) A kind of students ' behavior analysis and identifying system for wisdom classroom
CN104083258A (en) Intelligent wheel chair control method based on brain-computer interface and automatic driving technology
CN105260025B (en) Steady State Visual Evoked Potential brain machine interface system based on mobile terminal
CN103531174A (en) Brightness adjusting device and method
CN108153502B (en) Handheld augmented reality display method and device based on transparent screen
CN111930238B (en) Brain-computer interface system implementation method and device based on dynamic SSVEP (secure Shell-and-Play) paradigm
CN111317469B (en) Brain wave monitoring equipment, system and monitoring method
CN112133246A (en) Control method of LED display screen system and LED display screen system
CN113009931B (en) Man-machine and unmanned-machine mixed formation cooperative control device and method
Shen et al. CoCAtt: A cognitive-conditioned driver attention dataset
CN110660275A (en) Teacher-student classroom instant interaction system and method based on video analysis
CN111540335B (en) Color blindness correction device, method, intelligent glasses, controller and medium
CN113269063A (en) Examination management system based on big data and intelligent education
CN106255991A (en) State decision-making system
KR101331055B1 (en) Visual aid system based on the analysis of visual attention and visual aiding method for using the analysis of visual attention
CN110251076B (en) Method and device for detecting significance based on contrast and fusing visual attention
CN110321782A (en) A kind of system detecting characteristics of human body's signal
Kouamou et al. Extraction of video features for real-time detection of neonatal seizures
CN110321781A (en) A kind of signal processing method and device for heed contacted measure
CN112936259B (en) Man-machine cooperation method suitable for underwater robot
CN115834952A (en) Video frame rate detection method and device based on visual perception
CN109508089B (en) Sight line control system and method based on hierarchical random forest
CN111782055A (en) Brain control system based on AR glasses
CN106055298A (en) Device and method for identifying target display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination