CN118210409A - Target presentation method, device, equipment and medium of hybrid brain-computer interface - Google Patents


Info

Publication number
CN118210409A
Authority
CN
China
Prior art keywords
target
stimulation
interface
stimulus
presentation
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202410436793.5A
Other languages
Chinese (zh)
Inventor
陈小刚
崔红岩
张若晴
李朝晖
Current Assignee
Institute of Biomedical Engineering of CAMS and PUMC
Original Assignee
Institute of Biomedical Engineering of CAMS and PUMC
Application filed by Institute of Biomedical Engineering of CAMS and PUMC
Priority to CN202410436793.5A
Publication of CN118210409A

Links

Classifications

    • G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F3/015 — Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G06T7/60 — Image analysis; analysis of geometric attributes
    • G06T7/70 — Image analysis; determining position or orientation of objects or cameras
    • G06V20/40 — Scenes; scene-specific elements in video content


Abstract

The invention discloses a target presentation method, device, equipment and medium for a hybrid brain-computer interface. In response to a target presentation trigger operation, a target stimulus object is presented in a stimulus interface. The target stimulus object comprises a first stimulus object and a second stimulus object, where the first stimulus object is an annular stimulus object and the second stimulus object is a target video; the second stimulus object is presented in the circular area enclosed by the first stimulus object, and the target video comprises pictures of a target part performing a preset motion. During playback of the target video, the brightness of the first stimulus object is adjusted based on the current timestamp of the target video and a preset flicker frequency. With embodiments of the invention, the steady-state visual evoked potential task and the motor imagery task can be performed simultaneously while the tested object only needs to watch the second stimulus object, reducing the complexity of the hybrid steady-state visual evoked potential and motor imagery task.

Description

Target presentation method, device, equipment and medium of hybrid brain-computer interface
Technical Field
The present invention relates to the field of brain-computer interfaces, and in particular, to a method, an apparatus, a device, and a medium for presenting a target of a hybrid brain-computer interface.
Background
The brain-computer interface is a communication or control system that decodes brain neural activity into specific instructions, enabling the brain to interact with external devices. Several paradigms exist for brain-computer interfaces, of which steady-state visual evoked potentials and motor imagery are two classical ones.
Currently, the prior art generally combines steady-state visual evoked potentials with motor imagery to construct a hybrid brain-computer interface and increase the robustness of the system, for example by guiding a user through a motor imagery task with a sequence of switched still pictures. However, the low-frequency signal introduced by the switching frequency of the still pictures interferes with the motor imagery signal in the mu rhythm, making it difficult to execute the steady-state visual evoked potential task and the motor imagery task simultaneously.
Disclosure of Invention
The invention provides a target presentation method, device, equipment and medium for a hybrid brain-computer interface, which solve the problem that the hybrid steady-state visual evoked potential and motor imagery task is complex, thereby simplifying that hybrid task.
According to an aspect of the present invention, there is provided a target presentation method of a hybrid brain-computer interface, including:
in response to a target presentation trigger operation, presenting a target stimulus object in a stimulus interface, the target stimulus object comprising a first stimulus object and a second stimulus object, wherein the first stimulus object is an annular stimulus object and the second stimulus object is a target video; the second stimulus object is presented in a circular area enclosed by the first stimulus object; the target video comprises pictures of a target part performing a preset motion; and
during playback of the target video, adjusting the brightness of the first stimulus object based on the current timestamp of the target video and a preset flicker frequency.
According to another aspect of the present invention, there is provided a target presenting apparatus of a hybrid brain-computer interface, including:
a target stimulus object presentation module, configured to present a target stimulus object in a stimulus interface in response to a target presentation trigger operation, the target stimulus object comprising a first stimulus object and a second stimulus object, wherein the first stimulus object is an annular stimulus object and the second stimulus object is a target video; the second stimulus object is presented in a circular area enclosed by the first stimulus object; the target video comprises pictures of a target part performing a preset motion; and
a brightness adjustment module, configured to adjust the brightness of the first stimulus object based on the current timestamp of the target video and a preset flicker frequency during playback of the target video.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor, the computer program enabling the at least one processor to perform the target presentation method of the hybrid brain-computer interface according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to execute the target presentation method of the hybrid brain-computer interface according to any one of the embodiments of the present invention.
According to the technical scheme, a target stimulus object is presented in the stimulus interface in response to a target presentation trigger operation. The target stimulus object comprises a first stimulus object and a second stimulus object, where the first stimulus object is an annular stimulus object and the second stimulus object is a target video; the second stimulus object is presented in the circular area enclosed by the first stimulus object, and the target video comprises pictures of a target part performing a preset motion. During playback of the target video, the brightness of the first stimulus object is adjusted based on the current timestamp of the target video and a preset flicker frequency. By using a target video rather than a static image as the second stimulus object, the scheme overcomes the weak event-related desynchronization (ERD) phenomenon that arises when the tested object is guided through the motor imagery task with static images. Because the frame rate of the target video is higher than the switching frequency of a sequence of static images, refreshing the video pictures has less influence on the mu rhythm of the motor imagery electroencephalogram signal, improving the accuracy of subsequent motor imagery electroencephalogram analysis.
By presenting the second stimulus object in the circular area enclosed by the first stimulus object and adjusting the brightness of the first stimulus object during playback of the target video, a flicker stimulus is applied around the target video while it plays. The tested object can therefore perform the steady-state visual evoked potential task and the motor imagery task simultaneously merely by watching the second stimulus object, reducing the complexity of the hybrid steady-state visual evoked potential and motor imagery task.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for presenting targets of a hybrid brain-computer interface according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a stimulation interface provided in accordance with a first embodiment of the present invention;
FIG. 3 is a schematic diagram of brightness adjustment of a first stimulus object according to an embodiment of the present invention;
FIG. 4 is a flowchart of a target presenting method of a hybrid brain-computer interface according to a second embodiment of the present invention;
FIG. 5 is a schematic view of first and second visual deflection angles according to a second embodiment of the present invention;
FIG. 6 is a schematic illustration of a stimulus interface provided by a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a target presenting device of a hybrid brain-computer interface according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device implementing a target presentation method of a hybrid brain-computer interface according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a target presenting method of a hybrid brain-computer interface according to an embodiment of the present invention, where the method may be performed by a target presenting device of the hybrid brain-computer interface, and the target presenting device of the hybrid brain-computer interface may be implemented in hardware and/or software, and the target presenting device of the hybrid brain-computer interface may be configured in electronic devices such as a computer and a server.
In this embodiment, the target stimulus object is an object that applies a visual stimulus to the tested object, so as to guide the tested object to complete the motor imagery task and the steady-state visual evoked potential task synchronously. The tested object is, for example, a human or animal subject capable of responding to visual stimuli.
As shown in fig. 1, the method includes:
S110, responding to target presentation triggering operation, and presenting a target stimulation object in a stimulation interface; the target stimulation object comprises a first stimulation object and a second stimulation object, wherein the first stimulation object is an annular stimulation object, and the second stimulation object is a target video; the second stimulus object is presented in a circular area surrounded by the first stimulus object; the target video includes a picture in which a target portion performs a preset motion.
In this embodiment, the target presentation trigger operation is an operation for controlling the stimulus interface to present the target stimulus object. Exemplary target presentation trigger operations include, but are not limited to, pressing a target presentation key and recognizing a target presentation voice command, where the target presentation key may be a physical key or a touch key. For a touch key, the number of presses or the press duration can be detected, and the target presentation key is determined to be triggered when the number of presses or the press duration reaches a preset count or a preset duration.
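The press-count/press-duration trigger described above can be sketched as a simple predicate. The concrete thresholds below are hypothetical; the patent only requires that the detected count or duration reach a preset value:

```python
def is_presentation_triggered(press_count, press_duration_s,
                              count_threshold=2, duration_threshold_s=1.0):
    # Trigger when EITHER the number of touch-key presses reaches the
    # preset count OR the press duration reaches the preset duration.
    # count_threshold and duration_threshold_s are illustrative values.
    return press_count >= count_threshold or press_duration_s >= duration_threshold_s
```

Either condition alone is sufficient, matching the "number of presses or duration of pressing" wording above.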
Specifically, when the target presentation trigger operation is recognized, the stimulus interface is controlled to present the first stimulus object and, at the same time, to present the second stimulus object in the circular area enclosed by the first stimulus object. The stimulus interface is an interface for displaying the target stimulus object and includes, but is not limited to, a display screen, a virtual reality interface, and a projection interface. The target stimulus object consists of a first stimulus object and a second stimulus object: the first stimulus object is an annular stimulus object in which all pixels share the same brightness; the second stimulus object is a target video comprising pictures corresponding to consecutive timestamps, each picture containing a target part, with the target part performing a preset motion across the pictures. The target part is a limb of a human or animal body, for example the left hand, the right hand, or both feet. The preset motion is a motion the target part can perform: if the target part is the left or right hand, the preset motion is, for example, a gripping motion; if the target part is both feet, the preset motion is, for example, a lifting and dropping motion.
It should be noted that, by having the tested object watch the second stimulus object among the target stimulus objects presented in the stimulus interface, the tested object is guided to complete the motor imagery task and the steady-state visual evoked potential task synchronously.
Fig. 2 is a schematic diagram of a stimulus interface according to a first embodiment of the present invention, and as shown in fig. 2, a stimulus interface 400 includes a first stimulus object 410 and a second stimulus object 420, where the second stimulus object presented in a circular area surrounded by the first stimulus object is a target video of a screen including a right-hand performing gripping motion. In the stimulus interface 400, the RGB values presented outside of the first stimulus object 410 and the second stimulus object 420 are [999999].
S120, adjusting the brightness of the first stimulation object based on the current time stamp of the target video and the preset flicker frequency in the playing process of the target video.
In this embodiment, the current timestamp is the timestamp of the picture of the target video currently being presented during playback. Illustratively, assuming the target video includes N pictures and the i-th picture corresponds to timestamp i, where i ∈ [0, N−1], the current timestamp is i when the i-th picture is presented. The preset flicker frequency is the preset frequency at which the brightness of the first stimulus object is adjusted. It should be noted that the electroencephalogram signals generated while the tested object performs the motor imagery task usually lie in the mu rhythm (8 Hz to 13 Hz) and beta rhythm (13 Hz to 30 Hz); by setting the preset flicker frequency above 30 Hz, the electroencephalogram signals evoked by the steady-state visual evoked potential task are prevented from interfering with those generated by the motor imagery task. The preset flicker frequency is, for example, 34 Hz.
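The band constraint above can be expressed as a minimal check. This is a sketch of the selection rule only; the band edges come from the text, and the function name is an illustrative invention:

```python
MU_BAND_HZ = (8.0, 13.0)    # mu rhythm of motor imagery EEG
BETA_BAND_HZ = (13.0, 30.0) # beta rhythm of motor imagery EEG

def flicker_frequency_ok(f_hz):
    # A preset flicker frequency above the upper edge of the beta band
    # (30 Hz) keeps the SSVEP response clear of both motor-imagery rhythms.
    return f_hz > BETA_BAND_HZ[1]
```

Under this rule the exemplary 34 Hz is acceptable, while any frequency inside 8–30 Hz is not.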
Specifically, during refresh of the stimulus interface, the picture of the target video currently presented is updated according to the order of the timestamps, so that the target video is played; at the same time, the brightness of the first stimulus object currently presented is adjusted to the brightness corresponding to the current timestamp of the target video. That brightness is determined from the current timestamp and the preset flicker frequency: it may be calculated directly from the two, or it may be a brightness jointly mapped from the timestamp range to which the current timestamp belongs and the frequency range to which the preset flicker frequency belongs.
In some embodiments, the brightness corresponding to the current timestamp is calculated based on the current timestamp and a preset flicker frequency. Optionally, determining a brightness change function of the first stimulus object based on a preset flicker frequency; determining target brightness corresponding to the first stimulation object based on the brightness change function and the current timestamp of the target video; the brightness of the first stimulus object is adjusted to the target brightness.
In this embodiment, the target brightness is the brightness corresponding to the current timestamp. The brightness change function takes the current timestamp as its independent variable and the target brightness as its dependent variable, and describes a periodic change in brightness. Illustratively, the brightness change function includes, but is not limited to, a sine wave function, a cosine wave function, and a triangular wave function.
Specifically, the brightness change period in the brightness change function is related to a preset flicker frequency, and it can be understood that different preset flicker frequencies correspond to different brightness change functions. And determining the brightness change period in the brightness change function based on the preset flicker frequency to obtain the brightness change function taking the current time stamp as an independent variable. And substituting the current time stamp of the target video into a brightness change function corresponding to the preset flicker frequency to calculate, so as to obtain the target brightness corresponding to the current time stamp. Optionally, the luminance variation function is a sine wave function.
In the present embodiment, the brightness change function includes the term sin(ωt + φ), where ω represents the angular frequency and is determined from the preset flicker frequency, t represents the time point corresponding to the current timestamp, and φ represents the preset initial phase. Illustratively, the brightness change function is stim(t) = {1 + sin(2πf·(i/R))}/2, with brightness range [0, 1], where f represents the preset flicker frequency, ω = 2πf, R represents the refresh frequency of the stimulus interface, and t = i/R.
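The brightness change function above can be sketched directly. The defaults (34 Hz flicker, 120 Hz refresh, zero initial phase) are the exemplary values from this embodiment:

```python
import math

def stim_brightness(i, f_hz=34.0, refresh_hz=120.0, phase=0.0):
    # stim(t) = (1 + sin(2*pi*f*t + phase)) / 2 with t = i / R:
    # a smooth, periodic brightness in [0, 1] that remains accurate even
    # when f_hz is not an integer divisor of the refresh rate.
    t = i / refresh_hz
    return (1.0 + math.sin(2.0 * math.pi * f_hz * t + phase)) / 2.0
```

At timestamp 0 with zero phase the brightness is 0.5 (mid-level), and each later frame samples the sine wave at the preset flicker frequency.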
According to this technical scheme, setting the brightness change function to a sine wave function lets the brightness of the first stimulus object vary sinusoidally. This avoids the situation in which a brightness change at the preset flicker frequency cannot be realized accurately when that frequency is not an integer divisor of the refresh frequency of the stimulus interface; the target brightness changes smoothly and periodically as a sine wave, reducing the perceived flicker of the first stimulus object and improving the comfort of the tested object.
It should be noted that, when the refresh frequency of the stimulus interface equals the frame rate of the target video, the duration of the target video equals the duration of one presentation period of the target stimulus object on the stimulus interface. For example, assuming the refresh frequency of the stimulus interface is 120 Hz, the frame rate of the target video is 120 frames per second, and the duration of the target video is 5 seconds, the presentation period of the target stimulus object on the stimulus interface is also 5 seconds.
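The example above reduces to simple arithmetic, sketched here with the same exemplary numbers:

```python
refresh_hz = 120      # refresh frequency of the stimulus interface
frame_rate = 120      # frame rate of the target video (frames per second)
duration_s = 5        # duration of the target video

n_frames = frame_rate * duration_s      # 600 pictures, timestamps 0 .. 599
presentation_s = n_frames / refresh_hz  # equal to the video duration (5 s)
```

One picture per refresh cycle means the presentation period and the video duration coincide exactly.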
The current timestamp is updated by refreshing the stimulus interface. And under the condition that the stimulation interface presents a picture corresponding to the current time stamp in the target video, presenting a first stimulation object at the target brightness on the stimulation interface.
Taking a target video whose pictures show a right-hand gripping motion as an example, fig. 3 is a schematic diagram of brightness adjustment of the first stimulus object according to an embodiment of the present invention: the stimulus interface in fig. 3(a) presents the picture corresponding to the first timestamp of the target video, the stimulus interface in fig. 3(b) presents a picture corresponding to a timestamp between the first and last timestamps, and the stimulus interface in fig. 3(c) presents the picture corresponding to the last timestamp. From fig. 3(a) to fig. 3(c), the brightness of the second stimulus object and of the background outside the first stimulus object remains unchanged.
According to this technical scheme, the brightness change function of the first stimulus object is determined based on the preset flicker frequency, and the target brightness of the first stimulus object is determined based on the brightness change function and the current timestamp of the target video. This reduces the complexity of determining the target brightness and saves computing resources; it also ties the target brightness to the current timestamp, ensuring that the target brightness for the picture at each timestamp is fixed and improving the periodicity of the target stimulus object.
It should be noted that the texture of the first stimulus object may be a gray texture, or may be a color texture determined from the picture of the target video currently presented by the stimulus interface, which is not limited in this embodiment. Illustratively, by preloading the target video and determining the texture of each picture, the texture of the first stimulus object corresponding to each picture is determined from that picture's texture. While the stimulus interface presents the target stimulus object, for any picture in the target video, the stimulus interface draws the texture of that picture and at the same time draws the texture of the first stimulus object corresponding to it.
According to the technical scheme of this embodiment, a target stimulus object is presented in the stimulus interface in response to a target presentation trigger operation. The target stimulus object comprises a first stimulus object and a second stimulus object, where the first stimulus object is an annular stimulus object and the second stimulus object is a target video; the second stimulus object is presented in the circular area enclosed by the first stimulus object, and the target video comprises pictures of a target part performing a preset motion. During playback of the target video, the brightness of the first stimulus object is adjusted based on the current timestamp of the target video and a preset flicker frequency. By using a target video rather than a static image as the second stimulus object, the scheme overcomes the weak ERD phenomenon that arises when the tested object is guided through the motor imagery task with static images. Because the frame rate of the target video is higher than the switching frequency of a sequence of static images, refreshing the video pictures has less influence on the mu rhythm of the motor imagery electroencephalogram signal, improving the accuracy of subsequent motor imagery electroencephalogram analysis.
By presenting the second stimulus object in the circular area enclosed by the first stimulus object and adjusting the brightness of the first stimulus object during playback of the target video, a flicker stimulus is applied around the target video while it plays. The tested object can therefore perform the steady-state visual evoked potential task and the motor imagery task simultaneously merely by watching the second stimulus object, reducing the complexity of the hybrid steady-state visual evoked potential and motor imagery task.
Example two
Fig. 4 is a flowchart of a target presenting method of a hybrid brain-computer interface according to a second embodiment of the present invention, and the technical solution of the embodiment of the present invention is improved on the basis of the foregoing embodiment. Wherein the explanation of the same or corresponding terms as those of the above embodiments is not repeated herein. As shown in fig. 4, the method includes:
S210, in response to a target presentation trigger operation, acquiring association information of the stimulus interface on which target stimulus objects are to be presented and the number of target stimulus objects to be presented.
In this embodiment, the target stimulus object to be presented is a target stimulus object that needs to be presented on the stimulus interface. The association information of the stimulus interface is information related to its properties; for example, it includes, but is not limited to, the resolution of the stimulus interface, which comprises a first resolution and a second resolution, exemplified by 1920 pixels × 1080 pixels, where 1920 pixels is the first resolution and 1080 pixels is the second resolution. The number of objects is the preset number of target stimulus objects that need to be presented on the stimulus interface at the same time.
Specifically, when the target presentation triggering operation is detected, the attribute of the stimulation interface of the target stimulation object to be presented is identified to obtain the association information of the stimulation interface; and when the object number setting operation is detected, the set number of objects is read to obtain the number of objects of the target stimulation object to be presented, wherein the object number setting operation is an operation executed before the target presentation triggering operation.
Exemplary object number setting operations include, but are not limited to, pressing an object number setting key, entering a value in an object number setting text box, and recognizing an object number setting voice command. For the object number setting key, the number of detected key presses can be determined as the number of objects; for the object number setting text box, the value entered in the text box can be determined as the number of objects; for the object number setting voice command, the number recognized from the voice can be determined as the number of objects, which is not limited in this embodiment.
S220, determining a first presenting area based on the association information of the stimulus interface and the number of objects, and presenting the first stimulus object in the first presenting area, wherein the first presenting area is an annular area.
In this embodiment, the first presenting area is an area for presenting a first target stimulus object in the stimulus interface, and it can be understood that each target stimulus object corresponds to one first presenting area, and the first stimulus object is a stimulus object presented in an annular area corresponding to the first presenting area.
Specifically, based on the related information of the stimulation interface and the number of objects, the stimulation interface is divided into areas, and a first presentation area corresponding to each target stimulation object is obtained. Optionally, determining a first visual deflection angle and a second visual deflection angle corresponding to the target stimulus object, wherein the first visual deflection angle is larger than the second visual deflection angle; determining the position of a key point corresponding to the target stimulation object based on the association information of the stimulation interface and the number of the objects; determining the corresponding annular size of the target stimulation object based on the associated information of the stimulation interface, the first visual deflection angle and the second visual deflection angle; the annular dimensions include an outer annular dimension and an inner annular dimension; and determining a first presentation area corresponding to the target stimulation object based on the key point position, the outer ring size and the inner ring size.
In this embodiment, the annular size corresponding to the target stimulus object is used to represent the size of the first presentation area corresponding to the target stimulus object, the outer annular size is the size of the outer annular of the first presentation area where the first stimulus object in the target stimulus object is located, and the inner annular size is the size of the inner annular of the first presentation area where the first stimulus object in the target stimulus object is located.
Illustratively, the outer ring dimension is an outer ring radius or outer ring diameter of the first presentation area, and the inner ring dimension is an inner ring radius or inner ring diameter of the first presentation area. The first visual deflection angle is the visual deflection angle used to determine the outer ring size, and the second visual deflection angle is the visual deflection angle used to determine the inner ring size. Fig. 5 is a schematic view of the first visual deflection angle and the second visual deflection angle according to the second embodiment of the present invention; as shown in fig. 5, the first visual deflection angle is 10° and the second visual deflection angle is 7°.
Specifically, the first visual deflection angle and the second visual deflection angle may be set based on a visual deflection angle setting operation, or may be determined based on the association information of the stimulus interface and the number of objects, which is not limited in this embodiment. The visual deflection angle setting operation is an operation executed before the target presentation triggering operation and is used to preset the first visual deflection angle and the second visual deflection angle; exemplary visual deflection angle setting operations include, but are not limited to, entering a value in a visual deflection angle setting text box and selecting a value from a visual deflection angle drop-down candidate box.
In some embodiments, the visual deflection angle setting operation is detected, and when it is detected, the first and second visual deflection angles set by that operation are identified. Illustratively, the first visual deflection angle is set to 10° and the second visual deflection angle to 7° based on the visual deflection angle setting operation.
In some embodiments, a first visual deflection angle threshold corresponding to each target stimulus object is determined based on the association information of the stimulus interface and the number of objects; the first visual deflection angle is determined based on the first visual deflection angle threshold and a preset second visual deflection angle threshold; and the second visual deflection angle is determined based on the first visual deflection angle and the preset second visual deflection angle threshold. The first visual deflection angle threshold is an upper bound on the first visual deflection angle. The preset second visual deflection angle threshold is a preset lower bound used to guarantee that the second stimulation object can be clearly presented in the circular area surrounded by the first stimulation object. The first and second visual deflection angles are obtained by setting any visual deflection angle less than or equal to the first visual deflection angle threshold and greater than the preset second visual deflection angle threshold as the first visual deflection angle, and setting any visual deflection angle less than the first visual deflection angle and greater than or equal to the preset second visual deflection angle threshold as the second visual deflection angle.
The key point of the target stimulus object may be the geometric center of the target stimulus object (for example, the center point of the first presentation area), or may be a specific point on the edge of the target stimulus object (for example, the upper left corner point of the first presentation area), or may be an endpoint of the maximum circumscribing rectangular outline (for example, the upper left corner point of the maximum circumscribing rectangular outline) of the area where the target stimulus object is located, which is not limited in this embodiment. The position of the key point corresponding to the target stimulation object is the coordinate position of the key point of the target stimulation object in the stimulation interface. And determining the coordinate position of the key point of each target stimulation object based on the association information of the stimulation interface and the number of objects to obtain the position of the key point corresponding to the target stimulation object.
For example, assume that the association information of the stimulus interface includes the upper-left corner position (0, 0) of the interface, the resolution of the stimulus interface is a1 (first resolution) × a2 (second resolution), the left margin is b, and the number of objects is 2, with the two target stimulus objects placed symmetrically in the stimulus interface so that the right margin c = b. Let the key point of a target stimulus object be the upper-left corner point of the maximum circumscribed rectangular outline of the area where that object is located, and let r1 denote the outer-ring radius of the first presentation area corresponding to target stimulus object 1 (a variable whose specific value is to be determined). The key point position corresponding to target stimulus object 1 on the left is (x1, y1), where x1 = b and y1 = (a2 − 2×r1)/2; the key point position corresponding to target stimulus object 2 on the right is (x2, y2), where y2 = y1 and x2 = x1 + 2×r1 + x3 = a1 − b − 2×r1, the distance between target stimulus object 1 and target stimulus object 2 being x3 = a1 − b − c − 4×r1. Note that b and r1 are both pixel-level sizes.
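The symmetric two-object layout above can be sketched as follows. This is a hypothetical helper, not part of the disclosure; the function name is illustrative, and the right margin c is assumed equal to the left margin b as in the example.

```python
# Hypothetical sketch of the two-object layout: right margin c is assumed
# equal to the left margin b, as in the example above.
def keypoint_positions(a1, a2, b, r1):
    """Upper-left key points of two symmetric annular stimuli, plus their gap.

    a1, a2 : first and second resolution of the stimulus interface (pixels)
    b      : left margin (pixels)
    r1     : outer-ring radius (pixels)
    """
    x1, y1 = b, (a2 - 2 * r1) / 2                  # left object, vertically centred
    x2, y2 = a1 - b - 2 * r1, (a2 - 2 * r1) / 2    # mirrored right object
    x3 = x2 - (x1 + 2 * r1)                        # distance between the two objects
    return (x1, y1), (x2, y2), x3
```

With the Fig. 6 figures (a1 = 1920, a2 = 1080, b = 200 pixels, r1 = 309 pixels) this gives key points (200, 231) and (1102, 231) and a gap x3 of 284 pixels, matching the example.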
Determining the corresponding annular size of the target stimulation object based on the associated information of the stimulation interface, the first visual deflection angle and the second visual deflection angle; the annular dimensions include an outer annular dimension and an inner annular dimension.
Specifically, the outer ring size corresponding to the target stimulus object is determined based on the association information of the stimulus interface and the first visual deflection angle, and the inner ring size corresponding to the target stimulus object is determined based on the association information of the stimulus interface and the second visual deflection angle. The outer ring size may be determined by a joint mapping from the range to which the association information of the stimulus interface belongs and the range to which the first visual deflection angle belongs, or by a size calculation based on the association information of the stimulus interface and the first visual deflection angle, which is not limited in this embodiment; the inner ring size is determined in the same manner as the outer ring size and is not described again here. In some embodiments, the association information of the stimulation interface includes the resolution and interface size of the stimulation interface and a target distance between the stimulated object and the stimulation interface. In this embodiment, the interface size of the stimulation interface includes a first size and a second size, where the first size corresponds to the first resolution and the second size corresponds to the second resolution, and the target distance is the perpendicular distance between the stimulated object and the stimulation interface. Illustratively, the stimulus interface has a resolution of 1920 pixels × 1080 pixels, where 1920 pixels is the first resolution and 1080 pixels is the second resolution; an interface size of 54.10 centimeters (cm) × 30.30 cm, where 54.10 cm is the first size and 30.30 cm is the second size; and a target distance of 50 cm.
Optionally, determining a unit resolution of the stimulation interface based on the resolution of the stimulation interface and the interface size; determining the outer ring size corresponding to the target stimulation object based on the first visual deflection angle, the unit resolution and the target distance; and determining the inner ring size corresponding to the target stimulus object based on the second visual deflection angle, the unit resolution and the target distance corresponding to the target stimulus object.
Specifically, the unit resolution is the number of pixels per unit size of the stimulus interface. The unit resolution is obtained by dividing the first resolution by the first size, or by dividing the second resolution by the second size. The outer ring size corresponding to the target stimulation object is obtained by multiplying the tangent of the first visual deflection angle, the target distance, and the unit resolution; the inner ring size corresponding to the target stimulation object is obtained by multiplying the tangent of the second visual deflection angle, the target distance, and the unit resolution. The outer ring size and the inner ring size are both pixel-level sizes, e.g., an outer ring size of 231 pixels.
Illustratively, assume the resolution of the stimulus interface is d1 (first resolution) × d2 (second resolution), the interface size is a1 (first size) × a2 (second size), so the unit resolution is λ = d1/a1 = d2/a2, and the target distance is l. Assuming the inner ring size is an inner-ring radius and the outer ring size is an outer-ring radius, the outer ring size is r1 = tan θ1 × l × λ and the inner ring size is r2 = tan θ2 × l × λ.
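The unit-resolution and ring-radius formulas above can be sketched as follows. The function name is hypothetical, and with the exemplary figures from this embodiment (1920 × 1080 pixels, a 54.10 cm wide interface, a 50 cm target distance, and 10°/7° deflection angles) the rounded radii come out near, but not exactly equal to, the Fig. 6 diameters of 618 and 400 pixels, presumably due to rounding choices and the exact screen dimensions used.

```python
import math

# Sketch of the ring-size computation r = tan(theta) * l * lambda
# (hypothetical helper; figures from the examples in this embodiment).
def ring_radius_px(theta_deg, distance_cm, unit_res_px_per_cm):
    """Ring radius in whole pixels for a given visual deflection angle."""
    return round(math.tan(math.radians(theta_deg)) * distance_cm * unit_res_px_per_cm)

lam = 1920 / 54.10                       # unit resolution: pixels per centimetre
r_outer = ring_radius_px(10, 50, lam)    # outer-ring radius for theta1 = 10 degrees
r_inner = ring_radius_px(7, 50, lam)     # inner-ring radius for theta2 = 7 degrees
```

With these figures, r_outer is 313 pixels and r_inner is 218 pixels.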
According to the technical scheme, the outer ring size corresponding to the target stimulation object is determined based on the first visual deflection angle, the unit resolution and the target distance, and the inner ring size corresponding to the target stimulation object is determined based on the second visual deflection angle, the unit resolution and the target distance, so that the inner ring size and the outer ring size corresponding to the target stimulation object can be rapidly determined, and the efficiency of the subsequent determination of the first presentation area can be guaranteed.
And determining a first presentation area corresponding to the target stimulation object based on the key point position, the outer ring size and the inner ring size.
Specifically, for each target stimulation object, the outer-ring contour of the corresponding first presentation area is determined based on the key point position and the outer ring size, and the inner-ring contour is determined based on the outer-ring contour and the inner ring size. Taking the case of two target stimulus objects placed symmetrically left and right in the stimulus interface as an example: the key point (the upper-left corner point of the maximum circumscribed rectangular outline of the region where the object is located) corresponding to target stimulus object 1 on the left is (x1, y1), the outer ring size is r1, and the inner ring size is r2. The lower-right corner point of the maximum circumscribed rectangular outline of the region where target stimulus object 1 is located is (x1′, y1′), where x1′ = x1 + 2×r1 and y1′ = y1 + 2×r1. The maximum inscribed circular outline of the rectangle whose opposite corner points are (x1, y1) and (x1′, y1′) is determined as the outer-ring outline of target stimulus object 1; its center position is (xc1, yc1), where xc1 = x1 + r1 and yc1 = y1 + r1, and the circular contour with center (xc1, yc1) and radius r2 is determined as the inner-ring outline of target stimulus object 1. The area between the outer-ring outline and the inner-ring outline in the stimulation interface is determined as the first presentation area corresponding to the target stimulation object.
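A minimal sketch of the resulting geometry, under the construction above: a pixel belongs to the annular first presentation area exactly when its distance from the ring center (x1 + r1, y1 + r1) lies between the inner and outer radii. The function name is hypothetical.

```python
# Hypothetical membership test for the annular first presentation area.
def in_first_presentation_area(px, py, x1, y1, r1, r2):
    cx, cy = x1 + r1, y1 + r1                      # common centre of both contours
    d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5   # distance from the centre
    return r2 <= d <= r1                           # between inner and outer radii
```

For example, with key point (200, 231), r1 = 309 and r2 = 200, the center is (509, 540); a pixel 250 pixels to its right lies in the ring, while the center itself and a pixel 320 pixels away do not.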
For example, fig. 6 is a schematic diagram of a stimulus interface provided in the second embodiment of the present invention. As shown in fig. 6, the left margin b is 200 pixels, 2r1 is 618 pixels, 2r2 is 400 pixels, and the distance x3 between target stimulus object 1 and target stimulus object 2 is 284 pixels.
According to the technical scheme, the first presentation area corresponding to the target stimulation object is determined based on the key point position, the outer ring size and the inner ring size, so that the first presentation area corresponding to the target stimulation object can be determined quickly, and the target presentation efficiency can be guaranteed.
S230, determining a second presentation area based on the first presentation area, and presenting a second stimulation object in the second presentation area.
In this embodiment, the second presenting area is an area in the stimulus interface for presenting a second target stimulus object, and it can be understood that each target stimulus object corresponds to one second presenting area, and the second stimulus object is a stimulus object presented in the second presenting area.
Specifically, for each first presentation area, a circular area surrounded by the first presentation area is determined, and at least a partial area in the circular area is determined as a second presentation area corresponding to the first presentation area. The second presentation area corresponding to the first presentation area may be a circular area surrounded by the first presentation area, or may be a rectangle (for example, a maximum inscribed rectangle) located at the center of the circular area, which is not limited in this embodiment. Optionally, determining a size threshold based on the inner ring radius of the first presentation area and the preset shape, and determining a target size based on the size threshold, wherein the target size is less than or equal to the size threshold; a second presentation area is determined that is located in a circular area surrounded by the first presentation area based on the target size and the preset shape.
In this embodiment, the radius of the inner ring is the radius of the inner ring in the annular region corresponding to the first presentation region, and the radius of the outer ring is the radius of the outer ring in the annular region corresponding to the first presentation region. It will be appreciated that the inner ring radius is smaller than the outer ring radius. The preset shape is the shape of the preset second presentation area, for example, the preset shape includes, but is not limited to, square, rectangle, and circle. The target size is used for representing the size attribute of the second presenting area, and it can be understood that in the case that the preset shape is square or rectangle, the target size comprises a first size and a second size of the second presenting area, wherein the first size and the second size are the lengths of two adjacent sides of the second presenting area; in the case where the preset shape is circular, the target size includes a radius or diameter of the second presentation area. The size threshold is a threshold of the target size, it being understood that the size threshold is greater than or equal to the target size.
Specifically, the area size of the preset shape that can be accommodated by the circular area corresponding to the inner ring radius is determined based on the inner ring radius and the preset shape of the first presentation area, and the largest area size of the plurality of area sizes is determined as the size threshold, so that the size threshold can be determined as the target size, and any size smaller than the size threshold and capable of guaranteeing clear presentation of the target video can be determined as the target size. In the stimulation interface, a region of a preset shape and a target size, which takes the central position of the first presentation region as a geometric center, is determined as a second presentation region.
Illustratively, assuming the preset shape is square, the size threshold is √2 × r2, the side length of the largest square inscribed in the circle of inner-ring radius r2. The target size may then be determined as √2 × r2 (first size) × √2 × r2 (second size). Assuming the center position of the first presentation area is (xc1, yc1), the second presentation area is the square area whose center position is (xc1, yc1) and whose side length is √2 × r2.
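The inscribed-square rule above can be sketched as follows; this is a hypothetical helper, and the corner-ordering convention of the returned box is an assumption.

```python
import math

# Hypothetical helper for the size-threshold rule: the largest square that fits
# inside the circle of inner-ring radius r2 has side sqrt(2) * r2.
def second_presentation_square(cx, cy, r2, side=None):
    limit = math.sqrt(2) * r2                            # size threshold
    side = limit if side is None else min(side, limit)   # target size <= threshold
    half = side / 2
    # (left, top, right, bottom), centred on the ring centre (cx, cy)
    return (cx - half, cy - half, cx + half, cy + half)
```

By construction, each corner of the maximal square lies exactly on the inner-ring circle, so the square never overlaps the annular first presentation area.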
According to the technical scheme, the size threshold is determined based on the inner ring radius and the preset shape of the first presentation area, and the target size is set to be smaller than or equal to the size threshold, so that the second stimulation object and the first stimulation object can be prevented from being overlapped, and the second stimulation object presented on the stimulation interface and the first stimulation object are ensured not to be interfered with each other.
Presenting the second stimulus object in the second presentation area is achieved by playing the target video in the second presentation area at the target size. For example, as shown in fig. 6, the background of the target video picture other than the target portion is transparent, so only the target portion of the picture needs to be drawn when presenting the second stimulus object in the second presentation area.
It should be noted that the stimulus interface presents the first stimulus object and the second stimulus object simultaneously. When there are a plurality of target stimulus objects to be presented, the stimulus interface presents the first stimulus object and the second stimulus object of each of the plurality of target stimulus objects at the same time. Different target stimulus objects differ in preset flicker frequency and/or target video, and different target videos correspond to different target parts. Assuming there are 3 target videos whose corresponding target parts are the left hand, the right hand, and the two feet respectively, and the preset flicker frequencies are fj, j = 1, 2, …, J, then there are 3×J target stimulus objects that can be presented on the stimulus interface.
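The combination of target videos and preset flicker frequencies can be illustrated as follows; the specific frequency values are hypothetical, chosen only to show that 3 videos with J frequencies yield 3×J presentable targets.

```python
from itertools import product

# Illustrative enumeration: each (target video, flicker frequency) pair is one
# presentable target stimulus object. Frequency values are hypothetical.
videos = ["left hand", "right hand", "two feet"]   # target parts of the 3 target videos
freqs = [9.25, 11.25, 13.25]                       # f_1 .. f_J with J = 3, in Hz
targets = list(product(videos, freqs))             # 3 x J = 9 targets
```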
S240, adjusting the brightness of the first stimulation object based on the current time stamp of the target video and the preset flicker frequency in the playing process of the target video.
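A minimal sketch of S240, assuming the sine-wave brightness change function mentioned later in this disclosure: the target brightness is sampled from L(t) = 0.5 × (1 + sin(2πft)) at the current timestamp t of the target video, so the ring flickers at the preset frequency f while the video plays. The function name is hypothetical.

```python
import math

# Hypothetical sketch of S240, assuming a sinusoidal brightness change
# function L(t) = 0.5 * (1 + sin(2*pi*f*t)) sampled at the video timestamp.
def ring_brightness(timestamp_s, flicker_hz):
    """Target brightness in [0, 1] for the first stimulus object."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * flicker_hz * timestamp_s))
```

At f = 10 Hz the brightness starts at 0.5, peaks at 1.0 a quarter period (25 ms) later, and reaches 0.0 at three quarters of a period, repeating every 100 ms for the duration of playback.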
According to the technical scheme of this embodiment, presenting a target stimulation object in a stimulation interface in response to a target presentation triggering operation includes: responding to the target presentation triggering operation, and acquiring the association information of the stimulation interface of the target stimulation object to be presented and the number of objects of the target stimulation object to be presented; determining a first presentation area based on the association information of the stimulation interface and the number of objects, and presenting the first stimulation object in the first presentation area, wherein the first presentation area is an annular area; and determining a second presentation area based on the first presentation area, and presenting the second stimulation object in the second presentation area. By acquiring the number of target stimulation objects to be presented and using the number of objects to assist in determining the first presentation areas, the first presentation areas corresponding to one or more target stimulation objects can be determined in the stimulation interface, so that the stimulation interface presents a plurality of target stimulation objects at the same time, improving the target presentation performance.
Example III
Fig. 7 is a schematic structural diagram of a target presenting device of a hybrid brain-computer interface according to a third embodiment of the present invention. As shown in fig. 7, the apparatus includes:
A target stimulus object presentation module 310 for presenting a target stimulus object in a stimulus interface in response to a target presentation trigger operation; the target stimulation object comprises a first stimulation object and a second stimulation object, wherein the first stimulation object is an annular stimulation object, and the second stimulation object is a target video; the second stimulus object is presented in a circular area surrounded by the first stimulus object; the target video comprises a picture for executing preset movement of a target part;
the brightness adjustment module 320 is configured to adjust the brightness of the first stimulus object based on the current timestamp of the target video and the preset flicker frequency during the playing process of the target video.
According to the technical scheme, a target stimulation object is presented in a stimulation interface by responding to a target presentation triggering operation; the target stimulation object comprises a first stimulation object and a second stimulation object, wherein the first stimulation object is an annular stimulation object, and the second stimulation object is a target video; the second stimulus object is presented in a circular area surrounded by the first stimulus object; the target video comprises a picture for executing preset movement of a target part; and in the playing process of the target video, adjusting the brightness of the first stimulation object based on the current time stamp of the target video and the preset flicker frequency. By taking the target video as the second stimulus object instead of taking the static image as the second stimulus object, the problem that the ERD phenomenon caused by the fact that the tested object is guided to complete the motor imagery task based on the static image is not obvious is solved, the frame rate of the target video is higher than the switching frequency of a plurality of static images, the influence of the refreshing of the target video image on the mu rhythm of the motor imagery electroencephalogram signal can be reduced, and the follow-up accuracy of the motor imagery electroencephalogram signal analysis is improved. 
By presenting the second stimulus object in the circular area surrounded by the first stimulus object and adjusting the brightness of the first stimulus object while the target video plays, a flicker stimulus is applied around the periphery of the target video during playback, so that the tested subject can perform the steady-state visual evoked potential task and the motor imagery task simultaneously simply by watching the second stimulus object in the target stimulus object, which reduces the complexity of the mixed steady-state visual evoked potential and motor imagery task.
Based on the above embodiments, optionally, the target stimulus object presentation module 310 is specifically configured to: responding to target presentation triggering operation, and acquiring association information of a stimulation interface of a target stimulation object to be presented and the number of objects of the target stimulation object to be presented; determining a first presentation area based on the association information of the stimulation interface and the number of objects, and presenting the first stimulation objects in the first presentation area, wherein the first presentation area is an annular area; a second presentation area is determined based on the first presentation area and a second stimulus object is presented in the second presentation area.
Based on the above embodiment, optionally, the target stimulus object presentation module 310 is further configured to: determining a first visual deflection angle and a second visual deflection angle corresponding to a target stimulation object, wherein the first visual deflection angle is larger than the second visual deflection angle; determining the position of a key point corresponding to the target stimulation object based on the association information of the stimulation interface and the number of the objects; determining the corresponding annular size of the target stimulation object based on the associated information of the stimulation interface, the first visual deflection angle and the second visual deflection angle; the annular dimensions include an outer annular dimension and an inner annular dimension; and determining a first presentation area corresponding to the target stimulation object based on the key point position, the outer ring size and the inner ring size.
On the basis of the above embodiment, optionally, the association information of the stimulation interface includes resolution and interface size of the stimulation interface and a target distance between the stimulated object and the stimulation interface; accordingly, the target stimulus object presentation module 310 is further configured to: determining a unit resolution of the stimulation interface based on the resolution of the stimulation interface and the interface size; determining the outer ring size corresponding to the target stimulation object based on the first visual deflection angle, the unit resolution and the target distance; and determining the inner ring size corresponding to the target stimulus object based on the second visual deflection angle, the unit resolution and the target distance corresponding to the target stimulus object.
Based on the above embodiment, optionally, the target stimulus object presentation module 310 is further configured to: determining a size threshold based on the inner ring radius of the first presentation area and the preset shape, and determining a target size based on the size threshold, wherein the target size is less than or equal to the size threshold; a second presentation area is determined that is located in a circular area surrounded by the first presentation area based on the target size and the preset shape.
Based on the above embodiment, the optional brightness adjustment module 320 is specifically configured to: determining a brightness change function of the first stimulation object based on a preset flicker frequency; determining target brightness corresponding to the first stimulation object based on the brightness change function and the current timestamp of the target video; the brightness of the first stimulus object is adjusted to the target brightness.
Based on the above embodiment, optionally, the brightness change function may be a sine wave function.
The target presenting device of the hybrid brain-computer interface provided by the embodiment of the invention can execute the target presenting method of the hybrid brain-computer interface provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example IV
Fig. 8 is a schematic structural diagram of an electronic device implementing a target presentation method of a hybrid brain-computer interface according to an embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 8, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the target presentation method of the hybrid brain-computer interface.
In some embodiments, the target presentation method of the hybrid brain-computer interface may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the above-described target presentation method of the hybrid brain-computer interface may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the target presentation method of the hybrid brain-computer interface in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
The computer program for implementing the target presentation method of the hybrid brain-computer interface of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
Example five
The fifth embodiment of the present invention also provides a computer readable storage medium storing computer instructions for causing a processor to execute a target presenting method of a hybrid brain-computer interface, the method comprising:
presenting a target stimulation object in a stimulation interface in response to a target presentation triggering operation, wherein the target stimulation object comprises a first stimulation object and a second stimulation object, the first stimulation object is an annular stimulation object, the second stimulation object is a target video, the second stimulation object is presented in the circular area surrounded by the first stimulation object, and the target video comprises a picture of a target part performing a preset movement; and during playing of the target video, adjusting the brightness of the first stimulation object based on the current timestamp of the target video and a preset flicker frequency.
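Locking the ring's brightness to the video timestamp corresponds to the sampled sinusoidal modulation commonly used for SSVEP stimuli: luminance(t) = 0.5 · (1 + sin(2πft)). A minimal sketch of this step, assuming a luminance range of 0–1 and a function name not taken from the specification:

```python
import math

def ring_luminance(t_seconds: float, flicker_hz: float) -> float:
    """Luminance in [0, 1] of the annular (first) stimulation object at
    video time t_seconds, for a preset flicker frequency flicker_hz.

    Sampled sinusoidal modulation: 0.5 * (1 + sin(2*pi*f*t)).
    """
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * flicker_hz * t_seconds))

# On each rendered frame, read the current timestamp of the target video and
# set the ring's brightness from it, so flicker stays phase-locked to playback
# even if frames are dropped.
frame_times = [i / 60.0 for i in range(60)]  # e.g. one second at 60 Hz
levels = [ring_luminance(t, flicker_hz=10.0) for t in frame_times]
```

Driving the modulation from the video timestamp rather than a frame counter is what keeps the SSVEP flicker and the motion video (the second stimulation object) synchronized.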
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that the various forms of flow shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A target presentation method of a hybrid brain-computer interface, comprising:
presenting a target stimulation object in a stimulation interface in response to a target presentation triggering operation; wherein the target stimulation object comprises a first stimulation object and a second stimulation object, the first stimulation object is an annular stimulation object, and the second stimulation object is a target video; the second stimulation object is presented in a circular area surrounded by the first stimulation object; and the target video comprises a picture of a target part performing a preset movement;
and during playing of the target video, adjusting the brightness of the first stimulation object based on a current timestamp of the target video and a preset flicker frequency.
2. The method of claim 1, wherein presenting the target stimulation object in the stimulation interface in response to the target presentation triggering operation comprises:
in response to the target presentation triggering operation, acquiring association information of a stimulation interface in which the target stimulation object is to be presented and the number of target stimulation objects to be presented;
determining a first presentation area based on the association information of the stimulation interface and the number of objects, and presenting the first stimulation object in the first presentation area, wherein the first presentation area is an annular area; and
determining a second presentation area based on the first presentation area, and presenting the second stimulation object in the second presentation area.
3. The method of claim 2, wherein determining the first presentation area corresponding to the target stimulation object based on the association information of the stimulation interface and the number of objects comprises:
determining a first visual deflection angle and a second visual deflection angle corresponding to the target stimulation object, wherein the first visual deflection angle is larger than the second visual deflection angle;
determining a key point position corresponding to the target stimulation object based on the association information of the stimulation interface and the number of objects;
determining an annular size corresponding to the target stimulation object based on the association information of the stimulation interface, the first visual deflection angle and the second visual deflection angle, wherein the annular size comprises an outer ring size and an inner ring size; and
determining the first presentation area corresponding to the target stimulation object based on the key point position, the outer ring size and the inner ring size.
4. The method of claim 3, wherein the association information of the stimulation interface comprises a resolution and an interface size of the stimulation interface and a target distance between a stimulated subject and the stimulation interface;
and wherein determining the outer ring size and the inner ring size corresponding to the target stimulation object based on the association information of the stimulation interface, the first visual deflection angle and the second visual deflection angle comprises:
determining a unit resolution of the stimulation interface based on the resolution of the stimulation interface and the interface size;
determining the outer ring size corresponding to the target stimulation object based on the first visual deflection angle, the unit resolution and the target distance; and
determining the inner ring size corresponding to the target stimulation object based on the second visual deflection angle, the unit resolution and the target distance.
5. The method of claim 2, wherein determining the second presentation area based on the first presentation area comprises:
determining a size threshold based on an inner ring radius of the first presentation area and a preset shape, and determining a target size based on the size threshold, wherein the target size is less than or equal to the size threshold; and
determining, based on the target size and the preset shape, the second presentation area located in the circular area surrounded by the first presentation area.
6. The method of claim 1, wherein adjusting the brightness of the first stimulation object based on the current timestamp of the target video and the preset flicker frequency comprises:
determining a brightness change function of the first stimulation object based on the preset flicker frequency;
determining a target brightness corresponding to the first stimulation object based on the brightness change function and the current timestamp of the target video; and
adjusting the brightness of the first stimulation object to the target brightness.
7. The method of claim 6, wherein the brightness variation function is a sine wave function.
8. A target presentation device of a hybrid brain-computer interface, comprising:
a target stimulation object presentation module, configured to present a target stimulation object in a stimulation interface in response to a target presentation triggering operation; wherein the target stimulation object comprises a first stimulation object and a second stimulation object, the first stimulation object is an annular stimulation object, and the second stimulation object is a target video; the second stimulation object is presented in a circular area surrounded by the first stimulation object; and the target video comprises a picture of a target part performing a preset movement; and
a brightness adjustment module, configured to adjust, during playing of the target video, the brightness of the first stimulation object based on a current timestamp of the target video and a preset flicker frequency.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the target presentation method of the hybrid brain-computer interface of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to execute the target presentation method of the hybrid brain-computer interface of any one of claims 1-7.
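Claims 3-5 size the annulus from visual deflection angles, the interface's unit resolution, and the viewing distance. Under the usual screen geometry, a stimulus subtending θ degrees at distance d spans 2·d·tan(θ/2) on the screen, which the unit resolution converts to pixels. A sketch under those assumptions (the function names and the example numbers are illustrative, not taken from the claims):

```python
import math

def unit_resolution(pixels: int, size_cm: float) -> float:
    """Pixels per centimetre along one axis of the stimulation interface,
    from the interface resolution and physical interface size."""
    return pixels / size_cm

def ring_diameter_px(angle_deg: float, distance_cm: float,
                     px_per_cm: float) -> float:
    """On-screen diameter in pixels of a ring subtending angle_deg of
    visual angle for a subject at distance_cm from the interface."""
    return 2.0 * distance_cm * math.tan(math.radians(angle_deg) / 2.0) * px_per_cm

# Example: a 1920-px-wide display that is 53 cm wide, subject 60 cm away.
ppcm = unit_resolution(1920, 53.0)
outer = ring_diameter_px(3.0, 60.0, ppcm)  # first (larger) visual deflection angle
inner = ring_diameter_px(2.0, 60.0, ppcm)  # second (smaller) visual deflection angle
```

Because the first visual deflection angle exceeds the second, the outer ring size always exceeds the inner ring size, leaving the central circular area for the target video of claim 1.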
CN202410436793.5A 2024-04-11 2024-04-11 Target presentation method, device, equipment and medium of hybrid brain-computer interface Pending CN118210409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410436793.5A CN118210409A (en) 2024-04-11 2024-04-11 Target presentation method, device, equipment and medium of hybrid brain-computer interface


Publications (1)

Publication Number Publication Date
CN118210409A true CN118210409A (en) 2024-06-18

Family

ID=91448090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410436793.5A Pending CN118210409A (en) 2024-04-11 2024-04-11 Target presentation method, device, equipment and medium of hybrid brain-computer interface

Country Status (1)

Country Link
CN (1) CN118210409A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination