CN113641246A - Method and device for determining user concentration degree, VR equipment and storage medium - Google Patents

Method and device for determining user concentration degree, VR equipment and storage medium Download PDF

Info

Publication number
CN113641246A
Authority
CN
China
Prior art keywords
user
space data
visual focus
determining
concentration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110984185.4A
Other languages
Chinese (zh)
Inventor
凤阳
陈晓东
谷杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou Lezhi Education Technology Co ltd
Original Assignee
Lanzhou Lezhi Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou Lezhi Education Technology Co ltd filed Critical Lanzhou Lezhi Education Technology Co ltd
Priority to CN202110984185.4A priority Critical patent/CN113641246A/en
Publication of CN113641246A publication Critical patent/CN113641246A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a method and a device for determining user concentration, a VR device and a storage medium. The method is applied to the VR device and comprises the following steps: in a virtual reality-based service environment, determining, before the service proceeds, a virtual reality space which the user should pay attention to; in the virtual reality-based service environment, recording, in real time, visual focus space data of the user during the service; and determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space. Because the VR device determines the concentration degree of the user from the intersection of the user's visual focus space data with the virtual reality space the user should pay attention to, the demands on the server and the network, and on the time and observation ability of some of the users, can be reduced.

Description

Method and device for determining user concentration degree, VR equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of virtual reality, in particular to a method and a device for determining user concentration degree, VR equipment and a storage medium.
Background
Virtual reality, as the name implies, is the combination of the virtual and the real. In theory, virtual reality (VR) technology is a computer simulation system that can create and let users experience a virtual world: a computer is used to generate a simulated environment into which the user is immersed. Virtual reality technology combines electronic signals generated by computer technology with data from real life and converts them into phenomena that people can perceive. These phenomena may be real objects that exist in reality, or substances that cannot be seen with the naked eye, and they are presented through three-dimensional models. Because these phenomena are not observed directly but are a real world simulated by computer technology, they are called virtual reality.
In a virtual reality-based environment (such as a classroom environment), all users (such as teachers and students) are in a 720-degree panoramic virtual reality scene, so some users (such as teachers) cannot directly observe the state of other users (such as how students behave in class). At present, the views of those other users (such as students) are therefore uploaded and observed through real-time video in order to determine their concentration (such as classroom concentration). This conventional approach of uploading the other users' views and watching the video in real time places high demands on the server, the network, and the time and observation ability of some users (such as teachers).
Disclosure of Invention
In order to solve the technical problem that the conventional approach of uploading other users' views and watching the video in real time places high demands on the server, the network, and the time and observation ability of some users, the embodiments of the present invention provide a method and a device for determining user concentration, a VR device and a storage medium.
In a first aspect of the embodiments of the present invention, there is provided a method for determining a user concentration degree, where the method is applied to a VR device, and includes:
in a virtual reality-based service environment, determining, before the service proceeds, a virtual reality space which a user should pay attention to;
in the virtual reality-based service environment, recording, in real time, visual focus space data of the user during the service;
and determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space.
In an optional embodiment, the determining, before the business is performed, a virtual reality space that a user should pay attention to includes:
determining a virtual reality space which a user should pay attention to before service is carried out, and marking the virtual reality space in a spherical mark mode;
determining a concentration degree of the user according to an intersection of the visual focus space data of the user and the virtual reality space, including:
determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
In an alternative embodiment, the determining the concentration of the user based on the intersection of the visual focus space data of the user with the spherical marker comprises:
determining attention parameters corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark;
and determining the concentration degree of the user by utilizing the attention degree parameter corresponding to the visual focus space data of the user.
In an optional embodiment, the determining, according to the intersection of the visual focus space data of the user and the spherical marker, a degree of attention parameter corresponding to the visual focus space data of the user includes:
if the visual focus space data of the user and the spherical mark have an intersection point, determining a first attention parameter as an attention parameter corresponding to the visual focus space data of the user;
and if the visual focus space data of the user does not have an intersection point with the spherical mark, determining a second attention parameter as an attention parameter corresponding to the visual focus space data of the user.
In an optional embodiment, the determining, according to the intersection of the visual focus space data of the user and the spherical marker, a degree of attention parameter corresponding to the visual focus space data of the user includes:
determining the intersection condition of the visual focus space data of the user and the spherical mark according to a preset intersection determination period;
and determining attention parameters corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
In an optional embodiment, the determining the concentration of the user by using the attention parameter corresponding to the visual focus space data of the user includes:
intercepting the visual focus space data of the user for a target time period from the visual focus space data of the user;
dividing the visual focus space data of the user for the target time period into the visual focus space data of the user for N time periods;
searching a target attention parameter corresponding to the visual focus space data of the user in each time period from the attention parameter corresponding to the visual focus space data of the user;
determining the parameter number of the target attention parameter corresponding to the visual focus space data of the user in the target time period;
and determining the concentration degree of the user by using the target attention degree parameter and the parameter quantity.
In an optional embodiment, the determining the concentration of the user by using the target attention parameter and the number of parameters includes:
inputting the target attention parameter and the parameter quantity into a concentration determination model, and acquiring the concentration of the user output by the concentration determination model;
wherein the concentration determination model comprises:
X=Total[Average(X1,X2,……,XN)]/M;
the X comprises the user's concentration, the X1, X2, … …, XN each comprise the target concentration parameter, and the M comprises the number of parameters.
In an alternative embodiment, the method further comprises:
searching a concentration degree interval corresponding to the concentration degree of the user, and determining the target concentration degree grade corresponding to the concentration degree interval as the concentration degree grade of the user.
In an optional embodiment, the method further comprises:
and sending the concentration degree grade of the user to a cloud end so that the cloud end can count the number of the users corresponding to each concentration degree grade according to the concentration degree grade of the user, and evaluate the quality of the service according to the number of the users corresponding to each concentration degree grade.
In a second aspect of the embodiments of the present invention, there is provided an apparatus for determining user attentiveness, the apparatus being applied to a VR device, and including:
the space determining module is used for determining, in a virtual reality-based service environment, a virtual reality space which a user should pay attention to before the service is carried out;
the data recording module is used for recording, in the virtual reality-based service environment, visual focus space data of the user in real time during the service;
a concentration determination module for determining the concentration of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space.
In a third aspect of the embodiments of the present invention, there is further provided a VR device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor configured to implement the method for determining the user concentration according to the first aspect when executing the program stored in the memory.
In a fourth aspect of the embodiments of the present invention, there is also provided a storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method for determining the user concentration degree described in the above first aspect.
In a fifth aspect of embodiments of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for determining user attentiveness as described in the first aspect above.
According to the technical solution provided by the embodiment of the invention, in a virtual reality-based service environment, the virtual reality space that the user should pay attention to is determined before the service proceeds; the visual focus space data of the user during the service is recorded in real time; and the concentration degree of the user is determined according to the intersection condition of the visual focus space data of the user and the virtual reality space. Because the VR device determines the concentration degree of the user from the intersection of the user's visual focus space data with the virtual reality space the user should pay attention to, the demands on the server and the network, and on the time and observation ability of some of the users, can be reduced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for determining a user concentration degree according to an embodiment of the present invention;
fig. 2 is a schematic view of a scene marked in a virtual reality space in the form of a spherical marker in the embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating an implementation of another method for determining a user concentration degree according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a scenario of an intersection of visual focus space data of a student with a spherical marker in an embodiment of the present invention;
fig. 5 is a schematic view of a scenario that a VR device returns a concentration level of a student to a cloud in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a user concentration determination device shown in the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a VR device shown in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, an implementation flow diagram of a method for determining concentration of a user according to an embodiment of the present invention is provided, where the method is applied to a VR device (e.g., a VR headset), and specifically includes the following steps:
s101, in a virtual reality-based business environment, a virtual reality space which a user should pay attention to is determined before business is carried out.
In a virtual reality-based business environment, the business initiator can specify, for a business participant, i.e., a user, the virtual reality space that should be paid attention to before the business proceeds, and this virtual reality space contains some business models that the user should pay attention to. Therefore, the business participant's VR device, i.e., the VR device worn by the user, can determine, in the virtual reality-based business environment, the virtual reality space that the user should pay attention to before the business proceeds.
Further, in a virtual reality-based business environment, a business initiator may specify a virtual reality space that a business participant, i.e., a user, should pay attention to before a business proceeds, and specify that the virtual reality space is marked in the form of a spherical marker. Thus, a service participant, i.e., a VR device worn by a user, can determine, in a virtual reality-based service environment, a virtual reality space that the user should be interested in before the service proceeds, and mark the virtual reality space in the form of a spherical marker.
It should be noted that the spherical marker is simply one convenient form of marking; other types of markers, such as a cube marker, may also be used, which is not limited in the embodiment of the present invention.
For example, in a virtual reality-based classroom environment, the teacher specifies, before the class proceeds, the virtual reality space that students should pay attention to, and specifies that this space is marked in the form of a spherical marker, as shown in fig. 2, in which a teaching model (not shown in the figure) exists. Therefore, the VR device worn by a student can determine, in the virtual reality-based classroom environment, the virtual reality space that the user should pay attention to before the class proceeds, and mark the virtual reality space in the form of a spherical marker.
It should be noted that the service may be, for example, a classroom or a conference, the corresponding service environment may be, for example, a classroom environment or a conference environment, and the corresponding service model may be, for example, a teaching model or a conference model, which is not limited in the embodiment of the present invention.
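As a concrete illustration only (no such code appears in the original disclosure), the following Python sketch shows one way the marked space could be represented on the VR device; the SphericalMarker structure and the example coordinates are assumptions made for this sketch.

```python
# Not part of the patent text: a minimal sketch of how the spherical marker around
# the virtual reality space to be watched might be represented on the VR device.
from dataclasses import dataclass

@dataclass
class SphericalMarker:
    """Spherical marker wrapping the virtual reality space the user should watch."""
    center: tuple[float, float, float]   # marker centre in VR world coordinates (assumed)
    radius: float                        # marker radius, same units as the scene (assumed)

# Example: a teaching model in front of the virtual podium, wrapped in a 1.5-unit sphere
# (illustrative values only).
classroom_marker = SphericalMarker(center=(0.0, 1.2, 2.0), radius=1.5)
```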
S102, in the virtual reality-based service environment, recording the visual focus space data of the user during the service in real time.
In a virtual reality-based service environment, a service participant, namely VR equipment worn by a user, can record visual focus space data of the user in a service process in real time.
For example, in a virtual reality-based classroom environment, VR devices worn by students can record visual focus spatial data of the students in the course of a classroom in real-time.
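As an illustration of this step, the sketch below shows one assumed way of recording visual focus space data in real time as gaze rays; the GazeSample structure, the read_headset_pose callback and the 60 Hz sampling rate are all assumptions, since the patent does not specify the data format.

```python
# Not part of the patent text: record visual focus space data as one gaze ray
# (origin + unit direction) per sample. read_headset_pose stands in for whatever
# pose/gaze API the VR runtime provides and is an assumption.
import time
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float                        # seconds since the session started
    origin: tuple[float, float, float]      # eye/head position in VR world space
    direction: tuple[float, float, float]   # unit gaze direction

def record_gaze(read_headset_pose, duration_s: float, rate_hz: float = 60.0) -> list[GazeSample]:
    """Poll the headset pose reader and collect gaze samples for duration_s seconds."""
    samples: list[GazeSample] = []
    start = time.monotonic()
    period = 1.0 / rate_hz
    while (now := time.monotonic()) - start < duration_s:
        origin, direction = read_headset_pose()    # supplied by the VR runtime (assumed)
        samples.append(GazeSample(now - start, origin, direction))
        time.sleep(period)
    return samples
```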
S103, determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space.
For the visual focus space data of the user in the real-time recorded service process, the service participant, namely the VR device worn by the user, can determine the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space which the user should pay attention to.
To facilitate determining the user's concentration degree, for the visual focus space data of the user recorded in real time during the service, the service participant, namely the VR device worn by the user, can determine the concentration degree of the user according to the intersection condition of the user's visual focus space data and the spherical marker.
Specifically, as shown in fig. 3, for another implementation flow diagram of the method for determining the concentration degree of the user according to the embodiment of the present invention, the method is applied to a VR device (e.g., a VR headset), and specifically may include the following steps:
s301, determining a focus degree parameter corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
For the visual focus space data of the user in the real-time recorded service process, the service participant, namely the VR device worn by the user, can determine the attention parameter corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
Specifically, for the visual focus space data of the user during the service process recorded in real time, if the visual focus space data of the user and the spherical mark have an intersection, a service participant, that is, a VR device worn by the user, determines that the first attention parameter is the attention parameter corresponding to the visual focus space data of the user.
And for the visual focus space data of the user in the real-time recorded service process, if the intersection point does not exist between the visual focus space data of the user and the spherical mark, determining that the second attention parameter is the attention parameter corresponding to the visual focus space data of the user by a service participant, namely VR equipment worn by the user.
For example, for the visual focus space data of student A during the class recorded in real time, if there is an intersection between the visual focus space data of student A and the spherical marker, as shown in fig. 4, the VR device worn by student A determines 1 as the attention parameter corresponding to the visual focus space data of student A.
For example, for the visual focus space data of student B during the class recorded in real time, if there is no intersection between the visual focus space data of student B and the spherical marker, as shown in fig. 4, the VR device worn by student B determines 0 as the attention parameter corresponding to the visual focus space data of student B.
In addition, in the embodiment of the present invention, for the visual focus space data of the user during the service process recorded in real time, the service participant, that is, the VR device worn by the user, may determine the intersection condition of the visual focus space data of the user and the spherical mark according to a preset intersection determination period, and determine the attention parameter corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
For example, for the visual focus space data of student A during the class recorded in real time, the VR device worn by student A determines, every 1 second, the intersection condition of the visual focus space data of student A and the spherical marker; if there is an intersection, 1 is determined as the attention parameter corresponding to the visual focus space data of student A, and otherwise 0 is determined as the attention parameter corresponding to the visual focus space data of student A.
It should be noted that the preset intersection determination period may be, for example, 1 second or 0.5 second, which is not limited in the embodiment of the present invention. In addition, the first attention parameter and the second attention parameter may be set according to actual requirements, which is not limited in the embodiment of the present invention either.
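The intersection test itself can be pictured as a simple ray-sphere check. The sketch below, which reuses the GazeSample and SphericalMarker structures assumed earlier, is one possible implementation; the choice of 1 and 0 as the first and second attention parameters follows the example above.

```python
# Not part of the patent text: a minimal ray-sphere intersection test for one gaze sample.
def gaze_hits_sphere(origin, direction, center, radius) -> bool:
    """True if the gaze ray (origin, unit direction) intersects the sphere."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = center[0] - ox, center[1] - oy, center[2] - oz
    t_closest = lx * dx + ly * dy + lz * dz               # projection of the centre onto the ray
    dist_from_origin_sq = lx * lx + ly * ly + lz * lz
    if t_closest < 0 and dist_from_origin_sq > radius * radius:
        return False                                       # sphere lies behind the viewer
    dist_to_ray_sq = dist_from_origin_sq - t_closest * t_closest
    return dist_to_ray_sq <= radius * radius

def attention_parameter(sample: "GazeSample", marker: "SphericalMarker") -> int:
    """1 (first attention parameter) when the gaze intersects the marker, else 0 (second)."""
    return 1 if gaze_hits_sphere(sample.origin, sample.direction,
                                 marker.center, marker.radius) else 0
```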
S302, determining the concentration degree of the user by using the attention degree parameter corresponding to the visual focus space data of the user.
For the visual focus space data of the user recorded in real time during the service, the service participant, namely the VR device worn by the user, determines the attention degree parameter corresponding to the visual focus space data of the user according to the preset intersection determination period, and can then determine the concentration degree of the user based on these parameters, that is, the concentration degree of the user is determined by using the attention degree parameters corresponding to the visual focus space data of the user.
For example, for the visual focus space data of student A during the class recorded in real time, the VR device worn by student A determines the attention degree parameter corresponding to the visual focus space data of student A every 1 second, as shown in Table 1 below, so that the attention degree parameters corresponding to the visual focus space data of student A can be used to determine the concentration degree of student A, that is, the classroom concentration degree.
Visual focus space data of student A | Attention parameter
Visual focus space data of student A at second 1 | 0
Visual focus space data of student A at second 2 | 1
Visual focus space data of student A at second 3 | 1
…… | ……
TABLE 1
It should be noted that, for the visual focus space data of the user during the service process recorded in real time, the service participant, that is, the VR device worn by the user, determines the attention parameter corresponding to the visual focus space data of the user according to the preset intersection determination period, which means that for the visual focus space data of the user in each intersection determination period, there is a corresponding attention parameter, as shown in table 1 above.
In addition, in order to reduce the amount of calculation and improve the efficiency of determining the concentration of the user, for the visual focus space data of the user in the real-time recorded service proceeding process, the service participant, i.e., the VR device worn by the user, may intercept the visual focus space data of the user in the target time period from the visual focus space data, so that the visual focus space data of the user in the target time period may be divided into the visual focus space data of the user in N time periods.
And searching a target attention parameter corresponding to the visual focus space data of the user in each time period from the attention parameters corresponding to the visual focus space data of the user in the real-time recorded service process, and determining the parameter number of the target attention parameter corresponding to the visual focus space data of the user in the target time period, so that the attention of the user is determined by using the target attention parameter and the parameter number.
For example, for the visual focus space data of student A during the class recorded in real time, the VR device worn by student A may intercept from it the visual focus space data of student A in the past N seconds, so that the visual focus space data of student A in the past N seconds can be divided into N pieces of 1-second visual focus space data of student A, namely the data of the 1st second, the 2nd second, ……, and the Nth second of the past N seconds.
From the attention parameters corresponding to the visual focus space data recorded in real time during the class, as shown in Table 1 above, the target attention parameters corresponding to the visual focus space data of student A in each of the 1st, 2nd, ……, and Nth seconds of the past N seconds are looked up, and the number of these target attention parameters in the past N seconds is determined, so that the concentration degree of student A can be determined by using the target attention parameters and the number of parameters.
For the target attention degree parameters corresponding to the visual focus space data of the user in each time period and the number of those parameters in the target time period, the service participant, that is, the VR device worn by the user, may input the target attention degree parameters and the parameter number into a concentration determination model and obtain the concentration degree of the user output by the concentration determination model, where the concentration determination model comprises:
X=Total[Average(X1,X2,……,XN)]/M;
the X comprises the user's concentration, the X1, X2, … …, XN each comprise the target concentration parameter, and the M comprises the number of parameters.
For example, for the parameter number M of the target attention degree parameters corresponding to the visual focus space data of student A in the past N seconds, and the target attention degree parameters corresponding to the visual focus space data of student A in the 1st, 2nd, ……, and Nth seconds of the past N seconds, the VR device worn by student A inputs the target attention degree parameters and the parameter number into the concentration determination model and obtains the concentration degree of student A output by the concentration determination model.
Note that X1 represents the target attention parameter corresponding to the visual focus space data of student A in the 1st second of the past N seconds, X2 represents the target attention parameter corresponding to the visual focus space data of student A in the 2nd second of the past N seconds, ……, and XN represents the target attention parameter corresponding to the visual focus space data of student A in the Nth second of the past N seconds.
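The model can be read in more than one way; the sketch below implements one plausible reading in which each Xi is the average attention parameter within the i-th one-second sub-period of the target time period, Total sums those averages, and M is the number of parameters. It reuses the attention_parameter helper assumed earlier and is illustrative only.

```python
# Not part of the patent text: one assumed reading of X = Total[Average(X1, X2, ..., XN)] / M.
def concentration(samples, marker, window_s: float = 60.0, period_s: float = 1.0) -> float:
    """Concentration over the last window_s seconds, as a value in [0.0, 1.0]."""
    if not samples:
        return 0.0
    end = samples[-1].timestamp
    recent = [s for s in samples if s.timestamp >= end - window_s]
    per_period: dict[int, list[int]] = {}
    for s in recent:
        # Bucket each sample into its sub-period and store its 0/1 attention parameter.
        per_period.setdefault(int(s.timestamp // period_s), []).append(
            attention_parameter(s, marker))
    averages = [sum(v) / len(v) for v in per_period.values()]   # X1 ... XN
    m = len(averages)                                           # parameter count M
    return sum(averages) / m if m else 0.0                      # Total[...] / M
```

Under this reading the result is a fraction between 0 and 1, which matches the percentage intervals used in Table 2 below.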
In addition, different concentration degree intervals are preset in the embodiment of the present invention, and each concentration degree interval has a corresponding concentration degree grade. For example, concentration interval 1, concentration interval 2 and concentration interval 3 are set in advance, and each concentration interval has a corresponding concentration level, as shown in Table 2 below.
Concentration interval | Concentration interval range | Concentration level
Concentration interval 1 | 90%~100% | Excellent
Concentration interval 2 | 60%~90% | Qualified
Concentration interval 3 | 0%~60% | Unqualified
TABLE 2
For the concentration degree of the user, the service participant, namely the VR device worn by the user, can search for the concentration degree interval corresponding to that concentration degree, and determine the target concentration degree grade corresponding to the concentration degree interval as the concentration degree grade of the user. For example, for the 92% concentration degree of student A, the VR device worn by student A can search for the corresponding concentration degree interval (concentration interval 1) and determine the concentration degree grade of student A as excellent.
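A minimal sketch of this interval lookup, using the thresholds of Table 2, could look as follows; the handling of the interval boundaries is an assumption, since the text does not state whether they are inclusive.

```python
# Not part of the patent text: map a concentration value to a level per Table 2.
def concentration_level(x: float) -> str:
    """Map a concentration in [0.0, 1.0] to a level according to Table 2."""
    if x >= 0.9:
        return "excellent"     # concentration interval 1: 90%~100%
    if x >= 0.6:
        return "qualified"     # concentration interval 2: 60%~90%
    return "unqualified"       # concentration interval 3: 0%~60%
```

For the example above, concentration_level(0.92) returns "excellent".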
For the concentration degree grade of the user, the service participant, namely the VR device worn by the user, can send it to the cloud. In this way, the VR devices worn by multiple service participants, i.e., users, send the corresponding concentration degree grades to the cloud, and the cloud counts the number of users corresponding to each concentration degree grade according to the concentration degree grades of the users, and evaluates the quality of the service according to the number of users corresponding to each concentration degree grade.
For example, the VR device worn by student 1 sends the concentration level of student 1 to the cloud, the VR device worn by student 2 sends the concentration level of student 2 to the cloud, … …, and the VR device worn by student N sends the concentration level of student N to the cloud, as shown in fig. 5, so that the cloud can count the number of students corresponding to each concentration level according to the concentration level of each student, as shown in table 3 below, so that the quality of the classroom can be evaluated according to the number of users corresponding to each concentration level.
Concentration level | Number of students
Excellent | 30
Qualified | 3
Unqualified | 2
TABLE 3
It should be noted that the more students fall into the excellent concentration level, the higher the quality of the class, which indirectly indicates that the teacher's class is more popular with the students.
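On the cloud side, the counting and evaluation step could be sketched as follows; the use of the share of "excellent" reports as the quality indicator is only an illustrative choice, since the patent does not fix a concrete evaluation rule.

```python
# Not part of the patent text: count the users reported at each concentration level,
# as in Table 3, and derive a simple class-quality indicator.
from collections import Counter

def summarize_class(reported_levels: list[str]) -> tuple[Counter, float]:
    counts = Counter(reported_levels)            # e.g. Counter({'excellent': 30, 'qualified': 3, ...})
    share = counts["excellent"] / len(reported_levels) if reported_levels else 0.0
    return counts, share

# Reproducing Table 3: 30 excellent, 3 qualified and 2 unqualified reports.
counts, excellent_share = summarize_class(
    ["excellent"] * 30 + ["qualified"] * 3 + ["unqualified"] * 2)
```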
Through the above description of the technical solution provided by the embodiment of the present invention: in a virtual reality-based service environment, the virtual reality space that the user should pay attention to is determined before the service proceeds; the visual focus space data of the user during the service is recorded in real time; and the concentration degree of the user is determined according to the intersection condition of the visual focus space data of the user and the virtual reality space.
Because the VR device determines the concentration degree of the user from the intersection of the user's visual focus space data with the virtual reality space the user should pay attention to, the concentration degree is determined by the VR device itself with only a small amount of power consumption, and the demands on the server, the network, and the time and observation ability of some of the users can be reduced. In addition, the concentration level of the user can be returned to the cloud so that the quality of the service can be evaluated.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides a device for determining user attentiveness, where as shown in fig. 6, the device is applied to a VR device, and may include: a space determination module 610, a data recording module 620, a concentration determination module 630.
A space determining module 610, configured to determine, in a virtual reality-based service environment, a virtual reality space that a user should pay attention to before a service is performed;
a data recording module 620, configured to record, in real time, visual focus space data of the user during a service process in a virtual reality-based service environment;
a concentration determination module 630, configured to determine a concentration of the user according to an intersection of the visual focus space data of the user and the virtual reality space.
An embodiment of the present invention further provides a VR device, as shown in fig. 7, including a processor 71, a communication interface 72, a memory 73, and a communication bus 74, where the processor 71, the communication interface 72, and the memory 73 complete mutual communication through the communication bus 74,
a memory 73 for storing a computer program;
the processor 71, when executing the program stored in the memory 73, implements the following steps:
in a virtual reality-based service environment, determining, before the service proceeds, a virtual reality space which a user should pay attention to; in the virtual reality-based service environment, recording, in real time, visual focus space data of the user during the service; and determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space.
The communication bus mentioned in the VR device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the VR device and other devices.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to perform the method for determining the user concentration as described in any of the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of determining user concentration as described in any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a storage medium or transmitted from one storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A method for determining user attentiveness, applied to a VR device, the method comprising:
in a virtual reality-based service environment, determining, before the service proceeds, a virtual reality space which a user should pay attention to;
in the virtual reality-based service environment, recording, in real time, visual focus space data of the user during the service;
and determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space.
2. The method of claim 1, wherein determining the virtual reality space that the user should focus on before the business is performed comprises:
determining a virtual reality space which a user should pay attention to before service is carried out, and marking the virtual reality space in a spherical mark mode;
determining a concentration degree of the user according to an intersection of the visual focus space data of the user and the virtual reality space, including:
determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
3. The method of claim 2, wherein determining the user's concentration from the intersection of the visual focus space data of the user with the spherical marker comprises:
determining attention parameters corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark;
and determining the concentration degree of the user by utilizing the attention degree parameter corresponding to the visual focus space data of the user.
4. The method of claim 3, wherein determining the attention parameter corresponding to the visual focus space data of the user according to the intersection of the visual focus space data of the user and the spherical marker comprises:
if the visual focus space data of the user and the spherical mark have an intersection point, determining a first attention parameter as an attention parameter corresponding to the visual focus space data of the user;
and if the visual focus space data of the user does not have an intersection point with the spherical mark, determining a second attention parameter as an attention parameter corresponding to the visual focus space data of the user.
5. The method of claim 3, wherein determining the attention parameter corresponding to the visual focus space data of the user according to the intersection of the visual focus space data of the user and the spherical marker comprises:
determining the intersection condition of the visual focus space data of the user and the spherical mark according to a preset intersection determination period;
and determining attention parameters corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
6. The method according to claim 4 or 5, wherein the determining the user's concentration using the attention parameter corresponding to the visual focus space data of the user comprises:
intercepting the visual focus space data of the user for a target time period from the visual focus space data of the user;
dividing the visual focus space data of the user for the target time period into the visual focus space data of the user for N time periods;
searching a target attention parameter corresponding to the visual focus space data of the user in each time period from the attention parameter corresponding to the visual focus space data of the user;
determining the parameter number of the target attention parameter corresponding to the visual focus space data of the user in the target time period;
and determining the concentration degree of the user by using the target attention degree parameter and the parameter quantity.
7. The method of claim 6, wherein the determining the user's concentration using the target attention parameter and the number of parameters comprises:
inputting the target attention parameter and the parameter quantity into a concentration determination model, and acquiring the concentration of the user output by the concentration determination model;
wherein the concentration determination model comprises:
X=Total[Average(X1,X2,……,XN)]/M;
the X comprises the user's concentration, the X1, X2, … …, XN each comprise the target concentration parameter, and the M comprises the number of parameters.
8. The method according to any one of claims 1 to 7, further comprising:
searching a concentration degree interval corresponding to the concentration degree of the user, and determining the target concentration degree grade corresponding to the concentration degree interval as the concentration degree grade of the user.
9. The method of claim 8, further comprising:
and sending the concentration degree grade of the user to a cloud end so that the cloud end can count the number of the users corresponding to each concentration degree grade according to the concentration degree grade of the user, and evaluate the quality of the service according to the number of the users corresponding to each concentration degree grade.
10. An apparatus for determining concentration degree of a user, the apparatus being applied to a VR device, comprising:
the space determining module is used for determining, in a virtual reality-based service environment, a virtual reality space which a user should pay attention to before the service is carried out;
the data recording module is used for recording, in the virtual reality-based service environment, visual focus space data of the user in real time during the service;
a concentration determination module for determining the concentration of the user according to the intersection condition of the visual focus space data of the user and the virtual reality space.
11. A VR device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 9 when executing a program stored on a memory.
12. A storage medium on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
CN202110984185.4A 2021-08-25 2021-08-25 Method and device for determining user concentration degree, VR equipment and storage medium Pending CN113641246A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110984185.4A CN113641246A (en) 2021-08-25 2021-08-25 Method and device for determining user concentration degree, VR equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110984185.4A CN113641246A (en) 2021-08-25 2021-08-25 Method and device for determining user concentration degree, VR equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113641246A 2021-11-12

Family

ID=78423872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110984185.4A Pending CN113641246A (en) 2021-08-25 2021-08-25 Method and device for determining user concentration degree, VR equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113641246A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063780A (en) * 2021-11-18 2022-02-18 兰州乐智教育科技有限责任公司 Method and device for determining user concentration degree, VR equipment and storage medium
CN114063780B (en) * 2021-11-18 2024-07-09 兰州乐智教育科技有限责任公司 Method and device for determining concentration of user, VR equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677906A (en) * 2015-05-07 2016-06-15 浚鸿数据开发股份有限公司 Automatic collecting and analyzing system and method for network events
CN107957775A (en) * 2016-10-18 2018-04-24 阿里巴巴集团控股有限公司 Data object exchange method and device in virtual reality space environment
CN106980983A (en) * 2017-02-23 2017-07-25 阿里巴巴集团控股有限公司 Service authentication method and device based on virtual reality scenario
CN109144271A (en) * 2018-09-07 2019-01-04 武汉轻工大学 Three-dimensional space audio frequency attention-degree analysis method, system, server and storage medium
CN111008542A (en) * 2018-10-08 2020-04-14 上海风创信息咨询有限公司 Object concentration analysis method and device, electronic terminal and storage medium
CN112732076A (en) * 2020-12-30 2021-04-30 江西格灵如科科技有限公司 Real-time teaching guidance method and system in virtual reality environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination