CN115981514A - Intelligent visual angle switching method for virtual simulation experiment - Google Patents

Intelligent visual angle switching method for virtual simulation experiment

Info

Publication number
CN115981514A
CN115981514A (application CN202211677708.1A; granted publication CN115981514B)
Authority
CN
China
Prior art keywords
mouse
local area
angle
visual
adjustment value
Prior art date
Legal status
Granted
Application number
CN202211677708.1A
Other languages
Chinese (zh)
Other versions
CN115981514B (en)
Inventor
王晓蒲
刘和伟
Current Assignee
Anhui Keda Aorui Technology Co., Ltd.
Original Assignee
Anhui Keda Aorui Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Anhui Keda Aorui Technology Co., Ltd. (2022-12-26)
Priority to CN202211677708.1A
Publication of CN115981514A (2023-04-18)
Application granted
Publication of CN115981514B (2023-10-03)
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an intelligent visual angle switching method for a virtual simulation experiment, which belongs to the technical field of visual angle switching and comprises the following steps. Step one: entering a virtual simulation experiment, acquiring a global visual angle of the experiment, and observing the overall scene of the experiment by a user through the global visual angle. Step two: identifying the position of the mouse in the global visual angle in real time, automatically positioning to the corresponding first local area according to the mouse position, and adjusting to the optimal visual distance of the first local area. Step three: when the user clicks the area where a component is located in the first local area with the mouse, automatically adjusting to the operating visual distance of the component. Step four: under the component visual angle, identifying the mouse position in real time, marking a second local area according to the mouse position, and zooming in to the optimal operating visual angle and operating distance of the second local area. Step five: when the user clicks the locking button under the component visual angle, the component visual angle changes into a locked state, and the user adjusts the components in the second local area in the locked state.

Description

Intelligent visual angle switching method for virtual simulation experiment
Technical Field
The invention belongs to the technical field of visual angle switching, and particularly relates to an intelligent visual angle switching method for a virtual simulation experiment.
Background
In practice, people's visual angle and visual distance must change continuously during all kinds of activities; work is completed by adjusting between global and local views and between far and near distances, and scientific experiments are especially so. To address this, decades ago, when 2D technology was used, the local operation and observation of an instrument were presented in pop-up windows. Complicated experiments even nested pop-up windows inside pop-up windows, and the endless pop-ups increased the complexity of the experiment and deviated from its real environment.
With the appearance of 3D technology and simulation experiments developed in 3D, operating a local part of an instrument requires continuously and repeatedly combining the keyboard with the left and right mouse buttons to adjust the visual angle and visual distance before experimental details such as observation and reading can be completed. Such screen operation is overly complicated and takes far more time than the experimental operation itself. Unfamiliar with these operations, experimenters cannot complete experiments smoothly, and the operations and content of the experiment are diluted. To solve this problem, the present invention provides an intelligent view switching method for virtual simulation experiments.
Disclosure of Invention
In order to solve the problems in the existing schemes, the invention provides an intelligent view angle switching method for a virtual simulation experiment.
The purpose of the invention can be realized by the following technical scheme:
the intelligent visual angle switching method for the virtual simulation experiment specifically comprises the following steps:
step one: entering a virtual simulation experiment, acquiring a global visual angle of the experiment, and observing an overall scene of the experiment by a user through the global visual angle;
step two: identifying the position of the mouse in the global visual angle in real time, automatically positioning to a corresponding first local area according to the position of the mouse, and adjusting the optimal visual distance of the first local area;
step three: when a user clicks the area where the component is located in the first local area through a mouse, automatically adjusting the operation visual distance of the component;
step four: identifying the position of the mouse in real time under the view angle of the component, marking a second local area according to the position of the mouse, and zooming in to adjust the optimal operation view angle and the optimal operation distance entering the second local area;
step five: when a user clicks the locking button under the view angle of the component, the view angle of the component is changed into a locking state, and the user adjusts the components in the second local area under the locking state.
Further, in steps two to four, the first local area or second local area where the mouse is located is marked as the initial area, and when the user moves the mouse from the initial area to another first local area or second local area, the screen is automatically adjusted to the optimal visual distance corresponding to that first local area or second local area.
Further, in steps two to four, the method for judging the target area according to the mouse position and automatically adjusting the visual angle and visual distance comprises the following steps:
defining a target area needing to be operated in an experimental scene, and recording the target area as A, the central position of the target area as C, the trigger radius of the target area as R, the zoom value of the target area as S, the operating visual distance under the target area as D, and the operating visual angle as E;
when the mouse enters the trigger radius R of the target area during experimental operation, the position of the mouse is defined as P and the movement vector of the mouse as vector M; the zoom value S1 of the target region is computed, and the scaled target area is A1 = S1 × A;

an intersection judgment between vector M and A1 is carried out: if vector M and A1 intersect, then S = S1; otherwise S = 0, and the real target area is A2 = S × A;

if the mouse position P ∉ A2, the mouse keeps moving and no processing is done; otherwise, the calculation of the process simulating the change of human visual angle and visual distance is carried out.
Further, the method for judging whether vector M and A1 intersect comprises:

computing the angle θ between vector M and vector PC, where vector PC points from the mouse position P to the target-area center C; if θ is greater than or equal to 90°, the two do not intersect;

if θ is less than 90°, computing the included angles θ1, θ2, θ3 and θ4 between vector M and the vectors PB1, PB2, PB3 and PB4 from P to the edge points B1, B2, B3 and B4 of region A1; the largest of θ1 to θ4 is marked θmax and the smallest θmin; when θmin ≤ θ ≤ θmax, vector M and A1 intersect; otherwise they do not.
Further, the method for calculating the process simulating the change of human visual angle and visual distance comprises:

when the mouse position P ∈ A2, the current visual distance is recorded as D1 and the current visual angle as E1; the adjustment progress is initialized as Q = 0 and the adjustment speed as V = 0; the visual distance to be adjusted is then D2 = D - D1 and the visual angle to be adjusted is E2 = E - E1;

whenever the length of vector M is zero, V = V + 0.001 and Q = Q + V;

the current visual distance is D3 = D1 + Q × D2 and the current visual angle is E3 = E1 + Q × E2;

when Q ≥ 1, the adjustment is finished; when P ∉ A2, the adjustment is also ended.
Further, before entering the first local area or the second local area, a dynamic reaction time is set, and the specific method comprises the following steps:
identifying the experiment historical operation duration of a user, and matching a corresponding first adjustment value according to the obtained experiment historical operation duration; acquiring operation data of a user within a period of time, and analyzing the acquired operation data to acquire a corresponding operation correction coefficient; identifying the current network speed in real time, and matching a corresponding second adjustment value according to the obtained network speed; and acquiring standard reaction time corresponding to each reaction time category, and calculating corresponding dynamic reaction time according to the acquired standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient.
Further, the method for matching the corresponding first adjustment value according to the obtained experimental historical operation duration comprises the following steps:
establishing a first adjustment value curve, acquiring the experimental historical operation duration needing to be matched, inputting the acquired experimental historical operation duration into the first adjustment value curve for matching, and acquiring a corresponding first adjustment value.
Further, the method for calculating the corresponding dynamic reaction time according to the obtained standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient comprises the following steps:
marking the standard reaction time as TBi, where i = 1, 2, …, n and n is a positive integer; setting an adjustment coefficient for each reaction time category and marking it as βi; marking the first adjustment value, the second adjustment value and the operation correction coefficient as TZ1, TZ2 and α respectively; and calculating the corresponding dynamic reaction time according to the formula TPi = TBi + βi × [α × (TZ1 + TZ2)].
Compared with the prior art, the invention has the beneficial effects that:
in the process of operating the simulation experiment, the invention represents the observation area of the person through the movement and the pointing of the mouse, simulates the adjustment change of the visual angle and the visual distance of the person, and the computer automatically realizes the entering into the optimal visual angle and visual distance state. The method for showing the local operation in the prior 2D and 3D experiments in a pop-up window mode is eliminated. Avoiding a large number of non-experimental screen operations and focusing all the effort on the operation and study of experimental content. The whole operation of the experiment becomes very smooth and is closer to the real behavior process.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the intelligent view angle switching method for the virtual simulation experiment specifically includes the following steps:
step one: entering a virtual simulation experiment, acquiring a global visual angle of the experiment, and observing an overall scene of the experiment by a user through the global visual angle;
step two: identifying the position of the mouse in the global visual angle in real time, automatically positioning to a corresponding first local area according to the position of the mouse, and adjusting the optimal visual distance of the first local area;
step three: when a user clicks the area where the component is located in the first local area through a mouse, automatically adjusting the operation visual distance of the component;
step four: identifying the position of the mouse in real time under the view angle of the component, marking a second local area according to the position of the mouse, and zooming in to adjust the optimal operation view angle and the optimal operation distance entering the second local area;
step five: when the user clicks the locking button under the component visual angle, the component visual angle enters a locked state: the view can no longer leave the component area or the current visual distance by the mouse entering other local areas, and the locked state can only be released by clicking the unlocking button. In the locked state the user adjusts the components in the second local area, that is, the current visual range is adjusted by translation, rotation, zooming and the like.
Illustratively, when an experiment is opened, a global perspective is entered, under which we can observe the entire scene of the experiment.
When the experimenter needs to operate a certain local part, the mouse is moved to the area, and the screen is automatically adjusted to the optimal local part operation visual distance. The global perspective can be returned as long as the mouse is moved outside the local area.
When the experimenter is at a local visual angle, if the mouse is moved to another local area, the screen automatically adjusts to the optimal visual distance of that local area.
When an experimenter operates the part in the local area under the local visual angle, the area where the part is located can be clicked, and the screen can be automatically adjusted to the operation visual distance of the part.
When the experimenter moves the mouse out of the component area under the component visual angle, the screen automatically readjusts to the visual angle and distance of the original local area.
When the experimenter is at the component visual angle, if the mouse is moved to a certain local area, the screen automatically zooms to the optimal operating visual angle and distance of that local area.
When the experimenter is at the visual angle of an instrument component, clicking the locking button puts the component visual angle into a locked state, and the experimenter cannot leave the current visual distance by moving the mouse out of the component area or into other local areas. Meanwhile, the current visual range can still be translated, rotated, zoomed and so on.
In steps two to four, the mouse is located in a certain local area, which may be the first local area or the second local area; when the user moves the mouse to another local area, the screen automatically adjusts to the optimal visual distance corresponding to that area.
In steps two to four, the method for judging the target area according to the mouse position and automatically adjusting the visual angle and visual distance is as follows:
the target region is a region to be determined, such as the first local region and the second local region.
During experiment editing, defining a target area needing to be operated in an experiment scene as A, setting the central position of the target area as C, setting the trigger radius of the target area as R, setting the zoom value of the target area as S, setting the operating visual distance under the target area as D, and setting the operating visual angle as E;
when the mouse enters the trigger radius R of the target area during experimental operation, the position of the mouse is defined as P and the movement vector of the mouse as vector M; the zoom value S1 of the target region is computed, and the scaled target area is A1 = S1 × A.
The intersection between vector M and A1 is judged as follows:

compute the angle θ between vector M and vector PC; if θ is greater than or equal to 90°, they do not intersect;

if θ is less than 90°, compute the included angles θ1, θ2, θ3 and θ4 between vector M and the vectors PB1, PB2, PB3 and PB4 from P to the edge points B1, B2, B3 and B4 of region A1; mark the largest of θ1 to θ4 as θmax and the smallest as θmin; when θmin ≤ θ ≤ θmax, vector M and A1 intersect; otherwise they do not.
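As a minimal illustrative sketch of this angle test (Python; the patent itself gives no code, and the helper names angle_between and movement_intersects are hypothetical), under the assumption that positions are 2D screen coordinates:

```python
import math

def angle_between(u, v):
    """Unsigned angle in degrees between 2D vectors u and v."""
    nu, nv = math.hypot(*u), math.hypot(*v)
    if nu == 0 or nv == 0:
        return 180.0  # degenerate vector: treat as "no intersection"
    cos_t = (u[0] * v[0] + u[1] * v[1]) / (nu * nv)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def movement_intersects(P, C, M, corners):
    """Angle test described above: does the movement vector M, cast from the
    mouse position P, point into the scaled area A1 with center C and edge
    points B1..B4 (corners)?"""
    PC = (C[0] - P[0], C[1] - P[1])
    theta = angle_between(M, PC)
    if theta >= 90.0:  # moving away from the center: no intersection
        return False
    thetas = [angle_between(M, (B[0] - P[0], B[1] - P[1])) for B in corners]
    return min(thetas) <= theta <= max(thetas)
```

Read this way, the test succeeds when the angle to the center lies between the smallest and largest angles to the corner points, i.e. when the movement direction falls within the cone spanned by the corners.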
if it is not
Figure BDA0004017735450000074
And A 1 Intersect, then S = S 1 Otherwise, S =0, and the real target area is A 2 =S*A;
If the mouse position
Figure BDA0004017735450000075
The mouse keeps moving without processing, and on the contrary, the calculation simulating the change process of the visual angle and the visual distance of the human is carried out.
The process simulating the change of human visual angle and visual distance is designed as follows:

if the mouse position P ∈ A2, the current visual distance is recorded as D1 and the current visual angle as E1; the adjustment progress is initialized as Q = 0 and the adjustment speed as V = 0; the visual distance to be adjusted is then D2 = D - D1 and the visual angle to be adjusted is E2 = E - E1;

whenever the length of vector M is zero, V = V + 0.001 and Q = Q + V;

the current visual distance is D3 = D1 + Q × D2 and the current visual angle is E3 = E1 + Q × E2;

when Q ≥ 1, the adjustment is finished; when P ∉ A2, the adjustment is also ended.
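A minimal sketch of this easing loop (Python; mouse_in_a2 and mouse_is_moving are hypothetical callbacks standing in for the real input state):

```python
def view_adjustment(D, E, D1, E1, mouse_in_a2, mouse_is_moving):
    """Yield a per-frame (visual distance, visual angle) pair following the
    Q/V scheme above: progress Q accelerates by V only while the mouse rests."""
    D2, E2 = D - D1, E - E1  # remaining distance and angle to adjust
    Q, V = 0.0, 0.0          # adjustment progress and its speed
    while Q < 1.0 and mouse_in_a2():
        if not mouse_is_moving():   # the movement vector has zero length
            V += 0.001
            Q = min(Q + V, 1.0)     # clamp so the view stops exactly on target
        yield D1 + Q * D2, E1 + Q * E2  # D3 and E3 for the renderer
```

Because V keeps growing while the mouse is at rest, Q advances quadratically: the transition starts slowly and accelerates, approximating the way a person settles their gaze.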
In order to improve the user experience, a reaction time needs to be set, that is, the corresponding local area is entered only after the reaction time has passed, which makes the user's operation in the virtual simulation experiment flow more smoothly. The specific method may adopt the approaches in the following embodiments:
in one embodiment, a standard reaction time is set manually, and the local area is accessed after the standard reaction time; i.e. all with the same reaction time.
In one embodiment, as a further refinement, the categories that require a reaction time may be identified, for example the different cases of entering the first local area and the second local area; a standard reaction time is set manually for each category, and the corresponding standard reaction time is matched according to the category at hand.
In one embodiment, different users have different requirements on reaction time owing to factors such as personality and proficiency, so this embodiment adjusts the reaction time dynamically on the basis of the two embodiments above to give users a personalized experience. The specific method comprises:
identifying the user's experiment historical operation duration and matching the corresponding first adjustment value according to it; acquiring the user's operation data within a period of time, where the period refers to the recent historical operation time used for user behavior analysis, a preset duration may be set manually, and the operation data is image operation data; analyzing the acquired operation data to obtain the corresponding operation correction coefficient; identifying the current network speed in real time and matching the corresponding second adjustment value according to it; acquiring the standard reaction time corresponding to each reaction time category, namely the standard reaction times of the various cases in the first two embodiments; and calculating the corresponding dynamic reaction time from the obtained standard reaction time, first adjustment value, second adjustment value and operation correction coefficient.
The experiment historical operation duration is the user's accumulated operation duration since first using the virtual simulation experiment. The range of historical operation durations that may occur is obtained by counting previous usage data; once the historical operation duration exceeds a certain length it no longer influences the first adjustment value, i.e., the first adjustment value has a bounded range. A first adjustment value curve is then set manually according to this range: several coordinate points of historical operation duration versus first adjustment value are chosen by hand, and the curve is obtained after fitting. When matching is needed, the corresponding historical operation duration is input into the first adjustment value curve to obtain the corresponding first adjustment value.
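A small sketch of such a curve lookup (Python with NumPy; the calibration points below are illustrative assumptions, not values from the patent):

```python
import numpy as np

# Hypothetical manually chosen coordinate points: accumulated operation
# hours versus first adjustment value TZ1 (more experience, smaller TZ1).
HOURS = np.array([0.0, 1.0, 5.0, 20.0, 50.0])
TZ1S = np.array([0.30, 0.22, 0.12, 0.05, 0.00])

def first_adjustment_value(history_hours: float) -> float:
    """Interpolate on the fitted curve; np.interp clamps at both ends,
    matching the saturation noted above for very long histories."""
    return float(np.interp(history_hours, HOURS, TZ1S))
```

For example, first_adjustment_value(10.0) falls between the 5-hour and 20-hour calibration points, while any input beyond 50 hours returns 0.0.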
Analyzing the acquired operation data means analyzing the operation behavior in the recent historical operation process, for example whether the operation is proficient and what the mouse trajectory within the reaction time looks like, the trajectory being used to judge impatience and the like. Specifically, an operation analysis model is built on a CNN or DNN network and trained on a manually constructed training set; after training succeeds, the model analyzes the operation data to obtain the corresponding operation correction coefficient.
The second adjustment value is matched according to the obtained network speed because the network speed has a certain influence on the reaction time of the equipment; the analysis is performed to improve precision, and the method adopted is simple to apply and yields the second adjustment value quickly. Specifically: the fluctuation interval of the network speed is obtained and divided manually into several small intervals; a corresponding second adjustment value is set for each small interval and the values are arranged into an interval matching table; the network speed obtained in real time is then matched against the table to obtain the corresponding second adjustment value.
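A sketch of the interval matching table (Python; the interval bounds and values are illustrative assumptions):

```python
# Hypothetical small intervals: (lower bound of network speed in Mbit/s,
# second adjustment value TZ2). Slower links get a larger TZ2.
SPEED_TABLE = [(0.0, 0.30), (2.0, 0.20), (10.0, 0.10), (50.0, 0.00)]

def second_adjustment_value(speed_mbps: float) -> float:
    """Return the TZ2 of the last interval whose lower bound does not
    exceed the measured speed."""
    tz2 = SPEED_TABLE[0][1]
    for lower, value in SPEED_TABLE:
        if speed_mbps >= lower:
            tz2 = value
    return tz2
```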
The method for calculating the corresponding dynamic reaction time according to the obtained standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient comprises the following steps:
marking the standard reaction time as TBi, where i = 1, 2, …, n, n is a positive integer and i denotes the corresponding reaction time category; setting an adjustment coefficient for each reaction time category, which scales the result category by category in the corresponding proportion and is set manually according to the reaction time categories (all equal to one if there is no difference between categories); marking the obtained adjustment coefficient as βi and the first adjustment value, second adjustment value and operation correction coefficient as TZ1, TZ2 and α respectively; and calculating the corresponding dynamic reaction time according to the formula TPi = TBi + βi × [α × (TZ1 + TZ2)].
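Putting the pieces together, a numerical sketch of the formula (Python; the concrete numbers are illustrative, not taken from the patent):

```python
def dynamic_reaction_time(TB: float, beta: float, alpha: float,
                          TZ1: float, TZ2: float) -> float:
    """TPi = TBi + beta_i * [alpha * (TZ1 + TZ2)] for one category i."""
    return TB + beta * (alpha * (TZ1 + TZ2))

# E.g. a 0.40 s standard time, a unit adjustment coefficient, and the
# adjustment values sketched above:
print(dynamic_reaction_time(TB=0.40, beta=1.0, alpha=1.1, TZ1=0.12, TZ2=0.10))
# ~0.642 s
```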
The above formulas are all calculated on dimensionless numerical values; each formula is the one found, by collecting a large amount of data and running software simulations, to come closest to the real situation, and the preset parameters and thresholds in the formulas are set by those skilled in the art according to the actual situation or obtained by simulating a large amount of data.
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.

Claims (8)

1. An intelligent visual angle switching method for a virtual simulation experiment is characterized by comprising the following specific steps:
step one: entering a virtual simulation experiment, acquiring a global visual angle of the experiment, and observing an overall scene of the experiment by a user through the global visual angle;
step two: identifying the position of the mouse in the global visual angle in real time, automatically positioning to a corresponding first local area according to the position of the mouse, and adjusting the optimal visual distance of the first local area;
step three: when a user clicks the area where the component is located in the first local area through a mouse, automatically adjusting the operation visual distance of the component;
step four: identifying the position of the mouse in real time under the view angle of the component, marking a second local area according to the position of the mouse, and zooming in to adjust the optimal operation view angle and the optimal operation distance entering the second local area;
step five: when a user clicks the locking button under the view angle of the component, the view angle of the component is changed into a locking state, and the user adjusts the components in the second local area under the locking state.
2. The method as claimed in claim 1, wherein in steps two to four, the first local area or second local area where the mouse is located is marked as the initial area, and when the user moves the mouse from the initial area to another first local area or second local area, the screen is automatically adjusted to the optimal visual distance corresponding to that first local area or second local area.
3. The method of claim 1, wherein in steps two to four, the method of determining the target area according to the mouse position and automatically adjusting the viewing angle and the viewing distance comprises:
defining a target area needing to be operated in an experimental scene, and recording the target area as A, the central position of the target area as C, the trigger radius of the target area as R, the zoom value of the target area as S, the operating visual distance under the target area as D, and the operating visual angle as E;
when the mouse enters the trigger radius R of the target area during experimental operation, the position of the mouse is defined as P and the movement vector of the mouse as vector M; the zoom value S1 of the target region is computed, and the scaled target area is A1 = S1 × A;

an intersection judgment between vector M and A1 is carried out: if vector M and A1 intersect, then S = S1; otherwise S = 0, and the real target area is A2 = S × A;

if the mouse position P ∉ A2, the mouse keeps moving and no processing is done; otherwise, the calculation of the process simulating the change of human visual angle and visual distance is carried out.
4. The intelligent visual angle switching method for a virtual simulation experiment of claim 3, wherein the method for judging whether vector M and A1 intersect comprises:

computing the angle θ between vector M and vector PC; if θ is greater than or equal to 90°, the two do not intersect;

if θ is less than 90°, computing the included angles θ1, θ2, θ3 and θ4 between vector M and the vectors PB1, PB2, PB3 and PB4 from P to the edge points B1, B2, B3 and B4 of region A1; the largest of θ1 to θ4 is marked θmax and the smallest θmin; when θmin ≤ θ ≤ θmax, vector M and A1 intersect; otherwise they do not.
5. The intelligent visual angle switching method for a virtual simulation experiment of claim 3, wherein the method for calculating the process simulating the change of human visual angle and visual distance comprises:

when the mouse position P ∈ A2, the current visual distance is recorded as D1 and the current visual angle as E1; the adjustment progress is initialized as Q = 0 and the adjustment speed as V = 0; the visual distance to be adjusted is D2 = D - D1 and the visual angle to be adjusted is E2 = E - E1;

whenever the length of vector M is zero, V = V + 0.001 and Q = Q + V;

the current visual distance is D3 = D1 + Q × D2 and the current visual angle is E3 = E1 + Q × E2;

when Q ≥ 1, the adjustment is finished; when P ∉ A2, the adjustment is also ended.
6. The method of claim 1, wherein a dynamic response time is set before entering the first local area or the second local area, and the method comprises:
identifying the experiment historical operation duration of a user, and matching a corresponding first adjustment value according to the obtained experiment historical operation duration; acquiring operation data of a user within a period of time, and analyzing the acquired operation data to acquire a corresponding operation correction coefficient; identifying the current network speed in real time, and matching a corresponding second adjustment value according to the obtained network speed; and acquiring standard reaction time corresponding to each reaction time category, and calculating corresponding dynamic reaction time according to the acquired standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient.
7. The method of claim 6, wherein the step of matching the first adjustment value according to the obtained experiment history operation duration comprises:
establishing a first adjustment value curve, acquiring the experimental historical operation duration needing to be matched, inputting the acquired experimental historical operation duration into the first adjustment value curve for matching, and acquiring a corresponding first adjustment value.
8. The method of claim 6, wherein the step of calculating the corresponding dynamic response time according to the obtained standard response time, the first adjustment value, the second adjustment value and the operation correction factor comprises:
marking the standard reaction time as TBi, where i = 1, 2, …, n and n is a positive integer; setting an adjustment coefficient for each reaction time category and marking the obtained adjustment coefficient as βi; marking the first adjustment value, the second adjustment value and the operation correction coefficient as TZ1, TZ2 and α respectively; and calculating the corresponding dynamic reaction time according to the formula TPi = TBi + βi × [α × (TZ1 + TZ2)].
CN202211677708.1A 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment Active CN115981514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211677708.1A CN115981514B (en) 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211677708.1A CN115981514B (en) 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment

Publications (2)

Publication Number Publication Date
CN115981514A 2023-04-18
CN115981514B 2023-10-03

Family

ID=85959220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211677708.1A Active CN115981514B (en) 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment

Country Status (1)

Country Link
CN (1) CN115981514B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010019355A1 (en) * 1997-04-21 2001-09-06 Masakazu Koyanagi Controller for photographing apparatus and photographing system
US20050195157A1 (en) * 2004-03-03 2005-09-08 Gary Kramer System for delivering and enabling interactivity with images
CN103257707A (en) * 2013-04-12 2013-08-21 中国科学院电子学研究所 Three-dimensional roaming method utilizing eye gaze tracking and conventional mouse control device
CN107562326A (en) * 2017-09-30 2018-01-09 东莞市同立方智能科技有限公司 A kind of method of the model line in virtual 3D scenes
CN107741782A (en) * 2017-09-20 2018-02-27 国网山东省电力公司泰安供电公司 A kind of equipment virtual roaming method and apparatus
CN112437286A (en) * 2020-11-23 2021-03-02 成都易瞳科技有限公司 Method for transmitting panoramic original picture video in blocks
CN112807686A (en) * 2021-01-28 2021-05-18 网易(杭州)网络有限公司 Game fighting method and device and electronic equipment
CN113506489A (en) * 2021-07-09 2021-10-15 洛阳师范学院 Virtual simulation technology-based unmanned aerial vehicle training method and device

Also Published As

Publication number Publication date
CN115981514B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
KR102014377B1 (en) Method and apparatus for surgical action recognition based on learning
CN108229442B (en) Method for rapidly and stably detecting human face in image sequence based on MS-KCF
Peng et al. A mixed bag of emotions: Model, predict, and transfer emotion distributions
CN105718878B (en) The aerial hand-written and aerial exchange method in the first visual angle based on concatenated convolutional neural network
CN106097393B (en) It is a kind of based on multiple dimensioned with adaptive updates method for tracking target
CN111931585A (en) Classroom concentration degree detection method and device
CN111931869B (en) Method and system for detecting user attention through man-machine natural interaction
CN109886356A (en) A kind of target tracking method based on three branch's neural networks
CN107146237A (en) A kind of method for tracking target learnt based on presence with estimating
CN112613384A (en) Gesture recognition method, gesture recognition device and control method of interactive display equipment
CN108229432A (en) Face calibration method and device
CN108877771B (en) Data processing method, storage medium and electronic device
CN108416800A (en) Method for tracking target and device, terminal, computer readable storage medium
CN115981514A (en) Intelligent visual angle switching method for virtual simulation experiment
Ikram et al. Real time hand gesture recognition using leap motion controller based on CNN-SVM architechture
Lin et al. A computational intelligence system for cell classification
Shitole et al. Dynamic hand gesture recognition using PCA, Pruning and ANN
Casy et al. “Stand-up straight!”: human pose estimation to evaluate postural skills during orthopedic surgery simulations
Kaulage et al. Exercise Movement Detection Using Spearman Correlation-based Sliding Window Technique
CN110275608A (en) Human eye sight method for tracing
Surasak et al. Application of Deep Learning on Student Attendance Checking in Virtual Classroom
Mao Evaluation of classroom teaching effect based on facial expression recognition
CN116757524B (en) Teacher teaching quality evaluation method and device
Sampaio et al. Development of a Computer Interface for People with Disabilities based on Computer Vision.
Goudeaux et al. Principal component analysis for facial animation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant