CN115981514B - Intelligent visual angle switching method for virtual simulation experiment - Google Patents

Intelligent visual angle switching method for virtual simulation experiment

Info

Publication number: CN115981514B
Application number: CN202211677708.1A
Authority: CN (China)
Prior art keywords: mouse, angle, area, adjustment value, local area
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115981514A
Inventors: 王晓蒲, 刘和伟
Current Assignee and Original Assignee: Anhui Keda Aorui Technology Co ltd
Application filed by Anhui Keda Aorui Technology Co ltd; priority to CN202211677708.1A; published as CN115981514A, granted as CN115981514B


Abstract

The invention discloses an intelligent viewing angle switching method for virtual simulation experiments, which belongs to the technical field of viewing angle switching and comprises the following specific steps: step one: enter the virtual simulation experiment and obtain the global view of the experiment, through which the user observes the whole experimental scene; step two: identify the mouse position within the global view in real time, automatically locate the corresponding first local area according to the mouse position, and adjust to the optimal viewing distance of the first local area; step three: when the user clicks, with the mouse, the area where a component is located within the first local area, automatically adjust to the operating viewing distance of the component; step four: identify the mouse position in real time under the component view, mark the second local area according to the mouse position, and pull the view up to the optimal operating angle and distance for entering the second local area; step five: when the user clicks the lock button under the component view, the component view enters the locked state, and the user adjusts the component within the second local area in the locked state.

Description

Intelligent visual angle switching method for virtual simulation experiment
Technical Field
The invention belongs to the technical field of visual angle switching, and particularly relates to an intelligent visual angle switching method for a virtual simulation experiment.
Background
In practice, people must continually change their viewing angle and viewing distance during all kinds of activities, adjusting between the whole and the local, the far and the near, to complete their work, and this is even more true of scientific experiments. Ten years ago, when 2D technology was used, local operation of an instrument required a popup window to present the local operation and observation of the experiment. In complicated experiments, popup windows were even nested inside popup windows; the endless popups increased the complexity of the experiment and deviated from its real environment.
With the advent of 3D technology, simulation experiments developed in 3D require repeated combinations of the keyboard, the left mouse button and the right mouse button to adjust the viewing angle and viewing distance whenever a local part of an instrument needs to be operated, in order to complete details of experimental operation such as observation and reading. This screen manipulation is too cumbersome and takes far more time than the experimental operation itself. An experimenter unfamiliar with the manipulation may fail to complete the experiment, and the operation and content of the experiment are diluted. To solve this problem, the present invention provides an intelligent viewing angle switching method for virtual simulation experiments.
Disclosure of Invention
In order to solve the above problems, the invention provides an intelligent viewing angle switching method for a virtual simulation experiment.
The aim of the invention can be achieved by the following technical scheme:
The intelligent viewing angle switching method for a virtual simulation experiment comprises the following specific steps:
step one: enter the virtual simulation experiment and obtain the global view of the experiment, through which the user observes the whole experimental scene;
step two: identify the mouse position within the global view in real time, automatically locate the corresponding first local area according to the mouse position, and adjust to the optimal viewing distance of the first local area;
step three: when the user clicks, with the mouse, the area where a component is located within the first local area, automatically adjust to the operating viewing distance of the component;
step four: identify the mouse position in real time under the component view, mark the second local area according to the mouse position, and pull the view up to the optimal operating angle and distance for entering the second local area;
step five: when the user clicks the lock button under the component view, the component view enters the locked state, and the user adjusts the component within the second local area in the locked state.
Further, in the second to fourth steps, the first local area or the second local area where the mouse is located is marked as the initial area; when the user moves the mouse from the initial area to another first local area or second local area, the screen is automatically adjusted to the optimal viewing distance corresponding to that first local area or second local area.
Further, in the second to fourth steps, the method for judging the target area and automatically adjusting the viewing angle and viewing distance according to the mouse position comprises:
defining a target area A to be operated in the experimental scene, wherein the center position of the target area is C, its trigger radius is R, its scaling value is S, the operating viewing distance under the target area is D, and the operating viewing angle is E;
when the mouse enters the trigger radius R of the target area during experimental operation, defining the mouse position as P and the mouse movement vector as v, with a corresponding scaling value S1, at which time the scaled target area is A1 = S1 × A;
performing an intersection judgment between v and A1: if v and A1 intersect, then S = S1, otherwise S = 0; the true target area is A2 = S × A;
if the mouse position P ∉ A2, the mouse keeps moving and no processing is performed; otherwise, the calculation simulating the change process of human viewing angle and viewing distance is performed.
Further, the intersection judgment method between v and A1 comprises:
calculating the included angle θ between v and the vector PC from the mouse position P to the area center C; if θ ≥ 90°, they do not intersect;
if θ < 90°, calculating the included angles θ1, θ2, θ3, θ4 between v and the vectors PB1, PB2, PB3, PB4 from P to the edge points B1, B2, B3, B4 of area A1; the largest of these angles is recorded as θmax and the smallest as θmin; when θmin ≤ θ ≤ θmax, v and A1 intersect, otherwise they do not intersect.
Further, the method for performing the calculation simulating the change process of human viewing angle and viewing distance comprises:
when the mouse position P ∈ A2, recording the current viewing distance as D1 and the current viewing angle as E1, and initializing the adjustment progress Q = 0 and the adjustment speed V = 0; the viewing distance to be adjusted is D2 = D - D1 and the viewing angle to be adjusted is E2 = E - E1;
on each step while the mouse remains within A2, V = V + 0.001 and Q = Q + V;
the current viewing distance is D3 = D1 + Q × D2 and the current viewing angle is E3 = E1 + Q × E2;
when Q ≥ 1, the adjustment is finished; when the mouse position P ∉ A2, the adjustment is also ended.
Further, the method comprises setting a dynamic reaction time before entering the first local area or the second local area:
identifying the user's historical experiment operation duration and matching the corresponding first adjustment value according to it; acquiring the user's operation data within a recent period and analyzing it to obtain the corresponding operation correction coefficient; identifying the current network speed in real time and matching the corresponding second adjustment value according to it; and obtaining the standard reaction time corresponding to each reaction time category and calculating the corresponding dynamic reaction time from the standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient.
Further, the method for matching the corresponding first adjustment value according to the obtained experimental historical operation duration comprises the following steps:
establishing a first adjustment value curve, acquiring experimental historical operation time length required to be matched, inputting the acquired experimental historical operation time length into the first adjustment value curve for matching, and acquiring a corresponding first adjustment value.
Further, the method for calculating the corresponding dynamic reaction time according to the obtained standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient comprises:
the standard reaction time is recorded as TBi, where i = 1, 2, …, n and n is a positive integer; an adjustment coefficient is set for each reaction time category and the obtained adjustment coefficient is recorded as βi; the first adjustment value, the second adjustment value and the operation correction coefficient are recorded as TZ1, TZ2 and α respectively; and the corresponding dynamic reaction time is calculated according to the formula TPi = TBi + βi × [α × (TZ1 + TZ2)].
Compared with the prior art, the invention has the beneficial effects that:
In the process of operating a simulation experiment, the invention expresses the area a person is observing through the movement and pointing of the mouse, simulates the adjustment and change of human viewing angle and viewing distance, and lets the computer automatically enter the optimal viewing angle and distance. This eliminates the popup-window presentation of local operations used in existing 2D and 3D experiments and avoids a large amount of non-experimental screen manipulation, so that all effort is focused on the operation and study of the experimental content. The whole experiment becomes very smooth and much closer to the real behavioral process.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in the embodiments or the description of the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings can be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in FIG. 1, the intelligent viewing angle switching method for a virtual simulation experiment comprises the following specific steps:
step one: enter the virtual simulation experiment and obtain the global view of the experiment, through which the user observes the whole experimental scene;
step two: identify the mouse position within the global view in real time, automatically locate the corresponding first local area according to the mouse position, and adjust to the optimal viewing distance of the first local area;
step three: when the user clicks, with the mouse, the area where a component is located within the first local area, automatically adjust to the operating viewing distance of the component;
step four: identify the mouse position in real time under the component view, mark the second local area according to the mouse position, and pull the view up to the optimal operating angle and distance for entering the second local area;
step five: when the user clicks the lock button under the component view, the component view enters the locked state. In this state, moving the mouse out of the component area or into another local area does not leave the current viewing distance; the lock can only be released by clicking the unlock button. The user adjusts the component within the second local area in the locked state, i.e. the current view accepts adjustment operations such as translation, rotation and scaling.
Illustratively, when an experiment is opened, the global view is entered, under which the overall scene of the experiment can be observed.
When the experimenter needs to operate a certain part, the experimenter moves the mouse into that area, and the screen automatically adjusts to the optimal local operating viewing distance. Moving the mouse outside the local area returns to the global view.
When the experimenter is at a local view, if the mouse is moved to another local area, the screen automatically adjusts to the optimal viewing distance of that area.
When the experimenter operates a part within the local area under the local view, the experimenter clicks the area where the component is located, and the screen automatically adjusts to the operating viewing distance of the component.
When the experimenter is under the component view, if the mouse is moved outside the component area, the view automatically adjusts back to the viewing angle and distance of the original local area.
When the experimenter is under the component view, if the mouse is moved into some local area, the screen automatically pulls the view to the optimal operating angle and distance for entering that local area.
When the experimenter clicks the lock button under the instrument component view, the component view enters the locked state, and the experimenter cannot leave the current viewing distance by moving the mouse outside the component area or into another local area. Meanwhile, operations such as translation, rotation and scaling can be performed on the current view.
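The transitions between global view, local view, component view and the locked state described above can be summarized as a small state machine; the `View` enum and the event names below are hypothetical, introduced only to illustrate the behavior:

```python
from enum import Enum

class View(Enum):
    GLOBAL = "global"
    LOCAL = "local"
    COMPONENT = "component"
    LOCKED = "locked"

# Transitions driven by mouse events; any pair not listed leaves the
# view unchanged, so the locked state ignores everything except unlock.
TRANSITIONS = {
    (View.GLOBAL, "enter_local"): View.LOCAL,
    (View.LOCAL, "leave_local"): View.GLOBAL,
    (View.LOCAL, "click_component"): View.COMPONENT,
    (View.COMPONENT, "leave_component"): View.LOCAL,
    (View.COMPONENT, "enter_local"): View.LOCAL,
    (View.COMPONENT, "lock"): View.LOCKED,
    (View.LOCKED, "unlock"): View.COMPONENT,
}

def next_view(current, event):
    """Return the view after an event, per the rules sketched above."""
    return TRANSITIONS.get((current, event), current)
```

In particular, `next_view(View.LOCKED, "leave_component")` returns `View.LOCKED`, reflecting that moving the mouse away cannot leave a locked component view.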
In the second to fourth steps, the mouse is located in a certain local area, which may be the first local area or the second local area; when the user moves the mouse to another local area, the screen automatically adjusts to the optimal viewing distance of the corresponding area.
In the second to fourth steps, the method for judging the target area and automatically adjusting the viewing angle and the viewing distance according to the position of the mouse comprises the following steps:
the target area is the first local area, the second local area and the like which need to be determined.
During experiment editing, a target area A to be operated is defined in the experimental scene, wherein the center position of the target area is C, its trigger radius is R, its scaling value is S, the operating viewing distance under the target area is D, and the operating viewing angle is E.
When the mouse enters the trigger radius R of the target area during experimental operation, the mouse position is defined as P and the mouse movement vector as v, with a corresponding scaling value S1; at this time the scaled target area is A1 = S1 × A.
The intersection judgment between v and A1 is calculated as follows:
calculate the included angle θ between v and the vector PC from the mouse position P to the area center C; if θ ≥ 90°, they do not intersect;
if θ < 90°, calculate the included angles θ1, θ2, θ3, θ4 between v and the vectors PB1, PB2, PB3, PB4 from P to the edge points B1, B2, B3, B4 of area A1; the largest of these angles is recorded as θmax and the smallest as θmin; when θmin ≤ θ ≤ θmax, v and A1 intersect, otherwise they do not intersect.
If v and A1 intersect, then S = S1; otherwise S = 0. The true target area is A2 = S × A.
If the mouse position P ∉ A2, the mouse keeps moving and no processing is performed; otherwise, the calculation simulating the change process of human viewing angle and viewing distance is entered.
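The intersection judgment can be sketched in Python. This is a minimal sketch under the assumptions that the edge points are the four corners of a rectangular area and that θ1 to θ4 are measured between the movement vector and the vectors from P to those corners; the function names are illustrative, not from the patent:

```python
import math

def angle_between(u, v):
    """Angle in degrees between 2D vectors u and v."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    if nu == 0 or nv == 0:
        return 0.0
    # Clamp to [-1, 1] to guard against floating-point rounding.
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(c))

def movement_intersects_area(p, v, c, edge_points):
    """Judge whether the mouse movement vector v, taken at position p,
    intersects the scaled area with center c and edge points B1..B4."""
    pc = (c[0] - p[0], c[1] - p[1])
    theta = angle_between(v, pc)
    if theta >= 90:  # moving away from the area: no intersection
        return False
    angles = [angle_between((b[0] - p[0], b[1] - p[1]), v)
              for b in edge_points]
    # Intersect when theta lies between the smallest and largest angle.
    return min(angles) <= theta <= max(angles)
```

For example, with P at the origin and the area centered at (10, 0) with corners (8, ±2) and (12, ±2), a movement toward the corner (8, 2) is judged as intersecting, while a movement at right angles to the center direction is not.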
The design method simulating the change process of human viewing angle and viewing distance is as follows:
when the mouse position P ∈ A2, record the current viewing distance as D1 and the current viewing angle as E1, and initialize the adjustment progress Q = 0 and the adjustment speed V = 0; the viewing distance to be adjusted is D2 = D - D1 and the viewing angle to be adjusted is E2 = E - E1;
on each step while the mouse remains within A2, V = V + 0.001 and Q = Q + V;
the current viewing distance is D3 = D1 + Q × D2 and the current viewing angle is E3 = E1 + Q × E2;
when Q ≥ 1, the adjustment is finished;
when the mouse position P ∉ A2, the adjustment is also ended.
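The accelerating adjustment can be sketched as follows; `in_area` stands for the check that the mouse is still within A2, and all names are illustrative assumptions rather than the patent's own identifiers:

```python
def adjust_view(d1, e1, d, e, in_area, max_steps=10000):
    """Interpolate viewing distance/angle from (d1, e1) toward (d, e):
    speed V grows by 0.001 per step, progress Q accumulates V, and the
    adjustment ends when Q reaches 1 or the mouse leaves the area."""
    d2, e2 = d - d1, e - e1          # remaining distance and angle
    q, v = 0.0, 0.0
    d3, e3 = d1, e1
    for _ in range(max_steps):
        if not in_area():            # mouse left A2: end the adjustment
            break
        v += 0.001
        q = min(q + v, 1.0)          # clamp progress at completion
        d3, e3 = d1 + q * d2, e1 + q * e2
        if q >= 1.0:
            break
    return d3, e3
```

Because Q grows quadratically with the step count, the motion eases in: starting at distance 10 and angle 0 toward distance 4 and angle 30, `adjust_view(10.0, 0.0, 4.0, 30.0, lambda: True)` reaches (4.0, 30.0) after roughly 45 steps.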
In order to improve the user experience, a reaction time needs to be set; that is, the corresponding local area is entered only after the reaction time has elapsed, which makes the user's operation flow in the virtual simulation experiment smoother. The specific method may be any of the following:
In one embodiment, a standard reaction time is set manually, and the local area is entered after the standard reaction time, i.e. the same reaction time is used in all cases.
In one embodiment, for further refinement, the categories of required reaction time are distinguished; for example, the different situations of entering the first local area and the second local area are set manually, and the standard reaction time corresponding to each category is matched according to that category.
In one embodiment, different users have different requirements on reaction time owing to factors such as personality and proficiency, so this embodiment adjusts dynamically on the basis of the above two embodiments to realize a personalized user experience. The specific method comprises:
identifying the user's historical experiment operation duration and matching the corresponding first adjustment value according to it; acquiring the user's operation data within a recent period, where the period refers to the most recent span of historical operation time used for user behavior analysis and is set manually as a preset duration; analyzing the acquired operation data to obtain the corresponding operation correction coefficient; identifying the current network speed in real time and matching the corresponding second adjustment value according to it; obtaining the standard reaction time corresponding to each reaction time category, namely the standard reaction times for the situations in the first two embodiments; and calculating the corresponding dynamic reaction time from the standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient.
The historical experiment operation duration is the accumulated duration for which the user has operated virtual simulation experiments since first use. The possible range of this duration is obtained by statistics over previous usage data; once the duration exceeds a certain bound, the first adjustment value is no longer affected, i.e. the first adjustment value lies within a bounded range. A first adjustment value curve is set manually according to this range: several coordinate points pairing historical operation durations with first adjustment values are set by manual simulation, and a curve is fitted through them. When matching is needed, the historical operation duration is input into the curve to obtain the corresponding first adjustment value.
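The curve matching can be sketched as a piecewise-linear lookup over manually set calibration points; the concrete durations and values below are hypothetical placeholders, not taken from the patent:

```python
import bisect

# Hypothetical calibration points: (historical operation duration in
# hours, first adjustment value). The value plateaus beyond a bound.
CURVE = [(0.0, 0.30), (5.0, 0.20), (20.0, 0.10), (50.0, 0.05), (100.0, 0.05)]

def first_adjustment(duration_hours):
    """Match a historical operation duration against the fitted curve."""
    xs = [x for x, _ in CURVE]
    ys = [y for _, y in CURVE]
    if duration_hours <= xs[0]:
        return ys[0]
    if duration_hours >= xs[-1]:
        return ys[-1]                # beyond the bound the value is constant
    i = bisect.bisect_right(xs, duration_hours)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    # Linear interpolation between the two surrounding calibration points.
    return y0 + (y1 - y0) * (duration_hours - x0) / (x1 - x0)
```

A duration outside the calibrated range clamps to the nearest endpoint, matching the statement that very long histories stop influencing the first adjustment value.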
Analyzing the acquired operation data means analyzing whether the operation behavior in the recent history is skilled, whether the operation is impatient, the mouse trajectory within the reaction time, and so on. Specifically, a corresponding operation analysis model is built on a CNN or DNN network and trained manually; after training succeeds, the operation correction coefficient corresponding to the operation data is obtained through analysis by the model.
The second adjustment value is matched according to the obtained network speed because network speed has a certain influence on the response time of the device. To improve analysis precision while keeping the analysis simple and fast, the method is as follows: obtain the fluctuation interval of the network speed, divide it manually into several cells, and set a corresponding second adjustment value for each cell; these may further be arranged into an interval matching table, and the network speed obtained in real time is matched against the table to obtain the corresponding second adjustment value.
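The interval matching table translates directly into a lookup; the speed cells and second adjustment values below are hypothetical examples, on the assumption that slower networks warrant a larger adjustment:

```python
# Hypothetical interval matching table: (upper bound of network speed
# in Mbps, second adjustment value TZ2).
SPEED_TABLE = [(2.0, 0.40), (10.0, 0.25), (50.0, 0.10), (float("inf"), 0.00)]

def second_adjustment(speed_mbps):
    """Match the real-time network speed to its cell and return TZ2."""
    for upper, value in SPEED_TABLE:
        if speed_mbps <= upper:
            return value
    return 0.0
```

The table is scanned in order of increasing upper bound, so each speed falls into exactly one cell.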
The method for calculating the corresponding dynamic reaction time according to the obtained standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient comprises:
the standard reaction time is recorded as TBi, where i = 1, 2, …, n and n is a positive integer, i denoting each reaction time category; an adjustment coefficient is set for each reaction time category, where the adjustment coefficients scale each category by its corresponding proportion, are set manually per category, and are all set to one if no distinction is required; the obtained adjustment coefficient is recorded as βi, the first adjustment value, the second adjustment value and the operation correction coefficient are recorded as TZ1, TZ2 and α respectively, and the corresponding dynamic reaction time is calculated according to the formula TPi = TBi + βi × [α × (TZ1 + TZ2)].
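The formula TPi = TBi + βi × [α × (TZ1 + TZ2)] translates directly into code; the numeric inputs in the example are illustrative only:

```python
def dynamic_reaction_times(tb, beta, alpha, tz1, tz2):
    """Compute TPi = TBi + beta_i * [alpha * (TZ1 + TZ2)] for every
    reaction-time category i."""
    shared = alpha * (tz1 + tz2)     # common adjustment term for all i
    return [tb_i + beta_i * shared for tb_i, beta_i in zip(tb, beta)]
```

For example, with standard reaction times [0.5, 0.8] seconds, adjustment coefficients [1.0, 0.5], α = 1.2, TZ1 = 0.1 and TZ2 = 0.2, the dynamic reaction times come out to approximately [0.86, 0.98] seconds.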
The above formulas are all dimensionless formulas computed on numerical values; they are obtained by acquiring a large amount of data and performing software simulation to approach the actual situation as closely as possible. The preset parameters and preset thresholds in the formulas are set by a person skilled in the art according to the actual situation or obtained by simulation over a large amount of data.
The above embodiments are only for illustrating the technical method of the present invention and not for limiting the same, and it should be understood by those skilled in the art that the technical method of the present invention may be modified or substituted without departing from the spirit and scope of the technical method of the present invention.

Claims (5)

1. An intelligent viewing angle switching method for a virtual simulation experiment, characterized by comprising the following specific steps:
step one: enter the virtual simulation experiment and obtain the global view of the experiment, through which the user observes the whole experimental scene;
step two: identify the mouse position within the global view in real time, automatically locate the corresponding first local area according to the mouse position, and adjust to the optimal viewing distance of the first local area;
step three: when the user clicks, with the mouse, the area where a component is located within the first local area, automatically adjust to the operating viewing distance of the component;
step four: identify the mouse position in real time under the component view, mark the second local area according to the mouse position, and pull the view up to the optimal operating angle and distance for entering the second local area;
step five: when the user clicks the lock button under the component view, the component view enters the locked state, and the user adjusts the component within the second local area in the locked state;
in the second to fourth steps, the method for judging the target area and automatically adjusting the viewing angle and viewing distance according to the mouse position comprises:
defining a target area A to be operated in the experimental scene, wherein the center position of the target area is C, its trigger radius is R, its scaling value is S, the operating viewing distance under the target area is D, and the operating viewing angle is E;
when the mouse enters the trigger radius R of the target area during experimental operation, defining the mouse position as P and the mouse movement vector as v, with a corresponding scaling value S1, at which time the scaled target area is A1 = S1 × A;
performing an intersection judgment between v and A1: if v and A1 intersect, then S = S1, otherwise S = 0; the true target area is A2 = S × A;
if the mouse position P ∉ A2, the mouse keeps moving and no processing is performed; otherwise, the calculation simulating the change process of human viewing angle and viewing distance is performed;
the intersection judgment method between v and A1 comprises:
calculating the included angle θ between v and the vector PC from the mouse position P to the area center C; if θ ≥ 90°, they do not intersect;
if θ < 90°, calculating the included angles θ1, θ2, θ3, θ4 between v and the vectors PB1, PB2, PB3, PB4 from P to the edge points B1, B2, B3, B4 of area A1; the largest of these angles is recorded as θmax and the smallest as θmin; when θmin ≤ θ ≤ θmax, v and A1 intersect, otherwise they do not intersect;
the method for performing the calculation simulating the change process of human viewing angle and viewing distance comprises:
when the mouse position P ∈ A2, recording the current viewing distance as D1 and the current viewing angle as E1, and initializing the adjustment progress Q = 0 and the adjustment speed V = 0; the viewing distance to be adjusted is D2 = D - D1 and the viewing angle to be adjusted is E2 = E - E1;
on each step while the mouse remains within A2, V = V + 0.001 and Q = Q + V;
the current viewing distance is D3 = D1 + Q × D2 and the current viewing angle is E3 = E1 + Q × E2;
when Q ≥ 1, the adjustment is finished; when the mouse position P ∉ A2, the adjustment is also ended.
2. The intelligent viewing angle switching method for virtual simulation experiments according to claim 1, wherein in the second to fourth steps, the first local area or the second local area where the mouse is located is marked as the initial area, and when the user moves the mouse from the initial area to another first local area or second local area, the screen is automatically adjusted to the optimal viewing distance corresponding to that first local area or second local area.
3. The intelligent viewing angle switching method for virtual simulation experiments according to claim 1, wherein a dynamic reaction time is set before entering the first local area or the second local area, the method comprising:
identifying the historical operation time length of the user experiment, and matching the corresponding first adjustment value according to the obtained historical operation time length of the experiment; acquiring operation data of a user within a period of time, and analyzing the acquired operation data to acquire a corresponding operation correction coefficient; identifying the current network speed in real time, and matching a corresponding second adjustment value according to the obtained network speed; and obtaining standard reaction time corresponding to each reaction time category, and calculating corresponding dynamic reaction time according to the obtained standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient.
4. The intelligent view angle switching method for virtual simulation experiments according to claim 3, wherein the method of matching the corresponding first adjustment value according to the obtained experimental history operation duration comprises:
establishing a first adjustment value curve, acquiring experimental historical operation time length required to be matched, inputting the acquired experimental historical operation time length into the first adjustment value curve for matching, and acquiring a corresponding first adjustment value.
5. The intelligent viewing angle switching method for virtual simulation experiments according to claim 3, wherein the method for calculating the corresponding dynamic reaction time according to the obtained standard reaction time, the first adjustment value, the second adjustment value and the operation correction coefficient comprises:
the standard reaction time is recorded as TBi, where i = 1, 2, …, n and n is a positive integer; an adjustment coefficient is set for each reaction time category and the obtained adjustment coefficient is recorded as βi; the first adjustment value, the second adjustment value and the operation correction coefficient are recorded as TZ1, TZ2 and α respectively; and the corresponding dynamic reaction time is calculated according to the formula TPi = TBi + βi × [α × (TZ1 + TZ2)].
CN202211677708.1A 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment Active CN115981514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211677708.1A CN115981514B (en) 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment

Publications (2)

Publication Number Publication Date
CN115981514A CN115981514A (en) 2023-04-18
CN115981514B CN115981514B (en) 2023-10-03

Family

ID=85959220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211677708.1A Active CN115981514B (en) 2022-12-26 2022-12-26 Intelligent visual angle switching method for virtual simulation experiment

Country Status (1)

Country Link
CN (1) CN115981514B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257707A (en) * 2013-04-12 2013-08-21 中国科学院电子学研究所 Three-dimensional roaming method utilizing eye gaze tracking and conventional mouse control device
CN107562326A (en) * 2017-09-30 2018-01-09 东莞市同立方智能科技有限公司 A kind of method of the model line in virtual 3D scenes
CN107741782A (en) * 2017-09-20 2018-02-27 国网山东省电力公司泰安供电公司 A kind of equipment virtual roaming method and apparatus
CN112437286A (en) * 2020-11-23 2021-03-02 成都易瞳科技有限公司 Method for transmitting panoramic original picture video in blocks
CN112807686A (en) * 2021-01-28 2021-05-18 网易(杭州)网络有限公司 Game fighting method and device and electronic equipment
CN113506489A (en) * 2021-07-09 2021-10-15 洛阳师范学院 Virtual simulation technology-based unmanned aerial vehicle training method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4332231B2 (en) * 1997-04-21 2009-09-16 ソニー株式会社 Imaging device controller and imaging system
US7542050B2 (en) * 2004-03-03 2009-06-02 Virtual Iris Studios, Inc. System for delivering and enabling interactivity with images

Similar Documents

Publication Publication Date Title
CN101561710B (en) Man-machine interaction method based on estimation of human face posture
CN106097393B (en) It is a kind of based on multiple dimensioned with adaptive updates method for tracking target
CN108171770A (en) A kind of human face expression edit methods based on production confrontation network
CN107150347A (en) Robot perception and understanding method based on man-machine collaboration
CN108230383A (en) Hand three-dimensional data determines method, apparatus and electronic equipment
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN110688965A (en) IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN109375765A (en) Eyeball tracking exchange method and device
CN107146237A (en) A kind of method for tracking target learnt based on presence with estimating
CN105426882A (en) Method for rapidly positioning human eyes in human face image
CN107247466B (en) Robot head gesture control method and system
CN115981514B (en) Intelligent visual angle switching method for virtual simulation experiment
CN108416800A (en) Method for tracking target and device, terminal, computer readable storage medium
CN107894834A (en) Gesture identification method and system are controlled under augmented reality environment
CN110598719A (en) Method for automatically generating face image according to visual attribute description
CN109886091A (en) Three-dimensional face expression recognition methods based on Weight part curl mode
CN116052264B (en) Sight estimation method and device based on nonlinear deviation calibration
CN107193384B (en) Switching method of mouse and keyboard simulation behaviors based on Kinect color image
CN109753922A (en) Anthropomorphic robot expression recognition method based on dense convolutional neural networks
CN108111868A (en) A kind of constant method for secret protection of expression based on MMDA
CN106934339A (en) A kind of target following, the extracting method of tracking target distinguishing feature and device
CN112990105B (en) Method and device for evaluating user, electronic equipment and storage medium
Huang et al. Real-time precise human-computer interaction system based on gaze estimation and tracking
JPWO2016021152A1 (en) Posture estimation method and posture estimation apparatus
Sugiharti et al. Convolutional neural Network-XGBoost for accuracy enhancement of breast cancer detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant