CN110288714B - Virtual simulation experiment system - Google Patents
Virtual simulation experiment system
- Publication number
- CN110288714B CN201910543576.5A
- Authority
- CN
- China
- Prior art keywords
- virtual
- experiment
- container
- dimensional
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention discloses a virtual simulation experiment system comprising a camera and a virtual experiment container, wherein the virtual experiment container includes an electronic chip and a display. The camera acquires data images of the objects and the operator's face in the scene; the electronic chip processes the data images and, from the positional relationship between the operator's line of sight and the virtual experiment container, calculates the two-dimensional projection, along the sight-line direction, of an experiment three-dimensional animation sequence prestored in the chip; the display shows the two-dimensional projection. By mounting the display on the experiment container body, the system solves the problem that in experimental operation the user's eyes and the container being operated point in different directions, making the operation convenient and natural, reducing the user's cognitive load, and improving the user experience.
Description
Technical Field
The invention relates to the field of virtual experiments, in particular to a virtual simulation experiment system.
Background
A virtual experiment uses simulation methods and virtuality-reality combination techniques to realize a scientific experiment. For example, an experimenter simulates an actual experimental procedure with two mock beakers, while the resulting experimental phenomena are shown on a computer.
However, during such an operation the user does not look at the experiment container being manipulated but at the computer screen, so the direction of the gesture operation is inconsistent with the position where the result is displayed. This hand-eye mismatch makes the experimenter's operation unnatural and inconvenient, degrades the user experience, and increases the user's cognitive load.
Disclosure of Invention
To solve this technical problem, the invention provides a virtual simulation experiment system that keeps the user's hands and eyes consistent during a virtual simulation experiment.
To achieve this purpose, the invention adopts the following technical scheme:
A virtual simulation experiment system comprises a camera and a virtual experiment container, the virtual experiment container including an electronic chip and a display. The camera acquires data images of the objects and the operator's face in the scene; the electronic chip processes the data images and, according to the positional relationship between the face's line of sight in the data images and the virtual experiment container, calculates the two-dimensional projection, along the sight-line direction, of the experiment three-dimensional animation sequence prestored in the electronic chip; the display shows the two-dimensional projection.
Further, differently colored marks are arranged on the surface of the virtual experiment container along the same generatrix parallel to the central axis, and the posture L of the container is obtained from the spatial positions P1 and P2 of the marks:
L=(P1-P2)/||P1-P2||。
Further, a position sensor and an attitude sensor are arranged on the virtual experiment container to verify the three-dimensional position of the container in the world coordinate system and the direction vector of its central axis in the world coordinate system, both of which are obtained by processing the data images acquired by the camera.
Further, the position sensor is an infrared sensor.
Further, a plurality of virtual experiment containers are provided, and the different containers are distinguished by their position sensors.
Further, the display is a flat display arranged on the top of the virtual experiment container and a ring-screen display arranged on its side.
Further, two cameras are provided, and the electronic chip analyzes the data images acquired by both cameras with a binocular reconstruction algorithm;
the binocular reconstruction algorithm acquires the images of the two cameras, calibrates them to obtain the cameras' extrinsic and intrinsic parameters, extracts features from the images, establishes a correspondence between the two images from the extracted features so that the imaging points of the same physical point are matched across the two images, and constructs the three-dimensional scene information from the matching result combined with the extrinsic and intrinsic parameters;
the camera external parameter determines the relative position relation between the camera coordinate and the world coordinate system, and the camera internal parameter determines the projection relation of the camera from a three-dimensional space to a two-dimensional image;
the features include feature points, feature lines, and feature regions.
Further, the image features are extracted with a scale-invariant feature transform (SIFT) algorithm, which comprises the following steps:
Extremum detection in scale space: search the image over all scales and identify, with a difference-of-Gaussians function, potential interest points that are invariant to scale and orientation;
Feature point localization: at each candidate location, determine the location and scale by fitting a fine model, and select key points according to their stability;
Feature direction assignment: assign one or more orientations to each key point location based on the local gradient directions of the image;
Feature point description: measure the local image gradients at the selected scale in a neighborhood around each feature point, and transform them into a representation that tolerates local shape deformation and illumination changes.
Further, the construction of the three-dimensional animation sequence comprises scene modeling and motion modeling, wherein the scene modeling establishes a three-dimensional model of the contained object according to the shape of the real experiment container, the contained object, its volume, and the posture L; the motion modeling establishes different motion models according to different substances and their different motion characteristics.
Further, the detection algorithm for the human face line of sight is as follows: perform face detection on the data image, crop the ROI areas of the left and right eyes from the detected face image, and perform eyeball-center detection and tracking on the cropped ROIs.
The invention has the beneficial effects that:
the invention provides a virtual simulation experiment system, wherein a display is arranged on the structure of an experiment container body, so that the problem that the directions of the eyes of a user and the operated experiment container are inconsistent in experiment operation is solved, the operation process of the user is convenient and natural, the cognitive load of the user is reduced, and the user experience is improved.
Drawings
FIG. 1 is a schematic diagram of a virtual experiment container according to an embodiment of the present invention.
In the figure: 1, electronic chip; 2, display; 3, position sensor; 4, attitude sensor.
Detailed Description
To clearly explain the technical features of the present invention, the invention is described in detail below with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure, the components and arrangements of specific examples are described below. The invention may repeat reference numerals and/or letters in the various examples; this repetition is for simplicity and clarity and does not in itself dictate a relationship between the embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components, processing techniques, and processes are omitted so as not to unnecessarily obscure the invention.
An embodiment of the invention provides a virtual simulation experiment system comprising a camera and a virtual experiment container. The structure of the virtual experiment container is shown schematically in figure 1: it comprises an electronic chip 1 and a display 2. The camera acquires data images of the objects and the operator's face in the scene; the electronic chip 1 processes the data images and, according to the positional relationship between the face's line of sight in the data images and the virtual experiment container, calculates the two-dimensional projection, along the sight-line direction, of the experiment three-dimensional animation sequence prestored in the electronic chip 1; the display 2 shows the two-dimensional projection. The output of the electronic chip is sent, wired or wirelessly, for dynamic presentation on the display. The display can be part of the virtual experiment container body or connected to the body as a separate unit. It can be a ring screen, or a display device on the side, top, or bottom of the experiment container; in this embodiment, a flat display is arranged on the top of the cylindrical experiment container and a ring-screen display on its side. The body of the virtual experiment container is made of a common material such as plastic.
The virtual experiment container is also provided with a position sensor 3 and an attitude sensor 4 to verify the spatial position and posture of the container obtained by processing the data images acquired by the camera. The spatial position is the three-dimensional position of the container in the world coordinate system; the posture is the direction vector of the container's central axis in the world coordinate system.
The position sensor is preferably an infrared sensor.
The algorithm of the virtual simulation experiment system can run on the electronic chip 1, or on a computer connected in a wired or wireless manner, with the computed result returned to the virtual experiment container for presentation.
The algorithm in this embodiment is as follows:
step 1: sensing spatial position and posture of virtual experiment container
Differently colored marks are arranged on the surface of the virtual experiment container along the same generatrix parallel to the central axis, and the posture L of the container is obtained from the spatial positions P1 and P2 of the marks:
L=(P1-P2)/||P1-P2||。
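As a minimal numeric sketch of this normalization (the function name and the marker coordinates below are illustrative, not from the patent):

```python
import numpy as np

def container_posture(p1, p2):
    # Posture L of the container: unit vector along the axis through the
    # two colored markers, L = (P1 - P2) / ||P1 - P2||.
    d = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    n = np.linalg.norm(d)
    if n == 0.0:
        raise ValueError("marker positions coincide; posture is undefined")
    return d / n

# Markers 10 cm apart along the vertical axis: posture is the unit z vector.
L = container_posture([0.0, 0.0, 0.6], [0.0, 0.0, 0.5])
```

Because L is normalized, only the axis direction matters; the markers' spacing drops out.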
the electronic chip analyzes and processes data images acquired by the two cameras through a binocular reconstruction algorithm, and calculates the spatial position of the virtual simulation experiment container.
The binocular reconstruction algorithm mainly comprises the following steps: the method comprises the steps of obtaining images of two cameras, calibrating and obtaining external parameters and internal parameters of the cameras, extracting features in the images, establishing a corresponding relation between the two images according to the extracted features, enabling imaging points of a same physical space point in the two images to be correspondingly matched, and combining a matching result with the external parameters and the internal parameters of the cameras to construct three-dimensional scene information.
The parameters that govern how a real object is projected and imaged differ between camera models, so the cameras' extrinsic and intrinsic parameters must be obtained by calibration. The extrinsic parameters determine the relative position of the camera coordinate system with respect to the world coordinate system; the intrinsic parameters determine the camera's projection from three-dimensional space to the two-dimensional image.
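The role of the two parameter sets can be illustrated with the pinhole model: the extrinsics (R, t) map a world point into the camera frame, and the intrinsic matrix K projects it onto the image. A schematic numpy sketch with made-up parameter values:

```python
import numpy as np

# Intrinsic matrix K: focal lengths and principal point (illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: a camera at the world origin looking down the world z-axis.
R = np.eye(3)
t = np.zeros(3)

def project(X):
    # World point -> camera frame (extrinsics) -> pixels (intrinsics).
    Xc = R @ np.asarray(X, dtype=float) + t
    u, v, w = K @ Xc
    return np.array([u / w, v / w])

# A point 2 m straight ahead projects to the principal point (320, 240).
px = project([0.0, 0.0, 2.0])
```

Binocular reconstruction inverts this relation: with two calibrated cameras, matched image points constrain the world point at the intersection of the two back-projected rays.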
This embodiment adopts Zhang's calibration method: a chessboard is used as the calibration object, images are captured with the chessboard in multiple orientations to obtain coordinate information, and the camera's intrinsic and extrinsic parameters are solved from these correspondences.
The extracted features mainly include feature points, feature lines, and feature regions. Algorithms such as SIFT (scale-invariant feature transform), SURF (speeded-up robust features, an improvement on SIFT), and ORB may be adopted; this embodiment uses the SIFT algorithm, which is robust to rotation, scale, and perspective changes.
SIFT has four main steps:
Extremum detection in scale space: the image is searched over all scales, and potential interest points invariant to scale and orientation are identified with a difference-of-Gaussians function.
Feature point localization: at each candidate location, the location and scale are determined by fitting a fine model, and key points are selected according to their stability.
Feature direction assignment: one or more orientations are assigned to each key point location based on the local gradient directions of the image; all subsequent operations are performed relative to the orientation, scale, and location of the key points, providing invariance of the features.
Feature point description: local image gradients are measured at the selected scale in a neighborhood around each feature point and transformed into a representation that tolerates relatively large local shape deformation and illumination changes.
The corresponding matching of imaging points uses a stereo matching algorithm from OpenCV, including but not limited to BM and SGBM; this embodiment selects SGBM (semi-global block matching).
The spatial position can also be obtained directly with an infrared sensing device such as a Kinect, by calling the functions provided in the SDK development kit that ships with the device. Camera frames are obtained with OpenCV, and real-time object recognition can be achieved with a model trained on a dataset using Google's TensorFlow.
Step 2: Render the three-dimensional animation sequence corresponding to the contents of the real experiment container
Tool software such as OpenGL, AutoCAD, Maya, or Unity is used to generate the three-dimensional animation of the contained object. Given the contained object M in the experiment container, its volume V, and its motion parameters S, the three-dimensional animation sequence is constructed in combination with the posture L of the experiment container; M, V, and S can be preset manually or provided by other application programs.
The construction of the three-dimensional animation sequence comprises scene modeling and motion modeling. The scene modeling establishes a three-dimensional model of the contained object according to the shape of the real experiment container, the contained object M, the volume V, and the posture L: M determines physical characteristics such as the color of the contained object, V determines its relative size, and L determines its local shape (for example, if the contained object is a liquid, the top surface of the model is an oblique section of the experiment container). The motion modeling establishes different motion models for different substances and different motion characteristics, for example a liquid rotation model, a liquid vibration model, and a solid rolling model; a motion model driven by the motion parameters S is constructed from existing known models.
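The dependence of the liquid's top surface on the posture L can be made concrete: when the container tilts, the surface stays horizontal in the world frame, so its inclination relative to the container's cross-section equals the angle between L and the world vertical. A small numpy sketch (names and values are illustrative):

```python
import numpy as np

def surface_tilt_angle(L):
    # Angle in degrees between the container's central axis L (unit vector)
    # and the world vertical; this is also the inclination of the liquid's
    # top surface relative to the container's cross-section.
    up = np.array([0.0, 0.0, 1.0])
    c = np.clip(np.dot(np.asarray(L, dtype=float), up), -1.0, 1.0)
    return np.degrees(np.arccos(c))

# Container tilted 30 degrees about the y-axis: the model's top surface
# becomes a 30-degree oblique section of the cylinder.
L_tilted = [np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))]
angle = surface_tilt_angle(L_tilted)
```

An upright container (L along the vertical) gives 0 degrees, i.e. a horizontal circular top surface.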
Step 3: Calculate the gaze direction of the human eyes
A data image is read from the camera and face detection is performed; ROI (region of interest) areas for the left and right eyes are cropped from the detected face image, and eyeball-center detection and tracking are then performed on the cropped ROIs. The algorithm therefore comprises three parts: face detection, ROI cropping, and eyeball-center localization, implemented with the cascade classifiers integrated in OpenCV.
Step 4: Photograph the three-dimensional animation sequence in the sight-line direction
In general, the direction of the human viewpoint does not change, while the virtual experiment container in the user's hand moves with the hand. The moving scene therefore needs to be "photographed". The specific method is as follows:
Acquire the camera projection parameters with an existing camera calibration method;
project the three-dimensional animation sequence along the sight-line direction in the camera coordinate system to obtain a two-dimensional animation sequence.
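These two steps amount to placing a virtual camera on the sight line and projecting the animation's vertices through it. A schematic numpy sketch (the look-at construction and parameter values are assumptions, not from the patent):

```python
import numpy as np

def look_at(eye, target):
    # Rotation of a virtual camera at `eye` looking toward `target`
    # (camera z-axis along the sight line).
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)
    r = np.cross(f, [0.0, 0.0, 1.0])
    if np.linalg.norm(r) < 1e-9:          # sight line is vertical
        r = np.array([1.0, 0.0, 0.0])
    r /= np.linalg.norm(r)
    u = np.cross(r, f)
    return np.stack([r, u, f])            # rows: right, up, forward

def project_points(points, eye, target, focal=500.0, center=(160.0, 120.0)):
    # Project 3-D animation vertices along the sight-line direction.
    R = look_at(eye, target)
    out = []
    for p in points:
        x, y, z = R @ (np.asarray(p, float) - eye)
        out.append((focal * x / z + center[0], focal * y / z + center[1]))
    return np.array(out)

# A vertex lying on the sight line projects to the image center.
uv = project_points([[0.0, 1.0, 0.0]], eye=[0.0, 0.0, 0.0],
                    target=[0.0, 1.0, 0.0])
```

Applying this projection to every frame of the animation yields the two-dimensional sequence shown on the container's display.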
Step 5: Display the photographing result on the display
The two-dimensional animation sequence is displayed in real time on the display of the virtual experiment container.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, the scope of the invention is not limited thereto. Various modifications and alterations will occur to those skilled in the art based on the foregoing description; it is neither necessary nor possible to exhaust all embodiments here. Modifications or variations that a person skilled in the art can make without creative effort on the basis of the technical scheme of the invention remain within its scope of protection.
Claims (7)
1. A virtual simulation experiment system, characterized by comprising a camera and a virtual experiment container, wherein the virtual experiment container comprises an electronic chip and a display, the camera acquires data images of objects and the operator's face in a scene, the electronic chip processes the data images and calculates, according to the position relation between the face's line of sight in the data images and the virtual experiment container, a two-dimensional projection, along the sight-line direction, of an experiment three-dimensional animation sequence prestored in the electronic chip, and the display displays the two-dimensional projection;
the surface of the virtual experiment container is respectively provided with different color marks along the same generatrix parallel to the central axis, and the posture L of the virtual simulation experiment container is obtained by obtaining the spatial positions P1 and P2 of the different marks:
L=(P1-P2)/||P1-P2||;
the virtual experiment container is further provided with a position sensor and an attitude sensor for verifying the three-dimensional position of the virtual experiment container in a world coordinate system and the direction vector of its central axis in the world coordinate system, both obtained by processing the data images acquired by the camera;
the electronic chip analyzes and processes data images acquired by the two cameras through a binocular reconstruction algorithm;
the binocular reconstruction algorithm acquires images of the two cameras, calibrates the images to acquire external parameters and internal parameters of the cameras, extracts features in the images, establishes a corresponding relation between the two images according to the extracted features, enables imaging points of the same physical space point in the two images to be correspondingly matched, and combines the external parameters and the internal parameters of the cameras to construct three-dimensional scene information according to matching results;
the camera extrinsic parameters determine the relative position relationship between the camera coordinate system and the world coordinate system, and the camera intrinsic parameters determine the projection relationship of the camera from three-dimensional space to the two-dimensional image;
the features include feature points, feature lines, and feature regions.
2. The virtual simulation experiment system of claim 1, wherein the position sensor is an infrared sensor.
3. The virtual simulation experiment system of claim 2, wherein a plurality of virtual experiment containers are provided, and different virtual experiment containers are identified by the position sensor.
4. The virtual simulation experiment system of claim 1, wherein the display is a flat panel display arranged on the top of the virtual experiment container and a ring-screen display arranged on the side.
5. The virtual simulation experiment system of claim 1, wherein the image features are extracted by a scale-invariant feature transform algorithm, the scale-invariant feature transform algorithm comprising:
extremum detection in scale space: searching the image over all scales and identifying, with a difference-of-Gaussians function, potential interest points that are invariant to scale and orientation;
feature point localization: determining the location and scale at each candidate position by fitting a fine model, and selecting key points according to their stability;
feature direction assignment: assigning one or more orientations to each key point location based on the local gradient directions of the image;
feature point description: measuring local image gradients at the selected scale in a neighborhood around each feature point, and transforming them into a representation that tolerates local shape deformation and illumination changes.
6. The virtual simulation experiment system of claim 1, wherein the construction of the three-dimensional animation sequence comprises scene modeling and motion modeling, the scene modeling establishing a three-dimensional model of the contained object according to the shape of the real experiment container, the contained object, its volume, and the posture L; the motion modeling establishing different motion models according to different substances and their different motion characteristics.
7. The virtual simulation experiment system according to claim 1, wherein the detection algorithm of the human face sight line is as follows: and carrying out face detection on the data image, intercepting ROI areas of the left eye and the right eye according to the detected face image, and carrying out eyeball center detection and tracking according to the intercepted ROI.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910543576.5A CN110288714B (en) | 2019-06-21 | 2019-06-21 | Virtual simulation experiment system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910543576.5A CN110288714B (en) | 2019-06-21 | 2019-06-21 | Virtual simulation experiment system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288714A CN110288714A (en) | 2019-09-27 |
CN110288714B true CN110288714B (en) | 2022-11-04 |
Family
ID=68005315
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910543576.5A Active CN110288714B (en) | 2019-06-21 | 2019-06-21 | Virtual simulation experiment system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288714B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111275731B (en) * | 2020-01-10 | 2023-08-18 | 杭州师范大学 | Projection type physical interaction desktop system and method for middle school experiments |
CN113724360A (en) * | 2021-08-25 | 2021-11-30 | 济南大学 | Virtual-real simulation method for taking experimental materials |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109407547A (en) * | 2018-09-28 | 2019-03-01 | 合肥学院 | Multi-cam assemblage on-orbit test method and system towards panoramic vision perception |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7768527B2 (en) * | 2006-05-31 | 2010-08-03 | Beihang University | Hardware-in-the-loop simulation system and method for computer vision |
2019
- 2019-06-21 CN CN201910543576.5A patent/CN110288714B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109407547A (en) * | 2018-09-28 | 2019-03-01 | 合肥学院 | Multi-cam assemblage on-orbit test method and system towards panoramic vision perception |
Non-Patent Citations (1)
Title |
---|
Simulation research on a virtual visual scene image system for naval vessel combat at sea; Wang Ang; Computer Simulation (《计算机仿真》); 2016-04-15 (No. 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN110288714A (en) | 2019-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104937635B (en) | More hypothesis target tracking devices based on model | |
WO2020010979A1 (en) | Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand | |
CN107111833B (en) | Fast 3D model adaptation and anthropometry | |
CN111243093B (en) | Three-dimensional face grid generation method, device, equipment and storage medium | |
JP7453470B2 (en) | 3D reconstruction and related interactions, measurement methods and related devices and equipment | |
CN102812416B (en) | Pointing input device, indicative input method, program, recording medium and integrated circuit | |
CN107484428B (en) | Method for displaying objects | |
US20180321776A1 (en) | Method for acting on augmented reality virtual objects | |
JP2019510297A (en) | Virtual try-on to the user's true human body model | |
CN106355153A (en) | Virtual object display method, device and system based on augmented reality | |
CN111783820A (en) | Image annotation method and device | |
WO2019035155A1 (en) | Image processing system, image processing method, and program | |
TW201709718A (en) | Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product | |
CN109978931A (en) | Method for reconstructing three-dimensional scene and equipment, storage medium | |
WO2016029939A1 (en) | Method and system for determining at least one image feature in at least one image | |
JP7026825B2 (en) | Image processing methods and devices, electronic devices and storage media | |
WO2018075053A1 (en) | Object pose based on matching 2.5d depth information to 3d information | |
CN113822977A (en) | Image rendering method, device, equipment and storage medium | |
JP2020008972A (en) | Information processor, information processing method, and program | |
JP2019096113A (en) | Processing device, method and program relating to keypoint data | |
CN109271023B (en) | Selection method based on three-dimensional object outline free-hand gesture action expression | |
US20200057778A1 (en) | Depth image pose search with a bootstrapped-created database | |
CN108305321B (en) | Three-dimensional human hand 3D skeleton model real-time reconstruction method and device based on binocular color imaging system | |
CN109949900B (en) | Three-dimensional pulse wave display method and device, computer equipment and storage medium | |
CN111553284A (en) | Face image processing method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||