CN115875652A - VR equipment and adjusting method thereof, doctor operating table and surgical robot system - Google Patents

Publication number: CN115875652A
Application number: CN202211418141.6A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: image, barrel, distance, user, eyes
Other languages: Chinese (zh)
Inventors: (name withheld at the inventor's request), 朱祥, 何超
Current and original assignee: Shanghai Weimi Medical Instrument Co., Ltd.
Application filed by Shanghai Weimi Medical Instrument Co., Ltd.
Priority to CN202211418141.6A

Abstract

The specification provides a VR device and an adjustment method thereof, a doctor operating table, and a surgical robot system. The method comprises: acquiring a first distance, relative to the display screen, of the virtual image corresponding to a target object in the next frame of image to be displayed; determining a second distance, relative to the display screen, of the convergence plane of the user's binocular lines of sight at the current moment; and, when the difference between the first distance and the second distance reaches a predetermined threshold, adjusting the pose of an optical imaging assembly in the VR device, so that the convergence plane of the user's binocular lines of sight at the next moment moves toward the position of the virtual image corresponding to the target object in the next frame of image to be displayed, while the distance between the convergence planes at the next moment and at the current moment remains less than or equal to a predetermined distance threshold. This scheme lets the user perceive 3D depth while alleviating the vertigo caused by frequent, large vergence adjustments.

Description

VR equipment and adjusting method thereof, doctor operating table and surgical robot system
Technical Field
The specification relates to the technical field of medical instruments, and in particular to a VR device, an adjustment method for the VR device, a doctor operating table, and a surgical robot system.
Background
With two eyes, we observe the same object from slightly different angles: because the eyes are separated by a certain distance (the interpupillary distance), the positions of the object seen by the two eyes are slightly offset, which is commonly called parallax. Owing to parallax, after the two object images seen by the left and right eyes are transmitted to the brain, analysis and synthesis produce a single, complete object image with 3D stereoscopic depth.
When a system based on the VR principle displays a 3D image, the left eye and the right eye view different images of the same scene, which produces parallax; the brain then fuses the two images, achieving stereoscopic display. However, in this process each eye focuses, through its lens, on the plane of the display screen. After fusion in the brain, there is a depth difference between the perceived three-dimensional object and the display screen, so when the convergence plane of the eyes does not coincide with the display screen, the eyes feel uncomfortable and must perform vergence rotation; if vergence rotation happens frequently, dizziness easily occurs.
Disclosure of Invention
The embodiments of the present application aim to provide a VR device, an adjustment method for the VR device, a doctor operating table, and a surgical robot system, so as to solve the problem that a user easily feels dizzy when watching 3D images through a VR device.
A first aspect of the present specification provides a VR device adjustment method, comprising: acquiring a first distance, relative to the display screen, of the virtual image corresponding to a target object in the next frame of image to be displayed; determining a second distance, relative to the display screen, of the convergence plane of the user's binocular lines of sight at the current moment; and, when the difference between the first distance and the second distance reaches a predetermined threshold, adjusting the pose of an optical imaging assembly in the VR device, so that the convergence plane of the user's binocular lines of sight at the next moment moves toward the position of the virtual image corresponding to the target object in the next frame of image to be displayed, while the distance between the convergence planes at the next moment and at the current moment remains less than or equal to a predetermined distance threshold.
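The three steps of the first aspect amount to a per-frame control step: compare the two distances, and when they differ enough, move the convergence plane toward the virtual image by at most the predetermined distance threshold. The following is a minimal illustrative sketch, not the patented implementation; the function name and parameters are assumptions, and signed distances (positive in front of the display screen, as the description later suggests) are used:

```python
import math

def adjust_vr_device(first_distance: float,
                     second_distance: float,
                     diff_threshold: float,
                     max_step: float) -> float:
    """Return the signed distance the convergence plane should move.

    first_distance:  signed distance of the target object's virtual image
                     from the display screen (positive = in front of it).
    second_distance: signed distance of the current convergence plane
                     from the display screen.
    diff_threshold:  predetermined threshold that triggers an adjustment.
    max_step:        predetermined distance threshold limiting how far the
                     convergence plane may move between two moments.
    """
    diff = first_distance - second_distance
    if abs(diff) < diff_threshold:
        return 0.0  # difference too small: no pose adjustment needed
    # Move toward the virtual image, but never by more than max_step,
    # so vergence changes stay small between consecutive frames.
    return math.copysign(min(abs(diff), max_step), diff)
```

The returned value would then be converted into a pose command for the optical imaging assembly, as the detailed description explains later.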
In some embodiments, before determining the second distance of the convergence plane of the user's binocular lines of sight relative to the display screen at the current moment, the method comprises: determining the convergence plane of the user's binocular lines of sight at the current moment.
In some embodiments, determining the convergence plane of the user's binocular lines of sight at the current moment comprises: while laser light irradiates a scanning galvanometer and is reflected by the galvanometer into the human eyes, adjusting the reflection angle of the galvanometer so that the reflected laser moves across the retina while images of both retinas are acquired; determining the gaze directions of the user's left and right eyes from the retinal images corresponding to at least two reflection angles; determining the position of the user's fixation point corresponding to each reflection angle from the corresponding left-eye and right-eye gaze directions; and determining the convergence plane of the user's binocular lines of sight at the current moment from the fixation-point positions corresponding to the reflection angles.
In some embodiments, adjusting the pose of the optical imaging assembly in the VR device comprises: adjusting the included angle between a first lens barrel corresponding to the left eye and a second lens barrel corresponding to the right eye. The VR device comprises an objective lens group, a left eyepiece group arranged in the first lens barrel, and a right eyepiece group arranged in the second lens barrel.
In some embodiments, adjusting the included angle between the first lens barrel and the second lens barrel comprises: adjusting the depth to which the tip of a wedge block on the VR device is inserted between the first lens barrel and the second lens barrel. The wedge block is located between the two barrels and presses against both of them.
In some embodiments, adjusting the depth to which the tip of the wedge block is inserted between the first lens barrel and the second lens barrel comprises: adjusting that depth by means of a lead screw, which is arranged at the end of the wedge block corresponding to the tip.
A second aspect of the present specification provides a VR device adjusting apparatus, comprising: an acquisition unit for acquiring a first distance, relative to the display screen, of the virtual image corresponding to a target object in the next frame of image to be displayed; a first determination unit for determining a second distance, relative to the display screen, of the convergence plane of the user's binocular lines of sight at the current moment; and an adjustment unit for adjusting the pose of the optical imaging assembly in the VR device when the difference between the first distance and the second distance reaches a predetermined threshold, so that the convergence plane of the user's binocular lines of sight at the next moment moves toward the position of the virtual image corresponding to the target object in the next frame of image, while the distance between the convergence planes at the next moment and the current moment is less than or equal to the predetermined distance threshold.
In some embodiments, the apparatus further comprises a second determination unit for determining the convergence plane of the user's binocular lines of sight at the current moment.
In some embodiments, the second determination unit comprises: a first adjusting subunit for adjusting, while laser light irradiates the scanning galvanometer and is reflected by it into the human eyes, the reflection angle of the galvanometer so that the reflected laser moves across the retina while images of both retinas are acquired; a first determining subunit for determining the gaze directions of the user's left and right eyes from the retinal images corresponding to at least two reflection angles; a second determining subunit for determining the position of the user's fixation point corresponding to each reflection angle from the corresponding left-eye and right-eye gaze directions; and a third determining subunit for determining the convergence plane of the user's binocular lines of sight at the current moment from the fixation-point positions corresponding to the reflection angles.
In some embodiments, the adjustment unit comprises a second adjusting subunit for adjusting the included angle between the first lens barrel corresponding to the left eye and the second lens barrel corresponding to the right eye; the VR device comprises an objective lens group, a left eyepiece group arranged in the first lens barrel, and a right eyepiece group arranged in the second lens barrel.
In some embodiments, the second adjusting subunit comprises a third adjusting subunit for adjusting the depth to which the tip of a wedge block on the VR device is inserted between the first lens barrel and the second lens barrel; the wedge block is located between the two barrels and presses against both of them.
In some embodiments, the third adjusting subunit comprises a fourth adjusting subunit for adjusting, by means of a lead screw, the depth to which the tip of the wedge block is inserted between the first lens barrel and the second lens barrel; the lead screw is arranged at the end of the wedge block corresponding to the tip.
A third aspect of the present specification provides a VR device comprising: a display screen for displaying an image; an objective lens group for adjusting imaging quality; an eyepiece group for magnifying the image formed by the objective lens group; and a controller for performing the VR device adjustment method of any one of the embodiments of the first aspect.
In some embodiments, the eyepiece group comprises a left eyepiece group and a right eyepiece group; the left eyepiece group corresponds to the left eye and is arranged in the first lens barrel, the right eyepiece group corresponds to the right eye and is arranged in the second lens barrel, and the included angle between the first lens barrel and the second lens barrel is adjustable.
In some embodiments, a wedge block is disposed between the first lens barrel and the second lens barrel; the wedge block presses against both barrels, and the depth to which its tip is inserted between them is adjustable.
In some embodiments, the VR device further comprises: a scanning mirror disposed between the objective lens group and the eyepiece group; a laser for emitting laser light toward the scanning mirror along a preset direction; a driver for controlling the scanning mirror to adjust the included angle between its reflecting surface and the incident laser; and an image sensor for sensing the retinal image reflected by the scanning mirror. The controller is further configured to send control instructions to the driver and to receive the images sensed by the image sensor.
A fourth aspect of the specification provides a doctor operating table comprising the VR device of any one of the embodiments of the third aspect.
A fifth aspect of the present specification provides a surgical robot system comprising: a surgical robot, which carries an image acquisition assembly to acquire images of the target surgical site and carries surgical instruments to perform surgical operations; an image trolley, which processes the images acquired by the image acquisition assembly into a 3D image and feeds it back to the doctor operating table; and the doctor operating table, which displays the 3D image, senses the doctor's operation intention, and sends control instructions to the surgical robot accordingly; wherein the doctor operating table comprises the VR device of any one of the embodiments of the third aspect.
A sixth aspect of the present specification provides a computer storage medium having computer program instructions stored thereon which, when executed, implement the steps of the method of any one of the embodiments of the first aspect.
In the VR device and adjustment method thereof, the doctor operating table, and the surgical robot system provided in this specification, when the difference between the first distance (of the virtual image corresponding to the target object in the next frame of image, relative to the display screen) and the second distance (of the convergence plane of the user's binocular lines of sight at the current moment, relative to the display screen) reaches a predetermined threshold, the pose of an optical imaging assembly in the VR device is adjusted so that the convergence plane of the user's binocular lines of sight at the next moment moves toward the position of the virtual image corresponding to the target object in the next frame of image, and the user can perceive 3D depth. At the same time, the distance between the convergence planes at the next moment and at the current moment is kept less than or equal to the predetermined distance threshold, so the eyes only need small vergence adjustments rather than large ones, which alleviates the vertigo caused by frequent, large vergence adjustments.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic view showing the convergence adjustment relationship when the human eye observes an object;
FIG. 2 is a schematic view showing the convergence conflict when the human eyes watch a 3D image on a display screen;
FIG. 3 shows a flow chart of a VR device adjustment method provided herein;
FIG. 4 is a schematic diagram illustrating the use of a scanning galvanometer to determine the convergence plane of the user's lines of sight for both eyes;
FIG. 5 shows a schematic view of adjusting the distance between different lenses to adjust the convergence plane of the user's binocular lines of sight;
FIG. 6 is a schematic view showing adjustment of the angle between the left and right ocular groups;
fig. 7 shows a schematic diagram of the side of a VR device close to the two eyes;
FIG. 8 shows a schematic view of a wedge block;
fig. 9 shows a schematic side view of a VR device;
FIG. 10 illustrates a functional block diagram of a VR device adjusting apparatus provided herein;
FIG. 11 illustrates a schematic diagram of a VR device as provided herein;
FIG. 12 shows a schematic view of a surgical robotic system;
FIG. 13 shows a schematic structural view of a doctor's station;
FIG. 14 shows a schematic view of the structure of the image trolley;
fig. 15 shows a schematic structural view of the surgical robot;
fig. 16 shows a schematic structural diagram of the controller provided in the present specification.
Detailed Description
To make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort shall fall within the protection scope of the present application.
Convergence: the eyes rotate inward or outward to adjust the positions of the two object images so that the brain can merge them into a single object image.
Focusing (accommodation): the eyes automatically adjust their focal length according to the distance of an object so that its image falls clearly on the retina, allowing people to see the world clearly.
Convergence-accommodation conflict: under normal conditions, when people look at objects in the real world, the convergence and focusing functions of the eyes coordinate with each other. Convergence directs both lines of sight to the same object, and focusing accommodates to the same distance. Over time, the brain becomes accustomed to the rule that the lines of sight and the focus are always at the same location. When watching a 3D movie, however, the distance between the viewer and the screen is constant, so the focal distance cannot change. The focusing function therefore cannot follow the convergence function to the same distance as it does in daily life. This breaks the long-established rule and separates the positions of convergence and focus; this is the convergence-accommodation conflict.
As shown in fig. 1, when observing an object at distance X, the human eyes automatically adjust so that the object at distance X falls exactly on the retina as a clear image. The angle a between the two lines of sight is then the convergence angle, and the pair a-X is the normal convergence-accommodation relationship in the real world. When the eyes look at objects at different distances, the values of a and X correspond one to one, and the brain is accustomed to this correspondence.
As shown in fig. 2, in VR the objects are displayed on a screen at distance Y. When the eyes look at objects at different depths in the VR image, the convergence angle a changes constantly, but the focal point of the crystalline lens remains at distance Y. This breaks the brain's established a-X correspondence and causes convergence conflict, which in turn causes some degree of vertigo. The vertigo is usually not very pronounced, but it is aggravated if vergence adjustments are performed frequently.
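The a-X relationship in figs. 1 and 2 is simple triangle geometry: for an interpupillary distance IPD and a fixation point at distance X straight ahead, the convergence angle is a = 2·arctan(IPD / 2X). A small sketch of that relationship (the 63 mm IPD is just a typical illustrative value, not a figure from the patent):

```python
import math

def convergence_angle_deg(ipd_mm: float, distance_mm: float) -> float:
    """Convergence angle a (degrees) for eyes with interpupillary
    distance ipd_mm fixating a point distance_mm straight ahead."""
    return math.degrees(2 * math.atan(ipd_mm / (2 * distance_mm)))

# A 63 mm IPD fixating at 500 mm gives roughly a 7.2 degree convergence
# angle; the same eyes fixating at 2 m give roughly 1.8 degrees.
```

This is why a changes constantly as gaze shifts between depths: the one-to-one a-X mapping the brain has learned assigns a different angle to every distance.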
In order to solve the problem that a user easily feels dizzy when watching a 3D image through a VR device, the present specification provides a VR device adjustment method, as shown in fig. 3, including the following steps:
s10: and acquiring a first distance of a virtual image corresponding to the target object in the next frame of image to be displayed relative to the display screen.
The virtual image is an object image formed by reverse extension lines of light rays entering human eyes. Under the condition that the positions of the imaging devices on the VR equipment are not changed, after light rays of objects displayed on the display screen enter human eyes, the object images seen by the human eyes are virtual images actually. Fig. 5 and 6 show the position of the virtual image.
Before step S10, position parameter information of the display screen in the VR device may be acquired, and the position parameter information may be a distance between the display screen and the objective lens, spatial coordinates of a center point of the display screen, or the like.
In some embodiments, the images displayed in the VR device may be frames of a pre-recorded video, so that after determining the position parameter information of the display screen, a first distance of a virtual image corresponding to the target object in each frame of image relative to the display screen may be calculated in advance before displaying the first frame of image.
In some embodiments, the image displayed in the VR device may be an image acquired in real time, and after each frame of image is acquired, a first distance between a virtual image corresponding to the target object in the image and the display screen may be calculated according to the position parameter information of the display screen, and then the image is displayed. The image may be acquired by an image acquisition unit such as an endoscope mounted on the surgical robot in fig. 12.
The "virtual image corresponding to the target object" in an image refers to the image formed, when that image is displayed, by light from the target object converging on the retina through the optical imaging assembly of the VR device, located along the backward extensions of the rays that converge on the retina. In other words, step S10 estimates in advance the first distance, relative to the display screen, of the position at which the human eye will perceive the target object when the image is displayed.
The target object in the image refers to an object of interest to the user. In the pre-recorded video, the target object may be a preset object desired to be focused on by the user. In the image acquired in real time, the target object may be an object recognized by a target recognition method.
The distance between the virtual image corresponding to the target object and the display screen may be signed: positive when the virtual image is in front of the display screen and negative when it is behind it. This distinguishes cases in which virtual images are equally far from the screen but on opposite sides of it.
S20: determine the second distance, relative to the display screen, of the convergence plane of the user's binocular lines of sight at the current moment.
When a user watches images on the display screen, the position of the eyes' convergence plane can be adjusted by adjusting the focal length of the eyes. When the displayed picture is a 3D picture, the position of the virtual image may change while the focal length of the eyes remains unchanged. For example, as shown in fig. 5, the virtual image position coincides with the eyes' convergence plane at the current moment, while at the next moment, as shown in fig. 6, the virtual image may lie between the eyes and their convergence plane.
In this application, the convergence plane of the human eye is also the focal plane of the user.
In some embodiments, the convergence plane of the user's binocular lines of sight at the current moment may be determined before step S20 by the following steps:
S41: while laser light irradiates the scanning galvanometer and is reflected by it into the human eyes, adjust the reflection angle of the galvanometer so that the reflected laser moves across the retina, and simultaneously capture images of both retinas.
S42: determine the gaze directions of the user's left and right eyes from the retinal images corresponding to at least two reflection angles.
S43: determine the position of the user's fixation point corresponding to each reflection angle from the corresponding left-eye and right-eye gaze directions.
S44: determine the convergence plane of the user's binocular lines of sight at the current moment from the fixation-point positions corresponding to the reflection angles.
Fig. 4 is a schematic diagram of using a scanning galvanometer to determine the convergence plane of the user's binocular lines of sight, where A is a laser, B is the scanning galvanometer, E is the pupil and crystalline lens of the user's eye, W is the retina, C is an image sensor for acquiring images of the retina, and D is a processor. Laser light emitted by laser A is reflected by scanning galvanometer B into the human eye and irradiates retina W through pupil and crystalline lens E; light on retina W is reflected back to image sensor C for exposure, and processor D performs image recognition and analysis on the images captured by image sensor C to obtain the gaze directions of the user's left and right eyes. Scanning galvanometer B is driven by driver F to adjust its reflection angle. The direction of the laser light emitted by laser A may remain constant.
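Once the left-eye and right-eye gaze directions are known (steps S42 and S43), the fixation point is where the two gaze rays come closest to each other. The patent does not specify a numerical method, so the following is a hedged sketch using the standard least-squares midpoint of the common perpendicular between two rays; all names are illustrative:

```python
import numpy as np

def fixation_point(p_left, d_left, p_right, d_right):
    """Closest point between two gaze rays p + t*d (3-D numpy vectors).

    Computes the midpoint of the common perpendicular of the two
    (possibly skew) gaze lines via least squares.
    """
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    # Solve [d_l, -d_r] [t, s]^T ~= p_r - p_l in the least-squares sense.
    a = np.stack([d_left, -d_right], axis=1)
    (t, s), *_ = np.linalg.lstsq(a, p_right - p_left, rcond=None)
    return (p_left + t * d_left + p_right + s * d_right) / 2
```

Repeating this for each galvanometer reflection angle gives the set of fixation points from which the convergence plane of step S44 can be fitted.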
The second distance between the convergence plane of the user's binocular lines of sight and the display screen may likewise be signed: positive in front of the display screen and negative behind it, distinguishing convergence planes that are equally far from the screen but on opposite sides of it.
S30: when the difference between the first distance and the second distance reaches the predetermined threshold, adjust the pose of the optical imaging assembly in the VR device so that the convergence plane of the user's binocular lines of sight at the next moment moves toward the position of the virtual image corresponding to the target object in the next frame of image to be displayed, while the distance between the convergence planes at the next moment and the current moment is less than or equal to the predetermined distance threshold.
As shown in fig. 6, when the distance between the virtual image position and the eyes' convergence plane reaches the predetermined threshold, S30 adjusts the pose of the optical imaging assembly in the VR device so that the eyes' convergence plane moves toward the virtual image position.
The adjusted optical imaging component may be the eyepiece in fig. 6, the individual lenses of the objective lens group in fig. 6 (by adjusting their positions and postures), or an imaging plate arranged in the objective lens group.
Because different optical imaging components play different roles in the VR display, their adjustment modes and directions differ and can be determined according to the function of the specific component.
While the position of the optical imaging assembly is adjusted to move the eyes' convergence plane, the virtual image position may move in synchronization.
In some embodiments, the predetermined threshold may be fixed, i.e. the predetermined thresholds used for different vergence angle ranges are all the same.
In some embodiments, different predetermined thresholds may be set corresponding to different convergence angle ranges, for example, the predetermined threshold is X at a convergence angle of 50 ° to 60 °, the predetermined threshold is Y at a convergence angle of 30 ° to 40 °, and the predetermined threshold is Z at a convergence angle of 20 ° to 30 °.
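A range-dependent threshold of this kind can be read as a simple lookup table. A sketch with placeholder numbers standing in for X, Y, and Z (the patent gives no concrete values, so every number here is an assumption for illustration):

```python
def predetermined_threshold(convergence_angle_deg: float) -> float:
    """Look up the predetermined threshold for the current convergence
    angle.  Ranges mirror the example in the text; the threshold values
    3.0/5.0/8.0 are placeholders for X, Y, and Z."""
    table = [((50.0, 60.0), 3.0),   # X
             ((30.0, 40.0), 5.0),   # Y
             ((20.0, 30.0), 8.0)]   # Z
    for (lo, hi), threshold in table:
        if lo <= convergence_angle_deg < hi:
            return threshold
    return 5.0  # fallback for angles outside the listed ranges
```

Smaller thresholds at large convergence angles (near objects) would make the device react sooner where vergence changes are felt most, but the actual tuning is left open by the text.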
In some embodiments, S30 may adjust the convergence plane of the user's eyes by focusing; for example, the distance between lenses can be adjusted to achieve a focusing effect. As shown in fig. 5, a focusing plate may be provided in the objective lens group, and focusing performed by adjusting the position of the focusing plate.
In some embodiments, the optical imaging assembly in the VR device comprises an objective lens group for adjusting imaging quality (e.g., light intensity, sharpness, chromatic aberration, aberration), a left eyepiece group corresponding to the left eye and disposed in the first lens barrel, and a right eyepiece group corresponding to the right eye and disposed in the second lens barrel; the left and right eyepiece groups magnify the image output by the objective lens group so that the image formed on the retina is large enough to be resolved. S30 may then adjust the included angle between the first lens barrel and the second lens barrel, thereby adjusting the angle between the left and right eyepiece groups; the user's two lines of sight are adjusted along with the eyepiece groups. That is, adjusting the first and second lens barrels essentially adjusts the angle between the two lines of sight, i.e., the convergence angle. Fig. 6 shows a schematic diagram of adjusting the angle between the left and right eyepiece groups.
Fig. 7 shows the side of a VR device close to the two eyes. The left concentric circle 3 represents the left eyepiece barrel and the right concentric circle 6 the right eyepiece barrel. A first lead screw 1 controls the left eyepiece barrel's movement in the left-right direction, and a second lead screw 2 its movement in the up-down direction; a third lead screw 4 controls the right eyepiece barrel's movement in the up-down direction, and a fourth lead screw 5 its movement in the left-right direction. Item 7 is a wedge block with a lead screw: the tip of the wedge block is pressed between the two eyepiece barrels, and the end of the wedge block corresponding to the tip carries a fifth lead screw 8. The depth to which the wedge block is inserted between the two eyepiece barrels, and hence the included angle between them, can be adjusted through the fifth lead screw 8. The fifth lead screw 8 can be turned manually or automatically by a motor. Fig. 8 shows a schematic side view of the wedge block, and fig. 9 a schematic side view of the VR device. The fifth lead screw 8 shown in figs. 7 and 9 can be provided with a wrench fitting suitable for manual operation, or it can be connected directly to a driver and controlled by it.
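The wedge-and-screw mechanism turns the linear motion of the fifth lead screw into a change of the included angle between the barrels. Under a simple rigid-body assumption (symmetric wedge, barrels pivoting about a hinge a fixed distance behind the contact point — geometry the patent leaves unspecified), the relationship can be sketched as:

```python
import math

def barrel_angle_deg(insert_depth_mm: float,
                     wedge_half_angle_deg: float,
                     pivot_distance_mm: float) -> float:
    """Included angle between the two eyepiece barrels produced by
    pushing a symmetric wedge to insert_depth_mm between them.

    The wedge opens each barrel by insert_depth * tan(half_angle) at
    the contact point, and each barrel pivots about a hinge
    pivot_distance_mm behind that point.
    """
    half_opening = insert_depth_mm * math.tan(
        math.radians(wedge_half_angle_deg))
    return math.degrees(2 * math.atan(half_opening / pivot_distance_mm))
```

Because the angle grows smoothly and monotonically with insertion depth, the fifth lead screw gives fine, repeatable control over the convergence angle, whether driven by hand or by a motor.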
In S40, before the pose of the optical imaging assembly in the VR device is adjusted, the distance between the virtual image corresponding to the target object in the next frame image to be displayed and the convergence plane of the user's binocular sight at the current moment may be calculated in advance; if this distance is greater than the preset distance threshold, the preset distance threshold is used as the moving distance of the convergence plane of the user's binocular sight.
After the moving distance and moving direction of the convergence plane of the binocular sight are determined, the required movement of the optical imaging assembly can be calculated and a movement control command generated.
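The clamped movement described in the last two paragraphs can be sketched as follows; the function and parameter names are ours, and the millimetre units are illustrative:

```python
def plan_convergence_move(first_distance_mm: float,
                          second_distance_mm: float,
                          trigger_threshold_mm: float,
                          max_step_mm: float) -> float:
    """Sketch of the adjustment rule: move the convergence plane toward
    the virtual image of the target object, but never by more than
    `max_step_mm` (the preset distance threshold) per frame.

    first_distance_mm  -- virtual image distance from the display screen
    second_distance_mm -- current convergence-plane distance from the screen

    Returns the signed movement of the convergence plane (mm), or 0.0 if
    the difference has not reached the trigger threshold."""
    diff = first_distance_mm - second_distance_mm
    if abs(diff) < trigger_threshold_mm:
        return 0.0                       # no adjustment needed this frame
    step = min(abs(diff), max_step_mm)   # clamp to the preset threshold
    return step if diff > 0 else -step
```

Repeating this every frame walks the convergence plane toward the target image in small steps, which is exactly the trade-off the method aims at: preserving depth perception while avoiding large single-frame vergence changes.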
In the VR device and the adjustment method thereof provided in this specification, when the difference between the first distance (of the virtual image corresponding to the target object in the next frame image to be displayed, relative to the display screen) and the second distance (of the convergence plane of the user's binocular sight at the current moment, relative to the display screen) reaches a predetermined threshold, the pose of the optical imaging assembly in the VR device is adjusted so that the convergence plane of the user's binocular sight at the next moment moves toward the position of that virtual image, enabling the user to perceive 3D depth. At the same time, the distance between the convergence planes at the next moment and the current moment is kept less than or equal to the preset distance threshold, so that the convergence angle of the eyes changes only by a small amount at each adjustment, which alleviates the vertigo caused by frequent, large convergence adjustments.
The present specification provides a VR device adjusting apparatus, which can be used to implement the above VR device adjusting method. As shown in fig. 10, the apparatus includes an acquisition unit 10, a first determination unit 20, and an adjustment unit 30.
The acquiring unit 10 is configured to acquire a first distance, with respect to the display screen, of a virtual image corresponding to the target object in the next frame of image to be displayed.
The first determination unit 20 is configured to determine a second distance of the convergence plane of the user's eyes with respect to the display screen at the current time.
The adjusting unit 30 is configured to, when a difference between the first distance and the second distance reaches a predetermined threshold, adjust the pose of the optical imaging assembly in the VR device such that a convergence plane of the binocular vision of the user at the next time moves toward a position of a virtual image corresponding to the target object in the next frame image to be displayed, and a distance between the convergence planes of the binocular vision of the user at the next time and the current time is less than or equal to a preset distance threshold.
In some embodiments, the apparatus further comprises: and the second determination unit is used for determining the convergence plane of the sight lines of the two eyes of the user at the current moment.
In some embodiments, the second determination unit includes: a first adjusting subunit configured to, while laser light irradiates the scanning galvanometer and is reflected by it into the human eyes, adjust the reflection angle of the scanning galvanometer so that the reflected laser light moves across the retina, and to simultaneously acquire images of the retinas of the two eyes; a first determining subunit configured to determine the gaze directions of the user's left and right eyes from the retinal images corresponding to at least two reflection angles; a second determining subunit configured to determine the position of the user's gaze point corresponding to each reflection angle from the gaze directions of the left and right eyes at that angle; and a third determining subunit configured to determine the convergence plane of the user's binocular sight at the current moment from the positions of the gaze points corresponding to the respective reflection angles.
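One common way to turn a pair of gaze directions into a gaze-point position — not necessarily the method used in this patent — is to triangulate the two gaze rays and take the midpoint of their shortest connecting segment. The eye positions and directions below are illustrative:

```python
import numpy as np

def gaze_point_from_directions(left_eye, left_dir, right_eye, right_dir):
    """Estimate the 3D gaze point as the midpoint of the shortest
    segment between the left-eye and right-eye gaze rays.
    Returns None when the rays are (almost) parallel, i.e. the
    lines of sight do not converge."""
    d1 = np.asarray(left_dir, float);  d1 /= np.linalg.norm(d1)
    d2 = np.asarray(right_dir, float); d2 /= np.linalg.norm(d2)
    p1 = np.asarray(left_eye, float);  p2 = np.asarray(right_eye, float)
    # Solve for ray parameters t1, t2 minimising |(p1+t1*d1)-(p2+t2*d2)|
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None
    t1 = (b * (d2 @ r) - c * (d1 @ r)) / denom
    t2 = (a * (d2 @ r) - b * (d1 @ r)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Eyes 64 mm apart, both fixating a point 400 mm straight ahead:
pt = gaze_point_from_directions([-32, 0, 0], [32, 0, 400],
                                [32, 0, 0], [-32, 0, 400])
```

Repeating this for several reflection angles yields a cloud of gaze points from which the convergence plane at the current moment can be fitted.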
In some embodiments, the adjusting unit includes a second adjusting subunit configured to adjust the included angle between a first barrel corresponding to the left eye and a second barrel corresponding to the right eye; the VR device includes an objective lens group, a left eyepiece group, and a right eyepiece group, the left eyepiece group being disposed in the first barrel and the right eyepiece group in the second barrel.
In some embodiments, the second adjusting subunit includes a third adjusting subunit configured to adjust the depth to which the tip of a wedge block on the VR device is inserted between the first barrel and the second barrel; the wedge block is located between the first barrel and the second barrel and is pressed against both.
In some embodiments, the third adjusting subunit includes a fourth adjusting subunit configured to adjust, by means of a lead screw, the depth to which the tip of the wedge block on the VR device is inserted between the first barrel and the second barrel; the lead screw is disposed at the end of the wedge block corresponding to the tip.
The present specification provides a VR device, as shown in fig. 11, comprising: a display screen for displaying images; an objective lens group for adjusting imaging quality; an eyepiece assembly for magnifying the image formed by the objective lens group; and a controller for executing the VR device adjustment method described above.
In some embodiments, the eyepiece assembly includes a left eyepiece group and a right eyepiece group; the left eyepiece group corresponds to the left eye and is disposed in a first barrel; the right eyepiece group corresponds to the right eye and is disposed in a second barrel; and the included angle between the first barrel and the second barrel is adjustable.
In some embodiments, a wedge block is disposed between the first barrel and the second barrel; the wedge block is pressed against both barrels, and the depth to which its tip is inserted between them is adjustable.
In some embodiments, as shown in fig. 11, the VR device further includes a scanning mirror, a laser, a driver, and an image sensor. The scanning mirror is disposed between the objective lens group and the eyepiece assembly; the laser emits laser light toward the scanning mirror along a preset direction; the driver controls the scanning mirror to adjust the included angle between its reflecting surface and the incident laser light; the image sensor senses the retinal image reflected by the scanning mirror; and the controller is further configured to send control instructions to the driver and to receive the images sensed by the image sensor.
The present specification provides a physician's console including a VR device as shown in fig. 11.
The present specification also provides a surgical robot system including a surgical robot, an image trolley, and a doctor console.
The surgical robot carries an image acquisition assembly to acquire images of the target surgical site and carries surgical instruments to perform surgical operations; the image trolley processes the images acquired by the image acquisition assembly into a 3D image and feeds it back to the doctor console; the doctor console displays the 3D image, senses the surgeon's operation intention, and sends control instructions to the surgical robot accordingly. The doctor console includes the VR device shown in fig. 11.
The surgical robot system described above is described in detail below. As shown in fig. 12, the surgical robot system consists of a control-end device 100, an execution-end device 200, and an image-end device 300. The control-end device 100, generally called the console or doctor console, is located outside the sterile field of the operating room and sends control instructions to the execution-end device 200. The execution-end device 200, i.e., the surgical robot device (in this specification, the surgical robot device is referred to simply as the surgical robot, and the robot device simply as the robot), controls, according to the control instructions, a surgical instrument mounted at the end of its mechanical arm to perform specific surgical actions on the patient; it may also carry an endoscope. The image-end device 300, generally called the image trolley, processes the information collected by the endoscope into a three-dimensional stereoscopic high-definition image and feeds it back to the control-end device 100.
As shown in fig. 13, the control-end device 100, i.e., the doctor console, is provided with a master manipulator, an imaging device, and a main controller. The master manipulator detects the hand motion of the operating surgeon, which serves as the control signal for the whole surgical robot system. The imaging device presents the surgeon with the stereoscopic image of the patient's body captured by the endoscope, providing reliable visual information for the surgical operation. During an operation, the surgeon sits at the console and controls the surgical robot and the endoscope through the master manipulator: observing the transmitted intracavity three-dimensional image on the imaging device, the surgeon drives pose changes of the master manipulator with both hands to control the movement of the mechanical arm mechanism and surgical instruments on the surgical robot, completing the various maneuvers required to operate on the patient. The main controller is the core control element of the surgical robot system and controls the system to realize its various operations and functions.
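Master-slave control of this kind typically maps the surgeon's hand displacement to instrument displacement through a motion-scaling factor and a clutch; the sketch below illustrates the idea only, with names and values that are our assumptions rather than this system's actual control law:

```python
def scale_master_motion(master_delta_mm,
                        scale: float = 0.2,
                        clutch_engaged: bool = True):
    """Map a master-manipulator displacement (x, y, z in mm) to an
    instrument-tip displacement. The displacement is scaled down for
    precision, and ignored when the clutch is released so the surgeon
    can reposition the master hand without moving the instrument."""
    if not clutch_engaged:
        return (0.0, 0.0, 0.0)
    return tuple(scale * d for d in master_delta_mm)

# A 50 mm hand motion commands a 10 mm instrument motion at scale 0.2.
cmd = scale_master_motion((50.0, 0.0, -25.0))
```

Downscaling hand motion in this way is a common design choice in teleoperated surgery: it filters the surgeon's tremor and allows millimetre-level instrument precision from comfortable hand movements.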
As shown in fig. 14, the image-end device 300 mainly includes an endoscope (not shown), an endoscope processor, and a display device. The endoscope includes a tube body inserted into the patient's body, an observation lens and an illumination lens arranged at the front end of the tube body, optical fibers, and an eyepiece; it illuminates the inside of the cavity and acquires stereoscopic images of it. The endoscope processor processes the acquired intracavity stereoscopic images, and the display device displays the processed images in real time.
As shown in fig. 15, the execution-end device 200, i.e., the surgical robot device, is located in the sterile field of the operating room. Its main function is to control, according to the surgeon's control instructions, a surgical instrument mounted at the end of a mechanical arm to perform specific surgical operations on the patient, and to carry the endoscope. An assistant surgeon is usually also present in the sterile field to exchange the surgical instruments mounted on the surgical robot and assist the operating surgeon. To ensure patient safety, the assistant surgeon typically has higher control priority over the surgical robot.
The embodiments of the present invention also provide a controller, which can serve as the controller in the VR device. As shown in fig. 16, the controller may include a processor 1601 and a memory 1602; the processor 1601 and the memory 1602 may be connected by a bus or in other ways, with a bus connection taken as the example in fig. 16.
The processor 1601 may be a central processing unit (CPU). The processor 1601 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof.
The memory 1602, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the VR device adjustment method in the embodiments of the present invention (e.g., the acquiring unit 10, the first determination unit 20, and the adjusting unit 30 in fig. 10). The processor 1601 performs various functional applications and data processing by running the non-transitory software programs, instructions, and modules stored in the memory 1602, i.e., implements the VR device adjustment method in the above method embodiments.
The memory 1602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 1601, and the like. Further, the memory 1602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1602 may optionally include memory located remotely from the processor 1601, which may be connected to the processor 1601 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 1602 and, when executed by the processor 1601, perform VR device adjustment methods as in the embodiment shown in fig. 3.
The details of the controller can be understood with reference to the description and effects of the embodiment shown in fig. 3, and are not described herein again.
The present specification also provides a computer storage medium having computer program instructions stored thereon that, when executed, implement the steps of the corresponding embodiment of fig. 3.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memory.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and reference may be made to part of the description of the method embodiment for relevant points.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The above description is only an example of the embodiments of the present disclosure, and is not intended to limit the embodiments of the present disclosure. Various modifications and variations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (14)

1. A VR device adjustment method, comprising:
acquiring a first distance between a virtual image corresponding to a target object in a next frame of image to be displayed and a display screen;
determining a second distance of a convergence plane of the sight lines of the two eyes of the user relative to the display screen at the current moment;
and under the condition that the difference value of the first distance and the second distance reaches a preset threshold value, adjusting the pose of an optical imaging assembly in the VR equipment, so that the convergence plane of the sight lines of the eyes of the user at the next moment moves towards the position of the virtual image corresponding to the target object in the next frame of image to be displayed, and the distance between the convergence planes of the sight lines of the eyes of the user at the next moment and the current moment is smaller than or equal to a preset distance threshold value.
2. The method of claim 1, wherein before determining the second distance of the convergence plane of the user's binocular sight relative to the display screen at the current moment, the method comprises:
a convergence plane of the user's eyes at the current time is determined.
3. The method of claim 2, wherein determining a convergence plane for the user's eyes at the current time comprises:
under the condition that laser light irradiates the scanning galvanometer and is reflected by it into the human eyes, adjusting the reflection angle of the scanning galvanometer so that the reflected laser light moves across the retina, while simultaneously acquiring images of the retinas of the two eyes;
determining the gaze directions of the user's left eye and right eye according to the images on the retinas of the two eyes corresponding to at least two reflection angles;
determining the positions of the user's gaze points corresponding to the respective reflection angles according to the gaze directions of the user's left eye and right eye corresponding to the reflection angles;
and determining the convergence plane of the user's binocular sight at the current moment according to the positions of the user's gaze points corresponding to the respective reflection angles.
4. The method of claim 1, wherein adjusting the pose of an optical imaging assembly in the VR device comprises:
adjusting an included angle between a first barrel corresponding to the left eye and a second barrel corresponding to the right eye; wherein the VR device comprises an objective lens group, a left eyepiece group, and a right eyepiece group, the left eyepiece group being disposed in the first barrel and the right eyepiece group being disposed in the second barrel.
5. The method of claim 4, wherein adjusting an angle between a first barrel corresponding to the left eye and a second barrel corresponding to the right eye comprises:
adjusting the depth to which a tip of a wedge block on the VR device is inserted between the first barrel and the second barrel; wherein the wedge block is located between the first barrel and the second barrel and is pressed against both.
6. The method of claim 5, wherein adjusting the depth to which the tip of the wedge block on the VR device is inserted between the first barrel and the second barrel comprises:
adjusting, by means of a lead screw, the depth to which the tip of the wedge block on the VR device is inserted between the first barrel and the second barrel; wherein the lead screw is disposed at the end of the wedge block corresponding to the tip.
7. A VR device adjustment apparatus, comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a first distance between a virtual image corresponding to a target object in a next frame image to be displayed and a display screen;
a first determination unit configured to determine a second distance of a convergence plane of the user's eyes sight line with respect to the display screen at a current time;
and the adjusting unit is used for adjusting the pose of the optical imaging assembly in the VR equipment under the condition that the difference value of the first distance and the second distance reaches a preset threshold value, so that the convergence plane of the binocular vision of the user at the next moment moves towards the position of the virtual image corresponding to the target object in the next frame image to be displayed, and the distance between the convergence plane of the binocular vision of the user at the next moment and the current moment is smaller than or equal to the preset distance threshold value.
8. A VR device comprising:
a display screen for displaying an image;
the objective lens group is used for adjusting the imaging quality;
the eyepiece group is used for amplifying the image formed by the objective lens component;
a controller to perform the VR device adjustment method of any of claims 1 to 6.
9. The VR device of claim 8, wherein the eyepiece assembly comprises a left eyepiece group and a right eyepiece group; the left eyepiece group corresponds to the left eye and is disposed in a first barrel; the right eyepiece group corresponds to the right eye and is disposed in a second barrel; and an included angle between the first barrel and the second barrel is adjustable.
10. The VR device of claim 9, wherein a wedge block is disposed between the first barrel and the second barrel, the wedge block is pressed against the first barrel and the second barrel, and a depth to which a tip of the wedge block is inserted between the first barrel and the second barrel is adjustable.
11. The VR device of claim 9, further comprising:
a scanning mirror disposed between the objective lens group and the eyepiece lens group;
the laser is used for emitting laser to the scanning reflector along a preset direction;
the driver is used for controlling the scanning reflector to adjust the included angle between the reflecting surface and the incident laser;
an image sensor for sensing an image on the retina reflected by the scanning mirror;
the controller is also used for sending control instructions to the driver and receiving images sensed by the image sensor.
12. A physician's console, comprising: the VR device of any one of claims 8 to 11.
13. A surgical robotic system, comprising:
the surgical robot carries the image acquisition assembly to acquire an image of a target surgical position and carries a surgical instrument to execute surgical operation;
the image trolley processes the image acquired by the image acquisition assembly to form a 3D image and feeds the 3D image back to the doctor operating table;
the doctor operating console is used for displaying the 3D image, sensing the operation intention of a doctor and sending a control instruction to the surgical robot according to the operation intention of the doctor;
wherein the physician console comprises the VR device of any of claims 8 to 11.
14. A computer storage medium storing computer program instructions which, when executed, implement the steps of the method of any one of claims 1 to 6.
CN202211418141.6A 2022-11-14 2022-11-14 VR equipment and adjusting method thereof, doctor operating table and surgical robot system Pending CN115875652A (en)

Publications (1)

Publication Number Publication Date
CN115875652A 2023-03-31

Family

ID=85759804



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination