CN110780455B - Stereo glasses - Google Patents


Info

Publication number
CN110780455B
CN110780455B
Authority
CN
China
Prior art keywords
image
screen
lens
stereoscopic
point
Prior art date
Legal status
Active
Application number
CN201911086487.9A
Other languages
Chinese (zh)
Other versions
CN110780455A (en)
Inventor
彭波
毛玉
毛新
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201911086487.9A
Publication of CN110780455A
Priority to PCT/CN2020/116604 (WO2021088540A1)
Priority to PCT/CN2020/116603 (WO2021088539A1)
Application granted
Publication of CN110780455B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a pair of stereoscopic glasses. A user sees the front real scene through the spectacle lenses and, by rotating the eyeballs upward or downward, switches from the front real scene in the spectacle lenses to one of two different contents played on the screen of a screen module fixed to a screen lens. The first content is an image of the front real scene collected by a stereo camera on the stereoscopic glasses; the second content is that image superimposed with a prefabricated image. The invention also discloses a same-screen chip, solves the problem of separation between the focal plane of the eyes and the image plane of the stereoscopic image, and provides a healthy stereoscopic player that makes the stereoscopic image measurement process more intelligent, simpler and more accurate. The invention can be applied to fields such as medical treatment, industry, science, education, entertainment and stereoscopic image production.

Description

Stereo glasses
Technical Field
The invention relates to Mixed Reality (MR) stereoscopic glasses, linear optical design for stereoscopic images, a technique for making the focal plane of the eye coincide with the image plane of a stereoscopic image, stereoscopic image measurement, and positioning and tracking technology.
Background
Currently, all Augmented Reality (AR) and Mixed Reality (MR) glasses insert a prefabricated image into the real scene in front. The problems with such glasses are: first, a prefabricated image inserted into the real scene occludes part of the front real scene, causing local picture loss and visual disturbance; second, the image quality of the prefabricated image cannot be greatly improved, because the pixel precision and contrast of optical waveguide technology and micro projectors cannot compare with those of a screen; third, the darkness and brightness of the inserted prefabricated image are limited; fourth, the field of view (FOV) of optical waveguide technology is small; fifth, the front scene cannot be enlarged.
Currently, all stereoscopic players, including display technologies of AR and MR glasses, are based on flat screen display technology. The biggest problem with this technique is that the focal plane of the eye is separated from the image plane of the stereoscopic image. This problem is one of the main causes of fatigue and physical discomfort to the eyes after the eyes watch the stereoscopic image for a certain period of time.
Doctors typically wear specially customized magnifying glasses for minimally invasive and neurosurgical procedures. Two optical magnifying lenses with 2-3x magnification are fixed on the left and right lenses of the magnifying glasses respectively. The problems with such magnifying glasses are: first, when the distance between the magnifying glasses and the diseased tissue changes, the optical magnifying lenses defocus from the diseased tissue; to prevent defocus, the doctor must keep the body in a fixed position and posture for a long time; second, the optical magnifying lenses have small magnification and a small angle of view.
Mainstream AR and MR glasses cannot satisfy the physician's need to return to the real scene at any time, because an optical waveguide lens causes visual interference and obstruction of the front real scene within the common field of view.
The stereoscopic glasses provided by the invention solve these problems in different application fields. Their manner of use is consistent with daily viewing habits, and in use they are natural, controllable, simple to operate, low in cost and easy to popularize.
Disclosure of Invention
The invention aims to provide stereoscopic glasses with two independent fields of view, solving the following technical problems: first, the visual interference and occlusion of the front real scene caused by inserting a prefabricated image into the field of view of the spectacle lenses; second, the mutual conversion between the two fields of view; third, the separation of the focal plane of the eyes from the image plane of the stereoscopic image; fourth, making the stereoscopic player a healthy stereoscopic player; fifth, making the process of stereoscopic image measurement more intelligent, simpler and more accurate.
A pair of stereoscopic glasses comprises left and right spectacle lenses, left and right screen lenses, left and right screen modules, a core-shifting stereo camera, two wireless modules and an image processor.
A pair of stereoscopic glasses has a frame, left and right temples, and left and right spectacle lenses, the same as traditional glasses. A left spectacle lens is composed of a left eye lens and a left screen lens; a right spectacle lens is composed of a right eye lens and a right screen lens. In either spectacle lens, the eye lens and the screen lens are arranged one above the other, and depending on their arrangement a spectacle lens is classified as a conventional design or a non-conventional design. In the conventional design the eye lens is above the screen lens; in the non-conventional design the eye lens is below the screen lens. The left and right spectacle lenses in a pair of stereoscopic glasses use the same design; whether the conventional or the non-conventional design is adopted depends on the user's habits, the purpose of use and the application scene.
An eye lens can be a normal lens or a vision correction lens, and can be a transparent lens or a colored lens with a coated or filmed surface. The user's eyes see the front real scene through the left and right eye lenses, and no image is inserted into the front real scene. The left and right straight lines passing through the centers of the left and right eye lenses and the centers of the left and right pupils are called the center lines of the left and right eye lenses, respectively. The center lines of the left and right eye lenses are parallel to each other, and the horizontal distance between them is equal to the interpupillary distance of the two eyes.
A screen lens is a normal lens; it may also be a tinted lens with a coated or filmed surface. A screen module is fixed on the inner surface of each screen lens.
For a user with normal vision, the left and right spectacle lenses of a pair of stereoscopic glasses may be two complete common lenses. The eye lens area and the screen lens area are divided according to their different functions on one spectacle lens, with no boundary line between them. The outer surface of a spectacle lens is a complete and continuously varying surface.
For a user who needs to wear vision correction lenses, both the left and right eye lenses are vision correction lenses. In this case, the spectacle lenses have two different designs:
In the first design, both the left and right spectacle lenses are two complete common lenses. The eye lens area and the screen lens area are divided according to their different functions on one spectacle lens, with no boundary line between them. The shape and radius of curvature of the inner surface of the eye lens area on each spectacle lens are the same as the shape and radius of curvature of the outer surface of the corresponding vision correction lens. The left and right vision correction lenses are attached along their outer surfaces to the inner surfaces of the eye lens areas of the left and right spectacle lenses respectively. The outer surfaces of the left and right spectacle lenses in this design are two complete and continuously varying surfaces.
In the second design, the vision correction lens and the screen lens are two different lenses. For a spectacle lens of the conventional design, the lower edge of the vision correction lens is bonded to the upper edge of the screen lens; for a spectacle lens of the non-conventional design, the upper edge of the vision correction lens is bonded to the lower edge of the screen lens.
The inner surfaces of the screen lenses on the left and right spectacle lenses of the stereoscopic glasses are fixed with the left and right screen modules respectively. A screen module is composed of a screen base, a screen, a housing, a lens group and a module vision correction lens. The shape and radius of curvature of the lower surface of the screen base are the same as those of the inner surface of the screen lens. The left and right screen modules are fixed to the inner surfaces of the left and right screen lenses through the lower surfaces of their bases. The upper surface of the screen base can be a plane or a curved surface whose shape and radius of curvature are the same as those of the back surface of the screen fixed on it.
A screen may be an OLED or Micro LED screen, flexible or non-flexible, and planar or curved in shape. The shape and radius of curvature of the back surface of the screen are the same as those of the upper surface of the screen base, and the screen is fixed to the upper surface of the base through its back. The left and right straight lines passing through the centers of the effective playing surfaces of the left and right screens and the centers of the user's pupils are called the center lines of the left and right screen modules of the stereoscopic glasses. The center lines of the left and right screen modules are parallel to each other, and the distance between them is equal to the interpupillary distance of the two eyes. A line passing through the center of a screen's effective playing surface and perpendicular to the tangent plane at that center is called the screen center line. The center lines of the left and right screens intersect the center lines of the left and right screen modules at the centers of the effective playing surfaces of the left and right screens respectively. The included angle between the center line of the screen module and the screen center line is called the screen tilt angle θ.
A lens group is composed of one or more spherical or aspherical lenses; it can also be a Fresnel lens or a lens group including a Fresnel lens. The back surface of a Fresnel lens may be spherical or aspherical. The center line of a lens group or Fresnel lens coincides with the center line of the screen module. The content the eye sees on the screen through the lens group is a magnified virtual image. The optical magnification of the lens group can be determined according to customer requirements, purposes of use, application fields and market demand. The lenses in the lens group can be traditional circular lenses or non-circular lenses. The lens group is an optical module with ultra-short-focus capability, which minimizes the height of the whole screen module along the direction of the screen module center line; a lower screen module height makes wearing more comfortable and convenient. The outer surface of the eyepiece of the lens group is plated with an antifouling coating, which prevents facial skin oil from contaminating the outer surface of the eyepiece and makes oil and other foreign matter attached to it easy to clean.
The inner surface of the screen module housing is coated or lined with a material that absorbs light from the screen and prevents the inner surface of the housing from reflecting screen light back onto the screen.
The module vision correction lens is a vision correction lens. For users who need to wear vision correction glasses, the left and right module vision correction lenses are attached to the outer surfaces of the eyepieces of the lens groups in the left and right screen modules respectively, and can be taken off and reused. The shape of a module vision correction lens is the same as that of the lenses in the lens group, and it can be a circular or non-circular lens. The center line of the module vision correction lens coincides with the center line of the screen module. An antifouling coating is plated on the surface of the module vision correction lens; it not only prevents facial skin oil from soiling the surface, but also makes oil and other foreign matter attached to the surface easy to clean. For users with normal vision, the module vision correction lens is not needed.
The screen generates heat during operation, so temperature control of the screen module is a design factor that cannot be ignored. Various solutions for natural ventilation and heat dissipation have been proposed in engineering practice. One simple and effective solution is to provide a plurality of interconnected channels in the screen base, each communicating directly with the atmosphere. The screen lens may have a plurality of vent holes in direct communication with the channels in the screen base, so that heat generated at the back of the screen convects naturally to the atmosphere through the channels in the base. The housing of the screen module is provided with a plurality of vent holes through which heat generated at the surface of the screen convects directly to the atmosphere.
The user can switch from the front real scene in the eye lenses to the content played on the screen of the screen module by rotating the eyeballs upward or downward, or switch back from the screen content to the front real scene. The screen in the screen module plays two different contents. The first content is the image of the front real scene captured by the stereo camera on the stereoscopic glasses. The second content is that image superimposed with a prefabricated image. The two contents can be switched at any time. The prefabricated image comes from a database in the image processor, a third-party database or the Internet, and may be a planar image or a stereoscopic image.
The left and right eyes of a user see two images with different visual angles on the screens of the left and right screen modules respectively. The brain fuses the left and right images with different visual angles and perceives a stereoscopic image. The stereoscopic image is a virtual image.
When the eyes observe an object of interest at infinity, both eyes are in a natural and comfortable viewing state, so they do not feel fatigue or physiological discomfort after long viewing. The center lines of the left and right eye lenses of a pair of stereoscopic glasses and the center lines of the left and right screen modules form left and right vertical planes, called the left and right vertical viewing planes of the stereoscopic glasses. The left and right vertical viewing planes are parallel to each other, and the horizontal distance between them is equal to the interpupillary distance of the user's two eyes.
The vertical included angle between the center line of an eye lens and the center line of the corresponding screen module is called the field-of-view conversion angle φ. The left and right field-of-view conversion angles φ are equal. A reasonable range of variation for φ is 10° to 60°. When the eyes switch from one field of view to the other, the smaller φ is, the more natural, relaxed and comfortable the eyes feel. However, when the eyes watch the front real scene through the eye lenses, the content played on the screen produces peripheral interference, and the smaller φ is, the stronger this interference becomes. Conversely, the larger φ is, the more tired the eyes feel after watching screen content for a long time, and this fatigue grows stronger as φ increases. The ideal field-of-view conversion angle φ is related to the screen tilt angle θ, the height of the screen module, and the facial features of the person.
When the eyes are positioned directly in front of the center of a screen to watch the image it plays, the brightness, contrast and color of the screen perform best and the image deformation is smallest. Whether the screen module center line coincides with the screen center line depends on factors including (but not limited to) the vertical height of the screen, the height of the screen module, and facial features and size. When θ = 0°, the screen center line coincides with the screen module center line: the brightness, contrast and color of the image seen by the eyes are unchanged, and the image is not deformed. When θ > 0°, since the screen tilt angle θ occurs in the vertical direction, the image in the horizontal direction is not affected by θ. The image seen from a screen tilted in the vertical direction suffers vertical keystone distortion and compression. These problems can be solved by enlarging the image in the vertical direction and compensating the brightness, color and contrast of the image. Although the included angle θ between the screen module center line and the screen center line is negligible compared with the viewing angle of a television screen, θ becomes a factor that cannot be ignored because the magnification of the screen module is large.
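The vertical enlargement described above can be sketched numerically. The following is a minimal nearest-neighbour sketch, assuming the simple illustrative model (not stated in the patent) that a screen tilted by θ appears vertically compressed by a factor of cos θ, so the image is pre-stretched by 1/cos θ; the function name and model are assumptions:

```python
import numpy as np

def prestretch_vertical(img: np.ndarray, theta_deg: float) -> np.ndarray:
    """Pre-stretch an image vertically by 1/cos(theta) to compensate the
    assumed cos(theta) vertical compression of a screen tilted by theta."""
    theta = np.radians(theta_deg)
    scale = 1.0 / np.cos(theta)                       # vertical magnification
    rows = img.shape[0]
    new_rows = int(round(rows * scale))
    # nearest-neighbour resampling: map each output row back to a source row
    src = np.clip((np.arange(new_rows) / scale).astype(int), 0, rows - 1)
    return img[src]
```

At θ = 0° the function returns the image unchanged, matching the statement above that a non-tilted screen needs no correction.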
Generally, the horizontal field angle of the front real scene seen by the user through the left and right eye lenses is larger than the horizontal field angle of the image seen by the eyes on the screen of the screen module. The closer the field angles of the two fields of view, the more natural, real and comfortable the transition feels when switching between them. In fact, the frame of the stereoscopic glasses limits the viewing angle of the front real scene and reduces it. In this design, provided the image distortion of the lens group in the screen module is effectively controlled, it is best for the field angle of the lens group to be close or equal to the field angle of the front real scene in the eye lenses.
Two photographing methods, the convergence method and the parallel method, are generally used when two cameras shoot a stereoscopic image. Photographing an object of interest with the convergence method matches the manner and effect of observing an object of interest with the eyes. When the left and right cameras converge the center lines of their lens groups on an object of interest located on the central axis of the stereo camera, the left and right images collected by the two cameras are imaged at the centers of the effective imaging surfaces of the left and right image sensors respectively. However, the two images have keystone distortion in the horizontal direction and cannot be fused perfectly. Further, this horizontal keystone distortion and the vertical keystone distortion described in [0024] above cannot cancel out. Photographing an object of interest with the parallel method matches the way the eyes observe an object of interest at infinity, and the obtained images have no keystone distortion. However, for an object of interest at a finite distance, the stereoscopic image obtained by the parallel method differs from the way the eyes observe such an object, and the out-of-screen stereoscopic effect is not ideal.
The equivalent convergence principle is as follows: in a stereo camera composed of two independent lens groups arranged in parallel with identical center lines, the two lens groups or the two image sensors are each translated by an equal amount along a straight line that lies in the plane formed by the center lines of the two lens groups and is perpendicular to those center lines, so that the two images of an object of interest located on the central axis of the stereo camera are imaged at the centers of the effective imaging surfaces of the two image sensors. The equivalent convergence method is a stereo shooting method based on this principle. Before shooting, the two lens groups or the two image sensors are each translated by a distance L = T ÷ (2A) or h = T ÷ (2A) along that straight line. The stereoscopic effect of a stereoscopic image of an object of interest obtained with the equivalent convergence method is the same as that obtained with the convergence method, but the two images have no keystone distortion. In fact, the most important significance of the equivalent convergence principle and method is to establish a linear relationship between the stereoscopic depth of an object of interest in the real scene and the stereoscopic depth of the convergence point of its stereoscopic image.
Its physical meaning is that the stereoscopic images of a point of interest, a straight line of interest and a plane of interest in the real scene are unique and undeformed.
According to [0027] above, when an object of interest in the real scene is photographed by the equivalent convergence method, a sufficient condition is that the stereoscopic depth of the object of interest in the real scene has a linear relationship with the stereoscopic depth of the convergence point of its stereoscopic image. Two stereo cameras are designed according to the equivalent convergence principle. The first is the tilt-shift stereo camera. Before shooting, the two lens groups in a tilt-shift stereo camera are translated in opposite directions, each by a distance L = T ÷ (2A), along a straight line that lies in the plane formed by the center lines of the two lens groups and is perpendicular to those center lines. During the translation, the positions of the one or two image sensors remain unchanged. After the axis shift, the tilt-shift stereo camera images an object of interest on the central axis at the centers of the effective imaging surfaces of the two image sensors. For a stereo camera provided with one image sensor, the tilt-shift stereo camera is an ideal optical design and solution. The second is the core-shifting stereo camera. Before shooting, the two image sensors in a core-shifting stereo camera are translated in opposite directions, each by a distance h = T ÷ (2A), along a straight line that lies in the plane formed by the center lines of the two lens groups and is perpendicular to those center lines. During the translation, the positions of the two lens groups remain unchanged. After the core shift, the core-shifting stereo camera images an object of interest on the central axis at the centers of the effective imaging surfaces of the two image sensors.
In both stereo cameras, the form, coordinate system and origin of the coordinate system of the translation equations L = T ÷ (2A) and h = T ÷ (2A) are the same. However, the meaning of T differs between the two cameras: in the tilt-shift stereo camera, T is the distance between the center lines of the two lens groups after the axis shift; in the core-shifting stereo camera, t is the distance between the center lines of the left and right lens groups. The relationship between the axis shift and the core shift is L = t × h ÷ (t + 2h).
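The relation between the axis shift and the core shift can be checked numerically. A minimal sketch (the function name is illustrative, not from the patent):

```python
def axis_shift_from_core_shift(t: float, h: float) -> float:
    """L = t*h / (t + 2h): the lens-group (axis) shift L that is equivalent
    to a sensor (core) shift h, where t is the distance between the center
    lines of the left and right lens groups."""
    return t * h / (t + 2 * h)
```

For h much smaller than t, L ≈ h, i.e. for small shifts the axis shift and the core shift are nearly interchangeable, while L is always slightly smaller than h.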
A stereoscopic image translation instruction is based on the equivalent convergence principle: the two images collected by a stereo camera composed of two mutually independent lens groups or cameras arranged in parallel with identical center lines are each translated by a distance h = T ÷ (2A) along the direction of a straight line that lies in the plane formed by the center lines of the two lens groups or cameras and is perpendicular to those center lines. After the translation, the stereoscopic effect of the two images is the same as that of the two images obtained by the equivalent convergence method. For a core-shifting stereo camera provided with one image sensor, whose single sensor cannot be translated in two halves, the stereoscopic image translation instruction provides an optical alternative solution. A stereoscopic image translation instruction can also be applied to a tilt-shift stereo camera and to a core-shifting stereo camera with two image sensors. There are various methods of image translation; the following example is only one of them, explaining image translation in principle. For a left-right format image: in the first step, the vertical line where the right edge of the left image meets the left edge of the right image is used as the dividing line. The left image is cut along a vertical line at a distance h = T ÷ (2A) = (T × w) ÷ (4W) from the dividing line, and the part of the left image to the left of the cut is retained. The right image is cut along a vertical line at the same distance h from the dividing line, and the part of the right image to the right of the cut is retained.
In the second step, the left image is shifted to the right by the distance h = T ÷ (2A) = (T × w) ÷ (4W), and the right image is shifted to the left by the same distance. The left and right images are then re-stitched into a new left-right format image, in which two blank vertical bands of width h appear at the left edge of the left image and the right edge of the right image respectively. For two independent left and right images: in the first step, the left image is cut along a vertical line at a distance h = T ÷ (2A) = (T × w) ÷ (2W) from its right edge, and the part to the left of the cut is retained; the right image is cut along a vertical line at the same distance from its left edge, and the part to the right of the cut is retained. This shifting method likewise leaves two blank vertical bands of width h at the left edge of the left image and the right edge of the right image respectively.
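The crop-and-shift steps for a left-right format image can be sketched with array slicing. A minimal sketch assuming an integer pixel shift h and a frame whose left half is the left view and right half is the right view (the function name is illustrative):

```python
import numpy as np

def translate_sbs(sbs: np.ndarray, h: int) -> np.ndarray:
    """Equivalent-convergence translation of a left-right format frame:
    the left view moves right by h px and the right view moves left by h px,
    leaving blank vertical bands of width h at the two outer edges."""
    rows, width2 = sbs.shape[:2]
    width = width2 // 2
    left, right = sbs[:, :width], sbs[:, width:]
    out = np.zeros_like(sbs)                    # zeros form the blank bands
    out[:, h:width] = left[:, :width - h]       # left view shifted right
    out[:, width:width2 - h] = right[:, h:]     # right view shifted left
    return out
```

The overall frame size is unchanged; as the text notes, the price of the shift is the loss of a vertical strip of width h at each outer edge of the two views.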
Compared with the tilt-shift and core-shifting stereo cameras described in [0028], a stereoscopic image translation instruction has advantages including (but not limited to): first, it solves the problem that a core-shifting stereo camera with one image sensor cannot translate its sensor; second, after translation, the stereoscopic effect of the stereoscopic image is the same as that obtained by a tilt-shift or core-shifting stereo camera; third, it can be applied not only to tilt-shift and core-shifting stereo cameras, but also to any stereo camera composed of left and right mutually independent lens groups or cameras arranged in parallel with identical center lines; fourth, it can be applied to stereo cameras with either one or two image sensors; fifth, for shooting that frequently changes the object of interest, resetting a new object of interest is simple, easy to operate and convenient; sixth, the convergence point of the stereoscopic images of different objects of interest can be changed at any time, changing the stereoscopic effect and expression of the original scene in the whole stereoscopic image. However, the drawbacks of this technique are also evident: first, after translation, a vertical region of width h is cut from each of the left and right outer edges of the images, which is equivalent to reducing the viewing angle of the lens groups; second, it introduces image delay.
A core-shifting stereo camera is arranged on a pair of stereo glasses. A core-shifting stereo camera is composed of two identical, mutually independent lens groups with parallel center lines and one or two identical image sensors (CCD or CMOS), which can be shifted by a distance h = T ÷ (2A) in opposite directions along a straight line perpendicular to the center lines of the two lens groups. To ensure that the center lines of the two cameras on the stereo glasses remain parallel to each other at all times, the two cameras are mounted on the stereo glasses frame with the lenses facing forward. The left and right independent images collected by the left and right cameras are imaged on their respective image sensors, and the two independent images are output. According to [0029] above, the stereoscopic effect of a stereoscopic image obtained after stereo image translation is the same as, and equivalent to, that of a stereoscopic image acquired by a core-shifting stereo camera.
After the core shift, the smallest imaging circle of each lens group for which the two images obtained by the two lens groups still satisfy the required image resolution format is the minimum core-shifted imaging circle of that lens group. The minimum core-shifted imaging circle diameters of the two lens groups in one core-shifting stereo camera are equal. For a core-shifting stereo camera provided with one image sensor, the minimum core-shifted imaging circle diameter of the two lens groups is D_min = 2√[(w/4 + h)² + (g/2)²]. For a core-shifting stereo camera provided with two image sensors, the minimum core-shifted imaging circle diameter of the two lens groups is D_min = 2√[(w/2 + h)² + (g/2)²], where w and g are the horizontal width and vertical height of the required image resolution format.
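The two diameter formulas can be evaluated directly (a sketch; the function name is hypothetical, and w, g and h are in the same units, e.g. millimetres):

```python
import math

def min_imaging_circle_diameter(w, g, h, sensors=1):
    """Minimum core-shifted imaging-circle diameter of each lens group.
    One shared sensor: D = 2*sqrt((w/4 + h)^2 + (g/2)^2); two sensors:
    D = 2*sqrt((w/2 + h)^2 + (g/2)^2). w, g: format width/height; h: shift."""
    half_w = w / 4 if sensors == 1 else w / 2
    return 2 * math.sqrt((half_w + h) ** 2 + (g / 2) ** 2)
```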
A core-shifting device is a device that shifts the two image sensors of a core-shifting stereo camera by a distance h = T ÷ (2A) in opposite directions, along a straight line perpendicular to the center lines of the two lens groups in the plane formed by those center lines. For a core-shifting stereo camera provided with two image sensors, the amount of translation of each image sensor is h = T ÷ (2A) = (T × w) ÷ (2W). During the core shift, the positions of the two lens groups in the core-shifting stereo camera remain unchanged. A core-shifting device has two different setting modes. The first setting mode is fixed: when the terminal stereo player is known, the translation amount h required by the image sensors can be preset in the core-shifting stereo camera before packaging. However, the images obtained by such a camera must then be played on that particular stereo player to obtain the optimal stereoscopic effect. If the terminal stereo player is changed, the shift h can be additionally compensated through a stereo image translation instruction to obtain the ideal stereoscopic effect. The second setting mode is adjustable: the core-shifting device is provided with a fine-adjustment mechanism with an original zero point, a scale and a knob. Turning the knob on the fine-adjustment mechanism changes the distance between the two image sensors synchronously: rotated in one direction, the two image sensors translate away from each other; rotated in the opposite direction, they translate back toward each other. The core-shifting device is a fine-adjustment device because the change in the distance between the two image sensors is small. A core-shifting stereo camera provided with one image sensor needs no core-shifting device; a stereo image translation instruction is used instead.
A core-shifting stereo camera outputs two different core-shifted image formats: the core-shifted left-right format, and two independent core-shifted images. For a core-shifting stereo camera provided with one image sensor, the center lines of the left and right lens groups pass through the centers of the left and right halves of the imaging surface of the image sensor, respectively. During the core shift, the two lens groups are translated horizontally in opposite directions, each by a distance h = T ÷ (2A) = (T × w) ÷ (4W). After the core shift, the left and right images collected by the left and right lens groups are imaged on the left and right halves of the imaging surface of the image sensor, respectively, and an image in the core-shifted left-right format is output. The left and right images of the core-shifted left-right format are placed side by side to form one complete-format image.
For a core-shifting stereo camera provided with two independent image sensors, the center lines of the left and right lens groups pass through the centers of the imaging surfaces of the left and right image sensors, respectively. During the core shift, the left and right lens groups are translated horizontally in opposite directions, each by a distance h = T ÷ (2A) = (T × w) ÷ (2W). After the core shift, the left and right images collected by the left and right lens groups are imaged on the imaging surfaces of the left and right image sensors, respectively, and the two sensors output mutually independent left and right images.
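The two shift formulas differ only in the effective screen magnification A; a minimal sketch (the function name is hypothetical, and it assumes A = 2W/w for the left-right format, where each half image of width w/2 fills the screen width W, and A = W/w for two independent sensors):

```python
def sensor_shift(T, W, w, left_right_format=True):
    """Core shift h = T / (2A). Left-right format (one shared sensor):
    A = 2W/w, so h = T*w/(4W); two independent sensors: A = W/w, so
    h = T*w/(2W). T: interpupillary distance, W: screen width, w: sensor
    format width (consistent units)."""
    A = (2 * W / w) if left_right_format else (W / w)
    return T / (2 * A)
```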
An image in the core-shifted left-right format and two independent core-shifted images have the following advantages (without limitation) over conventional left-right format images and two independent images: first, in a core-shifted image, the stereo depth of an object of interest in the real scene and the stereo depth of the convergence point of its stereoscopic image are in a linear relationship; second, one object of interest in the real scene corresponds to only one undistorted stereoscopic image; third, an object of interest located on the central axis of the stereo camera is imaged at the center of the effective imaging surface of the image sensor.
The image processor is provided with two image signal processing chips (ISPs), two wireless modules, an image synchronizer, a touch screen, a data memory and an operating system, and also includes a same-screen chip that integrates and stores a plurality of instructions to be loaded and executed by the processor.
The two image processing chips in the image processor process, correct and optimize each frame of image from the image sensors of the left and right cameras on the pair of stereo glasses, respectively, including (without limitation) white balance, color saturation, sharpness, brightness, contrast, noise reduction, image edge and detail restoration, compression and other parameters.
Two wireless modules are arranged in the image processor. They respectively receive the images, pictures and audio signals output by the two corresponding wireless modules on the stereo glasses, and pass them to the two corresponding image processing chips in the image processor for processing, correction and optimization. Finally, the processed, corrected and optimized images, pictures, audio, data and operation instructions are output to the two corresponding wireless modules on the stereo glasses, to the earphones, to the memory and to other third parties, enabling real-time multimedia interaction and communication with third parties.
The touch screen in the image processor provides the interface for human-computer interaction with the operating system. It can be operated with a stylus, a finger, a mouse or a keyboard. During operation, on-screen motion indicators (icons) appear both on the image processor screen and on the screen in the screen module of the stereo glasses. The mouse may be a wireless ring mouse: the user moves the on-screen indicator simply by rotating a small wheel on the ring mouse, and confirms and executes an operation by pressing the button on the ring mouse.
The operating system in the image processor manages pages and images; performs image input, output, storage and loading; and executes the instructions integrated and stored by the same-screen chip. Through an open interface and the two microphones on the stereo glasses, it outputs the processed, corrected and optimized images, pictures, audio, data and operation instructions to the stereo glasses, the touch screen, a remote control center and a database in a wired or wireless manner. The open interface is compatible with other operating systems and third-party application software, downloads links to various applications and apps, and realizes real-time multimedia interaction and communication with third parties.
The same-screen chip in the image processor is a chip that integrates and stores a stereo image translation instruction, a stereo image measurement instruction, a stereo image positioning and tracking instruction, a stereo image same-screen instruction and an equivalent convergence point reset instruction. The same-screen chip is arranged in the image processor as an application chip; it is loaded by the processor and performs the functions of positioning, matching, tracking and measuring the stereo image, resetting the equivalent convergence point, and keeping the stereo image on screen.
The stereo glasses are provided with two wireless modules. The left and right independent images, pictures and audio signals collected by the left and right cameras on the stereo glasses are output to the two corresponding wireless modules in the image processor, and the processed, corrected and optimized images, pictures, audio, data and operation instructions from those two modules are received in return.
Two microphones are arranged on the pair of stereo glasses, one on each side of the frame, forming a two-channel stereo sound acquisition system. The user's voice and the sound of the environment are recorded by the microphones and output directly to the image processor.
Two sockets are arranged on the left and right temples of the stereo glasses: one is a power socket, the other a data-line socket. An external battery connected to the power socket through a power line supplies power to the stereo glasses. Images, audio and signals between the stereo glasses and the image processor can be transmitted by data line or wirelessly.
The origin (0′, 0′, 0′) of the spatial coordinate system (x′, y′, z′) for stereo image acquisition is located at the midpoint of the line connecting the centers of the two camera lenses, whose center lines are arranged parallel to each other. The origin (0″, 0″, 0″) of the spatial coordinate system (x″, y″, z″) for stereo image playback is located at the midpoint of the line connecting the viewer's eyes. The acquisition coordinate system (x′, y′, z′) and the playback coordinate system (x″, y″, z″) are put together, with the two origins (0′, 0′, 0′) and (0″, 0″, 0″) superposed, into a new coordinate system (x, y, z) with origin (0, 0, 0). In the new coordinate system, the relation between the stereo depth of an object of interest in the real scene acquired by a core-shifting stereo camera and the stereo depth of the convergence point of its stereoscopic image is Z_C = Z_D × [T ÷ (A × F × t)] × Z. The formula shows that the relationship between the stereo depth Z of an object of interest in the real scene and the stereo depth Z_C of the convergence point of its stereoscopic image is linear. In the formula, Z_D is the distance from the origin of the coordinate system to the flat screen, Z is the stereo depth of the object of interest in the real scene, and Z_C is the stereo depth of the convergence point of its stereoscopic image.
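The linear depth relation can be evaluated numerically (a sketch under assumed example values; the function name is hypothetical, symbols as in the text):

```python
def convergence_depth(Z, Z_D, T, A, F, t):
    """Stereo depth of the convergence point: Z_C = Z_D * [T / (A*F*t)] * Z.
    Linear in the real-scene depth Z. Z_D: viewer-to-screen distance,
    T: interpupillary distance, A: screen magnification, F: focal length,
    t: camera baseline (consistent units)."""
    return Z_D * (T / (A * F * t)) * Z
```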
At present, all mainstream stereoscopic display technologies on the market are based on the convergence principle of flat-screen stereoscopic images. When the left and right images of an object of interest, collected from different viewing angles by the left and right cameras, are projected on a flat screen simultaneously, and the left and right eyes can only see the left and right images respectively, the brain fuses the two images of different viewing angles into a perceived stereoscopic image.
In real life, the eyes automatically converge on an object of interest when observing it. The brain perceives a stereoscopic image of the object after fusing the two images with different viewing angles obtained by the two eyes. In a flat-screen display system, the left and right eyes focus on the left and right images on the flat screen, so the flat screen is the focal plane of the eyes and Z_D is a constant. By the experience of real life, when the eyes focus on the left and right images on a flat screen, the convergence point of the two images after fusion in the brain should also appear on the screen, i.e. Z_C = Z_D. However, the formula Z_C = Z_D × [T ÷ (A × F × t)] × Z described in [0043] above shows that Z_C is in general not equal to Z_D, i.e. the focal plane of the eyes and the image plane of the convergence point of the stereoscopic image do not coincide. This phenomenon is one of the root causes of eye fatigue, dizziness and physiological discomfort after viewing stereoscopic images for a period of time.
A stereo image same-screen instruction is based on the principle of equivalent convergence: when the screen magnification A changes with the stereo depth Z of an object of interest in the real scene according to the formula A = [T ÷ (F × t)] × Z, the convergence point of the stereoscopic image of the object of interest, collected by a stereo camera composed of two identical, mutually independent lens groups or cameras arranged with parallel center lines, always stays on the screen. The formula Z_C = Z_D × [T ÷ (A × F × t)] × Z described in [0043] above indicates that the necessary condition for the focal plane of the human eye to coincide with the image plane of the stereoscopic image is [T ÷ (A × F × t)] × Z = 1, or A = [T ÷ (F × t)] × Z = k × Z, where k = T ÷ (F × t) is a constant. When the stereo depth coordinate Z of an object of interest in the real scene changes by ΔZ, then ΔA = k × ΔZ. By definition A = W/w; the parameter W is a constant, and the parameter w is regarded as a variable, so a change ΔA corresponds to an equivalent change Δw of the effective imaging width. When the distance Z between an object of interest in the scene and the camera changes, w changes equivalently and synchronously. The equivalent result of this change is that the stereoscopic image on the display screen is enlarged or reduced, which is equivalent to the zooming process of a zoom lens. As an object of interest in the scene moves farther from the camera, ΔZ > 0, hence ΔA > 0 and Δw < 0; this is equivalent to the focal length of the stereo camera becoming larger, the angle of view becoming smaller, and the image formed on the image sensor becoming smaller, so the image on the screen becomes smaller and smaller. The visual effect is that the stereoscopic image corresponding to the object of interest in the real scene recedes farther and farther into the screen.
Similarly, as an object of interest in the scene moves closer to the camera, ΔZ < 0, hence ΔA < 0 and Δw > 0; this is equivalent to the focal length of the stereo camera becoming smaller, the angle of view becoming larger, and the image formed on the image sensor becoming larger, so the image on the screen becomes larger and larger. The visual effect is that the stereoscopic image corresponding to the object of interest in the real scene comes closer and closer on the screen. The way the image on the screen changes, its process and its perspective effect are consistent with the way the human eyes observe an object of interest in the real scene, and with the resulting experience and perspective effect. The above is a qualitative description of the variation of the image magnification A required to satisfy the same-screen condition. A specific and unambiguous quantification of ΔA requires the introduction of the concept of parallax between the two images; its detailed derivation is given in the following description. The vertical screen magnification is B = V/v, where V is the vertical height of the effective playing surface of the screen and v is the vertical height of the effective imaging surface of the image sensor. When the stereo depth of an object of interest in the real scene changes by ΔZ, the stereoscopic image on the screen is enlarged or reduced accordingly, and the rates of change in the horizontal and vertical directions of the screen are equal: ΔB = ΔA.
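The same-screen condition A = k × Z can be checked numerically: choosing A this way makes [T ÷ (A × F × t)] × Z = 1, so the convergence depth Z_C equals the screen distance Z_D for every object depth Z (a sketch with assumed example values; function names are hypothetical):

```python
def on_screen_magnification(Z, T, F, t):
    """Same-screen condition: A = [T / (F * t)] * Z = k * Z, with
    k = T / (F * t), so that [T / (A * F * t)] * Z = 1 exactly."""
    return (T / (F * t)) * Z

def convergence_with_tracking_A(Z, Z_D, T, F, t):
    """Convergence depth Z_C = Z_D * [T / (A*F*t)] * Z when A tracks Z;
    the result is always Z_D, i.e. the image stays on the screen."""
    A = on_screen_magnification(Z, T, F, t)
    return Z_D * (T / (A * F * t)) * Z
```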
For a core-shifting stereo camera, the screen magnification A may be used to determine or change the spatial coordinates (0, 0, Zconv) of its equivalent convergence point M, where Zconv = (F × t) ÷ (2h) = (A × F × t) ÷ T = C × A, and C = (F × t) ÷ T = 1/k is a constant. Since h = T ÷ (2A), the same result can be obtained by changing either A or h. When the equivalent convergence point M of a core-shifting stereo camera is set on an object of interest, the spatial coordinates of the object are (0, 0, Z = Zconv); when the left and right images of the object are projected onto the screen, they coincide on the flat screen, the stereoscopic image of the object perceived in the brain appears on the screen, and the parallax of the left and right images of the object is zero. When the equivalent convergence point M is set between an object of interest and the stereo camera, the spatial coordinates of the object are (0, 0, Z > Zconv); when its left and right images are projected onto the screen, the stereoscopic image of the object perceived in the brain appears behind the screen, and the parallax of the left and right images is positive. When the equivalent convergence point M is set behind an object of interest, the spatial coordinates of the object are (0, 0, Z < Zconv); when its left and right images are projected onto the screen, the stereoscopic image of the object perceived in the brain appears between the screen and the viewer, and the parallax of the left and right images is negative.
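The equivalent convergence depth and the resulting parallax sign can be sketched as follows (hypothetical function names; units consistent):

```python
def equivalent_convergence_depth(F, t, h):
    """Zconv = (F * t) / (2h); equivalently (A * F * t) / T, since
    h = T / (2A). F: focal length, t: baseline, h: core shift."""
    return F * t / (2 * h)

def parallax_sign(Z, Zconv):
    """Sign of the parallax of an object at depth Z relative to the
    equivalent convergence point: 0 on the screen (Z == Zconv),
    +1 behind the screen (Z > Zconv), -1 in front of it (Z < Zconv)."""
    return (Z > Zconv) - (Z < Zconv)
```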
The stereo depth magnification between an object of interest and its corresponding stereoscopic image is η = (Z_C2 − Z_C1) ÷ (Z2 − Z1) = (Z_D × T) ÷ (A × F × t) = Z_D / Zconv. The formula shows that the stereo depth magnification η is proportional to the distance between the eyes and the screen.
According to Gauss's law and the definition of the lateral magnification of the camera lens:

m = x′/x = y′/y = s′/s

where s′ = F × (1 − m) is the image distance and s = F × (1/m − 1) is the object distance. The lateral magnification of the stereoscopic image of an object of interest on the screen is m × A (in both the x and y directions).
Defined in terms of the longitudinal magnification of the camera lens:

m_z = (s′2 − s′1) ÷ (s2 − s1) = m_1 × m_2

In the above formula, s1 and s2 are the depth coordinates, in the longitudinal direction, of the front and rear end faces of an object of interest in the real scene, and m_1 and m_2 are the lateral magnifications of the lens at the front and rear end faces of the object of interest, respectively. In a linear space the lateral magnification is independent of the position of the object of interest, or m = m_1 = m_2, according to the definition of the image magnification. The above formula then indicates that the longitudinal magnification of the camera lens is

m_z = m²
To account for the screen magnification A, m × A is substituted for m in the formula.
Letting the stereo depth magnification equal the longitudinal magnification of the lens,

η = m_z = m²

we obtain Z_D × [T ÷ (A × F × t)] = (Z_D / Zconv) = m².
The formula η = (Z_D / Zconv) = m² gives Z_D = m² × Zconv. It shows that when the distance Z_D between the viewer's eyes and the stereo screen equals m² × Zconv, the human eye perceives the stereoscopic image of an object of interest magnified m × A times in the x and y directions and m² times in the z direction, with no distortion.
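The distortion-free viewing distance Z_D = m² × Zconv can be sketched as (hypothetical function name):

```python
def distortion_free_viewing_distance(m, Zconv):
    """Z_D = m^2 * Zconv: at this eye-to-screen distance the perceived
    stereoscopic image is magnified m*A in x/y and m^2 in depth, with no
    depth distortion (eta = Z_D / Zconv = m^2)."""
    return m * m * Zconv
```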
A stereo image measurement instruction is based on the geometric relationship formed between an object of interest and two identical, mutually independent cameras arranged with parallel center lines, and on the principle of equivalent convergence. It establishes the relation between the parallax of the left and right images of a point of interest on the object and the spatial coordinates of that point in the real scene, and the relation between the area of the surface image of the object of interest and the actual area of its surface in the real scene. Whether a stereo image measurement instruction can accurately determine the spatial coordinates (x, y, z) of a point of interest depends on whether the left and right images of the point can be accurately located at the abscissas X_L and X_R in the left and right image screenshots of a left-right format image or of two independent left and right images. The left and right images of a point of interest lie on the same horizontal line in the two screenshots, i.e. Y_L = Y_R, where Y_L and Y_R are the vertical coordinates of the point of interest in the left and right image screenshots, respectively. The left and right images collected by the left and right cameras of a stereo camera have parallax in the horizontal direction and none in the vertical direction: the horizontal parallax of the left and right images of a point of interest is P = (X_R − X_L), and the vertical parallax is V = (Y_R − Y_L) = 0.
The origins of the left and right coordinate systems in the left and right image screenshots of a left-right format image, or of two independent left and right images, are located at the centers of the left and right image screenshots, respectively. The sign convention of the coordinates is: X_L and X_R are positive in the right half, negative in the left half, and zero on the central vertical axis of their respective coordinate systems.
For a core-shifted left-right format image and a conventional left-right format image, the parallax of the left and right images of a point of interest in the real scene in a left-right format image screenshot is P = (X_R − X_L), and the spatial coordinates of the point of interest are:
x = t × (X_L + T/2) ÷ [T − (X_R − X_L)] − t/2

y = Y_L ÷ (m × A) = Y_R ÷ (m × A)

z = (A × F × t) ÷ [T − (X_R − X_L)]
For two independent core-shifted left and right images and conventional independent left and right images, the parallax of the left and right images of a point of interest in the real scene in the two independent image screenshots is P = (X_R − X_L), and the spatial coordinates of the point of interest are:
x = t × (X_L + T/2) ÷ [T − (X_R − X_L)] − t/2

y = Y_L ÷ (m × A) = Y_R ÷ (m × A)

z = (A × F × t) ÷ [T − (X_R − X_L)]
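The three coordinate formulas apply identically to both formats and can be sketched as (hypothetical function name; symbols as in the text, all lengths in consistent units):

```python
def point_from_parallax(XL, XR, YL, T, t, A, F, m):
    """Spatial coordinates of a point of interest from the abscissas of
    its left/right images (formulas as in the text):
      x = t*(XL + T/2) / [T - (XR - XL)] - t/2
      y = YL / (m*A)
      z = (A*F*t) / [T - (XR - XL)]"""
    d = T - (XR - XL)          # T minus the horizontal parallax P
    x = t * (XL + T / 2) / d - t / 2
    y = YL / (m * A)
    z = A * F * t / d
    return x, y, z
```

For zero parallax (X_L = X_R), z reduces to (A × F × t) ÷ T, the equivalent convergence depth.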
In the following description of the measurement processes and methods of a stereo image measurement instruction, only the positioning and measurement of the left and right images of a point of interest in a left-right format image screenshot is taken as an example. The positioning and measurement process and method for two independent left and right image screenshots are exactly the same.
The process by which a stereo image measurement instruction determines the spatial coordinates (x, y, z) of a point of interest from its left and right images is as follows. First, obtain a left-right format image screenshot containing the left and right images of the point of interest. Second, click with a stylus and determine the abscissa X_L of the left image of the point of interest in the left image screenshot. Third, when the left image of the point of interest lies on a reference object image with geometric features in the left image screenshot, such as a non-horizontal line, a curve, a geometric discontinuity on the object surface or another geometric feature, the right image of the point of interest is located, at abscissa X_R in the right image screenshot, at the intersection of the horizontal line through X_L crossing both screenshots and the reference object image with the same geometric feature in the right image screenshot. Click with the stylus and determine the abscissa X_R of the right image of the point of interest in the right image screenshot. Once the abscissas X_L and X_R of the left and right images of the point of interest in the left-right format image screenshot are located, the parallax of the two images of the point of interest is P = (X_R − X_L) and the spatial coordinates (x, y, z) are determined.
A stereo image measurement process starts with the following two steps. First, obtain a left-right format image screenshot from the image, containing one or more points of interest, surfaces of interest, volumes of interest, surface cracks or damaged surface relief on the surface of the object of interest. Second, select the purpose of the measurement from the menu (without limitation): point-camera, point-point, point-line, point-plane, surface area, volume, surface crack area, surface crack cross-section, surface damage parameters, surface damage area, surface damage cross-section and maximum depth.
Process and method for measuring the distance from a point of interest a to the camera lens: first, obtain a left-right format image screenshot from the image; second, select "point-camera" in the menu; third, click with the stylus and determine the abscissa X_La of the left image of point a in the left image screenshot, whereupon a horizontal line through X_La crossing the left and right image screenshots automatically appears on the screen; fourth, click on this horizontal line in the right image screenshot with the stylus and determine the abscissa X_Ra of the right image of point a. The distance from the point of interest a to the camera is:
D_c = √[xa² + ya² + (za − c)²]
where c is the distance from the center of the camera to the center of the outer surface of the objective lens.
Process and method for measuring the distance between two points of interest a and b: first, obtain a left-right format image screenshot from the image; second, select "point-point" in the menu; third, determine the abscissas X_La, X_Ra, X_Lb and X_Rb of the left and right images of the two points of interest a and b in the left and right image screenshots. The distance between the two points of interest a and b is:
D_ab = √[(xb − xa)² + (yb − ya)² + (zb − za)²]
Process and method for measuring the distance from a point of interest a to a line in space: first, obtain a left-right format image screenshot from the image; second, select "point-line" in the menu; third, determine the abscissas X_La and X_Ra of the left and right images of point a in the left and right image screenshots; fourth, determine the abscissas X_Lb, X_Rb, X_Lc and X_Rc of the left and right images of two feature points b and c on the straight line in space in the left and right image screenshots. The distance from the point of interest a to the straight line through the two feature points b and c is:
D_a-bc = √{[xa − λ(xc − xb) − xb]² + [ya − λ(yc − yb) − yb]² + [za − λ(zc − zb) − zb]²}
where λ = [(xa − xb) × (xc − xb) + (ya − yb) × (yc − yb) + (za − zb) × (zc − zb)] ÷ [(xc − xb)² + (yc − yb)² + (zc − zb)²]
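The point-to-line distance above amounts to projecting a onto the line through b and c and measuring the residual; a sketch (hypothetical function name, points as (x, y, z) tuples):

```python
import math

def point_line_distance(a, b, c):
    """Distance from point a to the line through b and c, via the
    projection parameter lambda = (a-b).(c-b) / |c-b|^2 and the foot
    of the perpendicular b + lambda*(c-b)."""
    lam_num = sum((ai - bi) * (ci - bi) for ai, bi, ci in zip(a, b, c))
    lam_den = sum((ci - bi) ** 2 for bi, ci in zip(b, c))
    lam = lam_num / lam_den
    return math.sqrt(sum((ai - bi - lam * (ci - bi)) ** 2
                         for ai, bi, ci in zip(a, b, c)))
```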
Process and method for measuring the distance from a point of interest a to a plane in space: first, obtain a left-right format image screenshot from the image; second, select "point-plane" in the menu; third, determine the abscissas X_La and X_Ra of the left and right images of point a in the left and right image screenshots; fourth, determine the abscissas X_Lb, X_Rb, X_Lc, X_Rc, X_Ld and X_Rd of the left and right images of three feature points b, c and d lying on the spatial plane but not on one straight line. The distance from the point of interest a to the plane containing the three feature points b, c and d is:
D_a-(bcd) = |A × xa + B × ya + C × za + D| ÷ √(A² + B² + C²)
where A, B and C are obtained from the following determinant, and D = −(A × xb + B × yb + C × zb):
| i         j         k       |
| xc − xb   yc − yb   zc − zb |
| xd − xb   yd − yb   zd − zb |

(A, B, C) is given by the expansion of this determinant along its first row, i.e. by the cross product of the vectors from b to c and from b to d.
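The plane normal (A, B, C) can be computed as the cross product of the vectors from b to c and from b to d; a sketch of the point-to-plane distance (hypothetical function name, points as (x, y, z) tuples):

```python
import math

def point_plane_distance(a, b, c, d):
    """Distance from point a to the plane through b, c and d. The normal
    (A, B, C) is the cross product (c-b) x (d-b); D = -(A*xb + B*yb + C*zb),
    so the plane is A*x + B*y + C*z + D = 0."""
    u = [c[i] - b[i] for i in range(3)]
    v = [d[i] - b[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    D = -sum(n[i] * b[i] for i in range(3))
    num = abs(sum(n[i] * a[i] for i in range(3)) + D)
    return num / math.sqrt(sum(ni * ni for ni in n))
```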
When a stylus, finger or mouse moves on the touch screen, there are three different paths from one pixel to the next adjacent pixel: along the horizontal direction, along the vertical direction, and along the hypotenuse of the triangle whose right-angle sides are the horizontal and vertical steps between the two adjacent pixels. A curve on the touch screen can therefore be approximated as a stitched curve composed of horizontal segments, vertical segments, and triangle hypotenuses between adjacent pixels. The greater the resolution (PPI) of the touch screen, the closer the actual length of a curve is to the length of its stitched curve; similarly, the closer the area enclosed by a closed-loop curve is to the sum of the areas of all pixel cells enclosed by the closed-loop stitched curve. With a the horizontal distance and b the vertical distance between two adjacent pixels, the sum of all pixel areas enclosed by a closed-loop stitched curve is Ω = Σ(a × b) + Σ(a × b) ÷ 2, where the second sum runs over the triangular half cells along the hypotenuse segments. The actual surface area of the object of interest is Q = Ω ÷ (m² × A × B).
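The pixel-cell approximation Ω = Σ(a × b) + Σ(a × b) ÷ 2 and the conversion to actual area Q = Ω ÷ (m² × A × B) can be sketched as (hypothetical function; the counts of full interior cells and triangular half cells are assumed to be given):

```python
def actual_surface_area(full_cells, half_cells, a, b, m, A, B):
    """Pixel-cell approximation of the area inside a closed stitched
    curve: Omega = full_cells*(a*b) + half_cells*(a*b)/2 (interior cells
    plus triangular half cells along diagonal edges); the actual surface
    area is Q = Omega / (m^2 * A * B)."""
    omega = full_cells * a * b + half_cells * a * b / 2
    return omega / (m ** 2 * A * B)
```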
The process and method for measuring the area of a surface of interest are as follows. First, obtain a left-right format image screenshot from an image. Second, select 'area' in the menu; the system automatically keeps one of the image screenshots and enlarges it to full screen. Third, draw a closed-loop stitched curve along the image edge of the surface of interest on the screen with the stylus; the image area enclosed by the closed-loop stitched curve is the area of the surface-of-interest image. The area of the surface of interest is the area of its image divided by (m² × A × B).
The area of the surface of interest in [0059] above is only the area of the actual surface of interest projected onto a plane perpendicular to the center line (Z axis) of the stereo camera. Fourth, return to the left-right format image screenshots. When the surface of interest is a plane, or a curved surface whose radius of curvature is much larger than the surface length, use the method in [0057] above to determine the left-image and right-image abscissas X_Lb, X_Rb, X_Lc, X_Rc, X_Ld and X_Rd of three non-collinear feature points b, c and d on the surface in the left and right image screenshots. The actual area of the surface of interest equals the area obtained by the method in [0059] above divided by the cosine of the angle between the normal vector N of the surface of interest and the center line (Z axis) of the stereo camera.
The process and method for measuring the volume of a plate of interest are as follows. First, obtain a left-right format image screenshot from an image. Second, select 'volume' in the menu. Third, obtain the actual area of the surface of the plate of interest according to [0060] above. Fourth, when the plate of interest is a plane, or a curved surface whose radius of curvature is much larger than the surface length, determine the left-image and right-image abscissas X_La, X_Ra, X_Lb and X_Rb of two feature points a and b of typical thickness on the plate in the left and right image screenshots. The thickness of the plate of interest equals the distance between the two feature points a and b multiplied by the cosine of the angle between the vector ab and the normal vector N of the plate surface. The actual volume of the plate of interest equals the actual surface area obtained in the third step multiplied by the thickness obtained in the fourth step.
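The area-times-projected-thickness rule above can be sketched as follows; the function and argument names are illustrative, and the surface area is assumed to have been obtained already by the area measurement step:

```python
import math

def plate_volume(surface_area, a, b, n):
    """Volume sketch: surface area times plate thickness, where the
    thickness is |ab| times the cosine of the angle between vector ab
    and the surface normal n (names are illustrative)."""
    ab = [b[i] - a[i] for i in range(3)]
    ab_len = math.sqrt(sum(v * v for v in ab))
    n_len = math.sqrt(sum(v * v for v in n))
    # Cosine of the angle between vector ab and the normal n
    cos_angle = abs(sum(ab[i] * n[i] for i in range(3))) / (ab_len * n_len)
    thickness = ab_len * cos_angle
    return surface_area * thickness

# Plate of area 2.0 with thickness points along the normal -> volume 1.0
print(plate_volume(2.0, (0, 0, 0), (0, 0, 0.5), (0, 0, 1)))
```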
The process and method for measuring the cross section of a crack on the surface of an object are as follows. First, adjust the position and direction of the center line of the stereo camera so that it is aligned with the longitudinal direction of the crack and parallel to the surface of the object; when a crack cross-section opening of interest with typical features is seen in the touch screen, capture a left-right format image screenshot. Second, in the left and right image screenshots, use the stylus to determine the left-image and right-image abscissas X_La, X_Ra, X_Lb and X_Rb of the two intersection points a and b where the crack cross-section opening meets the surface of the object. Third, select 'crack cross section' in the menu; the system automatically keeps one image screenshot and enlarges it to full screen. Use the stylus to determine the abscissas X_L1, X_L2, X_L3, … and X_R1, X_R2, X_R3, … of feature points with inflection points, turning points and peak points on the left and right edges of the crack cross-section opening. There is no correspondence between a feature point X_L# on the left edge and a feature point X_R# on the right edge of the crack opening. The abscissa of each feature point X_L# and X_R# lies on the same crack cross section as the two intersection points a and b: the parallax of every feature point on the left and right opening edges of the crack cross section is the same as the parallax of points a and b, or equivalently, the convergence depth coordinate Zc of points a and b is the same as the stereoscopic image depth coordinate Zc of all feature points on the left and right opening edges of the crack cross section.
The left edge of the crack cross-section opening is formed by line segments connecting, starting from point a, all mutually adjacent feature points X_L# on the left edge in sequence. The right edge of the crack cross-section opening is formed by line segments connecting, starting from point b, all mutually adjacent feature points X_R# on the right edge in sequence. Together, the left and right edges of the crack cross section form a V-shaped cross-section opening. The more feature points are selected, the closer the constructed edge of the crack cross section is to the actual edge. The perpendicular distance Y_L# between point a and each feature point X_L# on the left edge, the perpendicular distance Y_R# between point b and each feature point X_R# on the right edge, and the distance between points a and b (the crack cross-section width) are listed on the cross-sectional view.
The process and method for measuring the cross section and maximum depth of an irregularity on the surface of an object are as follows, described here only for the case of a recess caused by damage or corrosion. First, adjust the position and direction of the center line of the stereo camera so that it is parallel to the surface of the object; when a part of interest with typical features in the surface recess is seen in the screen, capture a left-right format image screenshot, keep one of the two screenshots and enlarge it to full screen. Second, in the left and right image screenshots, determine the left-image and right-image abscissas X_La, X_Ra, X_Lb and X_Rb of the two intersection points a and b where the surface of the object meets the edge of the damaged cross section. Third, select 'damaged cross section' in the menu and, in the next-level command of the menu, input the radius of curvature of the damaged surface as +R (convex) or −R (concave). A curve with radius of curvature R passing through points a and b will appear on the screen. If the radius of curvature of the damaged surface is not available, draw a stitched curve between the two intersection points a and b with the stylus; this stitched curve should link smoothly with the surface curve to the left of point a and the surface curve to the right of point b. Fourth, use the stylus to draw a stitched curve between the two intersection points a and b along the edge of the damaged part in the cross-sectional image. The closed-loop stitched curve of the damaged cross section consists of the curve with radius of curvature R between points a and b and this stitched curve.
Fifth, return to the left and right image screenshots, click on the stitched curve and determine the abscissas X_Lc and X_Rc of the lowest point c of the damaged section. The area of the damaged cross section of the object surface, the distance between points a and b, and the vertical distance Yc from the lowest point c of the cross section are listed on the cross-sectional view.
In actual measurement, when situations arise whose purpose and requirements differ from those of the basic measurement methods, different and reasonable measurement methods and solutions need to be devised case by case. Such a new measurement method may be a combination of the basic measurement methods described above, or an entirely new method.
A stereoscopic image positioning and tracking instruction, based on the equivalent convergence principle, first locates the position of the left (or right) image of a point or line of interest in the left (or right) image screenshot of a left-right format screenshot, or of two independent left and right screenshots, and then locates and tracks the position of the corresponding right (or left) image. A stereoscopic image positioning and tracking instruction comprises three different processes: image positioning, image matching and image tracking. First, the positioning process: a point or line of interest is surrounded by a rectangular box whose four sides are parallel to the two coordinate axes of the left and right image screenshots; the center of the rectangular box is its homonymous point. The positioning process determines the positions of the homonymous points of the rectangular box in the left and right image screenshots respectively. The box surrounding a point of interest is a square box, and the point of interest is also the homonymous point of that box. The box surrounding a line of interest is a rectangular box: its center is the midpoint, or homonymous point, of the line of interest, and one diagonal of the box is the line of interest itself. Second, the matching process searches, compares and matches the features and grey levels of the images, limited to the finite rectangular box, mainly by combining feature-matching instructions with a simplified grey-level matching.
The matched content comprises the relations of the left and right images to reference objects, corner points, edge lines and other geometric features, as well as the color features, surface textures, and the patterns and rules of color and texture variation inside the rectangular box. Third, the tracking process: once the left and right images of a point or line of interest are located, and whenever the point or line of interest moves to a new position, the system automatically tracks, at any time in the left and right image screenshots, the new position, coordinates and parallax of the homonymous points of the rectangular boxes surrounding the left and right images, and the distance to the stereo camera. The image of a point or line of interest may move either because the position of the point or line itself has changed, or because the position or angle of the stereo camera has changed.
Image positioning process for a point or line of interest: In the first step, for a point of interest a, click the screen with the stylus at the left image of point a. A square box surrounds the point of interest a; the center of the square box is the left image of point a, i.e. its homonymous point, with coordinates (X_La, Y_La). For a line of interest bc, slide the stylus along the screen from one end point b of the left image of line bc to the other end point c. A rectangular box surrounds the left image of the line of interest; the center of the box is the midpoint, or homonymous point, of the left image of line bc, and the left image of line bc is a diagonal of the box. The coordinates of the two end points b and c of the left image of line bc are (X_Lb, Y_Lb) and (X_Lc, Y_Lc). In the second step, the matching process begins by searching the right image screenshot for, and locating, the same features as those of the left image in the left image screenshot.
The homonymous points have the following characteristics in the left and right screenshots. First, if the left image of a point or line of interest lies on a reference object, corner point, edge line or other geometric feature in the left image screenshot, the homonymous point in the right image screenshot lies on the same geometric feature of the same reference object. Second, the homonymous points of a point or line of interest in the left and right image screenshots both lie on one horizontal line crossing the two screenshots. Third, the two end points b and c of a line of interest have equal ordinates, Y_Lb = Y_Lc. Fourth, the colors, surface textures, and patterns and rules of color and texture variation inside the rectangular boxes surrounding a point or line of interest are consistent between the two screenshots. Fifth, pattern and feature matching, i.e. the search, comparison and matching process, is limited to the finite rectangular box. After matching is completed, the coordinates of the homonymous points of the point of interest and of the line of interest in the right image screenshot are determined as (X_Ra, Y_Ra), (X_Rb, Y_Rb) and (X_Rc, Y_Rc), and the corresponding parallaxes of the homonymous points are (X_Ra − X_La) and (X_Rbc − X_Lbc).
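The constrained matching described above (same horizontal line, search limited to a finite box) can be illustrated with a minimal grey-level SSD search on synthetic images. This is a simplified stand-in for the chip's combined feature and grey-level matcher, not the patent's actual algorithm; all names and values are illustrative:

```python
def match_along_row(left, right, x_l, y, half=1, search=5):
    """Locate the homonymous point of (x_l, y) in the right image.
    The search stays on the same horizontal line and inside a limited
    window, per the constraints above; grey-level sum-of-squared-
    differences stands in for the full feature matching."""
    def ssd(x_r):
        s = 0
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                d = left[y + dy][x_l + dx] - right[y + dy][x_r + dx]
                s += d * d
        return s
    lo = max(half, x_l - search)
    hi = min(len(right[0]) - half, x_l + search + 1)
    return min(range(lo, hi), key=ssd)

# Tiny synthetic pair: the right image is the left shifted 2 px rightward
left = [[0] * 10 for _ in range(5)]
left[2][4] = 9                       # a bright feature at (x=4, y=2)
right = [[0] * 10 for _ in range(5)]
right[2][6] = 9                      # the same feature at (x=6, y=2)
x_r = match_along_row(left, right, x_l=4, y=2)
print(x_r, x_r - 4)                  # homonymous abscissa and parallax
```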
According to the requirement stated in [0046] above that the focal plane of the eyes coincide with the image plane of the stereoscopic image, A = [T ÷ (F × t)] × Z = k × Z, which gives a qualitative explanation of the visual effect of the same-screen principle. According to the process in [0049] above for obtaining the parallax of the two images of a point of interest, a quantitative calculation of the same-screen principle yields the required change of the screen magnification. When the stereoscopic depth Z of an object of interest in the real scene changes, a stereoscopic image positioning and tracking instruction automatically tracks the position change of the object of interest and substitutes the parallax change of the homonymous points into formula (1), giving ΔZ = (A × F × t) × {[T − (X_R2 − X_L2)]⁻¹ − [T − (X_R1 − X_L1)]⁻¹}. Substituting this ΔZ into formula (2) gives ΔA = [T ÷ (F × t)] × ΔZ = (A × T) × {[T − (X_R2 − X_L2)]⁻¹ − [T − (X_R1 − X_L1)]⁻¹}. In formula (1), the screen magnification A (= W ÷ w) is a constant. In formula (2), the screen magnification A derived from formula (1) is independent of ΔA. Enlarging or reducing the stereoscopic image in the stereoscopic playing screen according to the same-screen principle therefore has no effect on the position of a point of interest in the real world. This is the meaning of 'equivalent' in the equivalent variations, equivalent processes and equivalent results described in [0028] above. A stereoscopic image same-screen command causes the image played in the screen to vary, according to the ΔA obtained from the formula, in synchronization with the stereoscopic depth Z of the object of interest in the real scene.
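The ΔZ and ΔA relations can be checked numerically; every parameter value below (magnification, focal length, camera baseline, eye spacing, parallaxes) is an illustrative placeholder:

```python
# Same-screen sketch: when the tracked parallax of the object of interest
# changes from P1 to P2, the screen magnification must change by dA.
A, F, t, T = 2.0, 0.05, 0.06, 0.065  # magnification, focal length, baseline, eye spacing (assumed)
P1, P2 = 0.000, 0.005                # old / new parallax (X_R - X_L), assumed

bracket = 1 / (T - P2) - 1 / (T - P1)
dZ = (A * F * t) * bracket           # depth change of the object of interest
dA = (T / (F * t)) * dZ              # required magnification change, equals (A*T)*bracket
print(dZ, dA)
```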
At this time, the convergence points of the left and right images of the object of interest fall directly on the screen, and the focal plane of the eyes coincides with the image plane of the stereoscopic image. The distance between an object of interest and the stereo camera can be measured in real time either by an external laser or infrared rangefinder, or by a same-screen chip built into the image processor. Compared with a peripheral device, the same-screen chip is faster, more efficient, lower in delay, easier to operate, smaller in size, lower in cost and more user-friendly.
An equivalent convergence point reset instruction means that, during stereoscopic image playback, after an object in the screen is set as a new object of interest through its stereoscopic image, the equivalent convergence point of the stereo camera is reset onto the new object of interest. Changing the screen magnification A changes the position Zconv of the equivalent convergence point M of an object of interest according to the formula Zconv = A × F × t ÷ T described in [0047] above. In fact, an equivalent convergence point reset command combined with other commands solves three present application requirements and problems. The first application: a stereoscopic player can become a healthy stereoscopic player. The second application: the viewer can interact with the content being played in a stereoscopic player. The third application: when, during shooting, the subject captured by the lens of the stereo camera shifts from one object of interest to a new one, the equivalent convergence point of a core-shifting stereo camera needs to be transferred from the originally set object of interest to the new object of interest.
A healthy stereoscopic player is defined as a stereoscopic player in which the stereoscopic image convergence point of the object of interest in the image being played appears on the screen. First, by installing a same-screen chip in a stereoscopic player, most stereoscopic players can become healthy stereoscopic players. Second, with a same-screen chip installed in a stereoscopic player, the audience can experience brand-new intervention, interaction, immersion and participation with the content being played. First, the images of several different characters or objects of interest appearing in the screen are displayed surrounded by boxes, and the viewer uses the remote control to pick a new object or character of interest among them. The image of the new object of interest determined by the viewer on the screen is in fact a stereoscopic image in which the left and right images of the new object of interest converge. Then the same-screen chip obtains an image screenshot from the input left-right format image or two independent left and right images, determines, according to the processes and methods described in [0065] and [0066], the coordinates of the homonymous points of the boxes surrounding the left and right images of the new object of interest in the two screenshots, and thereby obtains the parallax P = (X_R − X_L) of the left and right homonymous points. Substituting into the formula Z = (A × F × t) ÷ [T − (X_R − X_L)] and the formula Z_C = Z_D × [T ÷ (A × F × t)] × (A × F × t) ÷ [T − (X_R − X_L)] = (Z_D × T) ÷ [T − (X_R − X_L)] yields Z_C. Setting the obtained Z_C as the position, or stereoscopic depth Zconv, of the equivalent convergence point M of the new object of interest determines the distance to be moved, h = (F × t) ÷ (2 × Zconv).
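The convergence-point reset arithmetic can be sketched as follows; every numeric value is an illustrative placeholder, and the formulas follow the Z_C and h expressions above:

```python
# Equivalent-convergence reset sketch (all values assumed for illustration):
# from the homonymous-point parallax of the new object of interest, obtain
# Z_C, set Zconv = Z_C, then derive the core-shift h.
F, t, T = 0.05, 0.06, 0.065  # focal length, camera baseline, eye spacing (assumed)
Z_D = 2.0                    # viewer-to-screen distance (assumed)
P = 0.005                    # parallax (X_R - X_L) of the homonymous points

Z_C = (Z_D * T) / (T - P)    # stereoscopic depth of the convergence point
Zconv = Z_C                  # reset the equivalent convergence point
h = (F * t) / (2 * Zconv)    # core-shift distance for the image sensor
print(Z_C, h)
```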
Summarizing the above process: First, the same-screen chip installed in the stereoscopic player acquires left-right format or two independent left and right image screenshots from the input stereoscopic image, positions, matches and tracks the left and right images of the newly determined object of interest, obtains the stereoscopic depth Z_C of its stereoscopic image convergence point, and sets Z_C = Zconv. Second, the core-shift amount of the object of interest is corrected by h. If the content being played comes from a stereo camera designed according to the equivalent convergence principle, the core shift h represents a correction of the core shift for the newly set object of interest. If the content comes from a stereo camera shot by the parallel method, the core shift h converts it into a stereo camera satisfying the equivalent convergence principle. If the content comes from a stereo camera using the convergence method, the focal plane of the eyes and the image plane of the stereoscopic image still do not coincide perfectly. Third, the same-screen chip positions, matches and tracks the left and right images of the new object of interest through the above process and method, including the position, coordinates and parallax of the homonymous points and the distance to the stereo camera, changes the screen magnification in real time, and keeps the stereoscopic image convergence point of the new object of interest on the stereoscopic player screen.
Most pre-made images downloaded from a database or the Internet emphasize universality and follow no unified standard, and the diversity of content sources makes conflicts possible between the equivalent convergence point M' of each stereoscopic pre-made image and the equivalent convergence point M of the superimposed stereoscopic image. Taking the stereoscopic image and convergence point obtained from a core-shifting stereo camera on the stereo glasses as the main stereoscopic image and main equivalent convergence point M, the difference in stereoscopic depth between the equivalent convergence point M' of a stereoscopic pre-made image and the main equivalent convergence point M determines the front-back positional relation of the main stereoscopic image and the pre-made image in the screen. When this relative position conflicts with, or does not conform to, everyday experience, the position of the equivalent convergence point M' of the stereoscopic pre-made image needs to be corrected. According to the process and method described in [0069], the position of the equivalent convergence point M' of the stereoscopic pre-made image can be reset. Because the main stereoscopic image and the stereoscopic pre-made image are two mutually independent images, enlarging or reducing the pre-made image has no influence on the main stereoscopic image. A reasonable relative position between the main equivalent convergence point M and the equivalent convergence point M' of the pre-made image feels more natural to the user, conforms to the habits and daily experience of observing the world with two eyes, and gives a better sense of reality and a more comfortable experience.
The basic measurement methods described above are inconvenient in use, lack efficiency, and make it hard to accurately determine the position of the right image of a point of interest in the right image screenshot. The same-screen chip simplifies the basic measurement process to one or two steps, so that the position of the right image of a point of interest in the right image screenshot can be located accurately, making real-time measurement on stereoscopic images simpler, more efficient, more user-friendly and more accurate. At the same time, 'straight line/diameter/height', 'graphic matching' and 'volume' are added to the menu.
The measurement process and method of the same-screen chip are as follows. First, the abscissa X_L of the left image of a point of interest in the left image screenshot of a left-right format screenshot is determined manually. The same-screen chip matches the left and right images of the point of interest using the features around the homonymous point, obtains the abscissa X_R of the homonymous point in the right image screenshot, and then calculates the parallax P = (X_R − X_L) of the point of interest and the measurement result.
Process and method for measuring the distance from a point of interest a to the camera lens: First, obtain a left-right format image screenshot from an image, keep one screenshot and enlarge it to full screen. Second, select 'point-camera' in the menu. Third, click with the stylus to determine the position of point a. The same-screen chip calculates the distance from the point of interest a to the midpoint of the line connecting the midpoints of the outer surfaces of the two camera objective lenses as:
Dc = √[x_a² + y_a² + (z_a − c)²]
Process and method for measuring the straight-line distance between two points of interest a and b: First, obtain a left-right format image screenshot from an image, keep one screenshot and enlarge it to full screen. Second, select 'straight line/diameter/height' in the menu. Third, click with the stylus to determine the position of point a and keep the stylus sliding on the screen to the position of point b. The same-screen chip calculates the distance between the two points of interest a and b as:
Dab = √[(x_b − x_a)² + (y_b − y_a)² + (z_b − z_a)²]
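Both the point-to-camera and two-point formulas reduce to Euclidean norms and can be sketched directly; the function names and test coordinates are illustrative:

```python
import math

def dist_point_camera(p, c=0.0):
    """Distance from point of interest p to the midpoint between the two
    objective lenses, with offset c along the Z axis (per Dc above)."""
    x, y, z = p
    return math.sqrt(x**2 + y**2 + (z - c)**2)

def dist_two_points(p, q):
    """Straight-line distance Dab between two points of interest."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(dist_point_camera((3.0, 0.0, 4.0)))     # -> 5.0
print(dist_two_points((0, 0, 0), (1, 2, 2)))  # -> 3.0
```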
Process and method for measuring the distance from a point of interest a to a spatial straight line: First, obtain a left-right format image screenshot from an image, keep one screenshot and enlarge it to full screen. Second, select 'point-line' in the menu. Third, click with the stylus to determine the position of point a. Fourth, click with the stylus to determine the position of point b on the straight line and keep the stylus sliding on the screen to the position of point c. The same-screen chip calculates the distance from the point of interest a to the straight line passing through the two feature points b and c as:
Da-bc = √{[x_a − λ(x_c − x_b) − x_b]² + [y_a − λ(y_c − y_b) − y_b]² + [z_a − λ(z_c − z_b) − z_b]²}
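A sketch of the point-to-line distance above. The parameter λ is computed here as the standard foot-of-perpendicular projection parameter, which the formula presupposes; the function name is illustrative:

```python
import math

def dist_point_line(a, b, c):
    """Distance from point a to the line through b and c (Da-bc above).
    lam is the projection parameter of a onto the direction bc."""
    bc = [c[i] - b[i] for i in range(3)]
    ba = [a[i] - b[i] for i in range(3)]
    # Foot-of-perpendicular parameter: lam = (ba . bc) / |bc|^2
    lam = sum(ba[i] * bc[i] for i in range(3)) / sum(v * v for v in bc)
    return math.sqrt(sum((ba[i] - lam * bc[i]) ** 2 for i in range(3)))

# Point (0, 3, 0) against the X axis -> distance 3.0
print(dist_point_line((0, 3, 0), (0, 0, 0), (1, 0, 0)))
```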
Process and method for measuring the distance from a point of interest a to a spatial plane: First, obtain a left-right format image screenshot from an image, keep one screenshot and enlarge it to full screen. Second, select 'point-plane' in the menu. Third, click with the stylus to determine the position of point a. Fourth, click with the stylus to determine the position of point b and keep the stylus sliding on the screen to the positions of points c and d, where b, c and d are three points not all on one straight line. The same-screen chip calculates the distance from the point of interest a to the plane containing the three non-collinear feature points b, c and d as:
D_a-(bcd) = |A×x_a + B×y_a + C×z_a + D| ÷ √(A² + B² + C²)
A same-screen chip can be applied not only to a core-shifting stereo camera but also to any stereo camera composed of two mutually independent, identical cameras whose center lines are arranged parallel to each other, and it gives the stereoscopic images acquired by such a camera the same stereoscopic effect as images obtained with the equivalent convergence method.
The stereoscopic glasses and devices provided by the invention not only solve the problems of today's mainstream AR and MR glasses and stereoscopic players, but also offer a highly integrated structural design and an intelligent, user-friendly operating method. They are simple to operate, efficient, high in image fidelity, low in delay, low in cost and easy to popularize.
Drawings
FIG. 1 is a schematic view of a pair of small-pitch stereoscopic eyewear and a system thereof;
FIG. 2 is a schematic view of an orthogonal view-spacing stereoscopic eyewear and system;
FIG. 3-1 is a schematic front view of an ophthalmic lens of a conventional lens;
FIG. 3-2 is a schematic cross-sectional view A of an ophthalmic lens of a conventional lens;
FIG. 3-3 is a schematic cross-sectional view B of an ophthalmic lens of a conventional lens;
FIG. 4-1 is a schematic front view of an ophthalmic lens for a vision correcting lens;
FIG. 4-2 is a schematic cross-sectional view A of an ophthalmic lens of a vision correcting lens;
FIG. 4-3 is a schematic cross-sectional view B of an ophthalmic lens of a vision correcting lens;
FIG. 5-1 is a schematic front view of an ophthalmic lens of a separate corrective vision lens;
FIG. 5-2 is a schematic cross-sectional view A of an ophthalmic lens of a separate corrective vision lens;
FIG. 5-3 is a schematic cross-sectional view B of an ophthalmic lens of a separate corrective vision lens;
FIG. 6-1 is a schematic front view of a screen module of a curved screen;
FIG. 6-2 is a schematic cross-sectional view of a screen module of a curved screen;
FIG. 6-3 is a schematic cross-sectional view B of a screen module of a curved screen;
FIG. 7-1 is a schematic front view of a flat screen module;
FIG. 7-2 is a schematic cross-sectional view A of a screen module of a flat screen;
FIG. 7-3 is a schematic view of a screen module cross-section B of a flat screen;
FIG. 8 is a schematic view of a pair of small-pitch stereoscopic eyewear;
FIG. 9 is a schematic view of an orthogonal view-spacing pair of stereoscopic glasses;
FIG. 10-1 is a schematic view of a stereoscopic image acquisition space;
FIG. 10-2 is a schematic view of a playing space of a stereoscopic image;
FIG. 11-1 is a schematic view of the image sensor in position relative to the minimum imaging circle before core shifting;
FIG. 11-2 is a schematic diagram showing the relative positions of the image sensor and the minimum imaging circle after core shifting;
FIG. 12-1 is a schematic view of a convergent stereoscopic image capturing principle;
FIG. 12-2 is a schematic view of a parallel method stereo image capture principle;
FIG. 12-3 is a schematic view of the principle of equivalent convergence shooting of stereoscopic images;
FIG. 13 is a schematic view of the principle of parallax error of the moving-core equivalent convergence method;
FIG. 14-1 is a schematic diagram with the image plane on the screen;
FIG. 14-2 is a schematic view with the image plane in front of the focal plane;
FIG. 14-3 is a schematic diagram in which the image plane is located behind the focal plane;
FIG. 14-4 is a schematic view of the principle that the image plane and the focal plane are on the same screen;
FIG. 15 is a schematic diagram of the positions of the left and right images of a point of interest in a left and right format screenshot;
FIG. 16 is a schematic diagram of the principle of parallax between the coordinates of any point in space and the image sensor after the axes are shifted;
FIG. 17 is a schematic view of measuring the distance of a point of interest from a stereo camera;
FIG. 18 is a schematic diagram of measuring the distance between two points of interest;
FIG. 19 is a graph illustrating the measurement of the distance of a point of interest from a line;
FIG. 20 is a schematic diagram of measuring the distance of a point of interest to a plane;
FIG. 21 is a schematic view of measuring the surface area of a planar object;
FIG. 22 is a schematic view of measuring the volume of a flat object;
FIG. 23-1 is a schematic cross-sectional view of a surface crack taken;
FIG. 23-2 is a schematic cross-sectional view of a surface crack measured;
FIG. 24-1 is a schematic cross-sectional view of a surface-damaged depression taken.
FIG. 24-2 is a schematic cross-sectional view of a surface damage recess measured.
Detailed Description
The embodiments described below are merely examples of carrying out the present invention and correspond to the specific items in the claims and the summary of the invention. The present invention is not limited to these embodiments and can be embodied in various other forms without departing from its scope. The illustrations in all the figures are examples of embodiments of the described solution.
Fig. 1 is a schematic view of small-pitch stereoscopic glasses and their system. In the figure, a pair of small-pitch stereo glasses is composed of an eyeglass frame 1, left and right glasses legs 2, left and right eyeglass lenses 3, and a stereo camera; the left and right cameras 4 of the small-pitch stereo camera are arranged in the middle of the eyeglass frame 1. The left and right glasses legs 2 are provided with left and right wireless modules 5 and left and right jacks 6 and 8, respectively. An image processor 10 is connected to the jack 6 via a data line and a plug 7. An external battery is connected to the jack 8 via a power cord and a plug 9 and supplies power to the stereo glasses. In the figure, the image processor 10 is integrated with the external battery.
Fig. 2 is a schematic view of orthogonal view-spacing stereoscopic glasses and their system. In the figure, a pair of orthogonal view-spacing stereo glasses is composed of an eyeglass frame 1, left and right glasses legs 2, left and right eyeglass lenses 3, and a stereo camera. The left and right cameras 4 of the orthogonal view-spacing stereo camera are arranged on the left and right sides of the eyeglass frame 1, respectively, with their center lines parallel to each other. The left and right glasses legs 2 are provided with left and right wireless modules 5 and left and right jacks 6 and 8, respectively. An image processor 10 is connected to the jack 6 via a data line and a plug 7. An external battery is connected to the jack 8 via a power cord and a plug 9 and supplies power to the stereo glasses. In the figure, the image processor 10 is integrated with the external battery.
Fig. 3-1 is a front view schematically showing an eyeglass lens of a conventional lens. The spectacle lens 3 shown in the figure is a conventional lens of conventional design and is a complete conventional lens. The upper part of the spectacle lens 3 is an eye lens 12 and the lower part is a screen lens 13. A screen module 14 is fixed to the inner surface of the screen lens 13. C is the lens center of the ophthalmic lens 12.
Fig. 3-2 shows a cross-section a of an ophthalmic lens of a conventional lens. Shown is a view of a cross section a-a of the spectacle lens 3.
Fig. 3-3 shows cross-section B of an eyeglass lens with a common eye lens. The eyes see the front real scene through the eye lens 12, and by rotating downward the eyeball sees the content played in the screen module 14 through the screen module lens group 17 along the direction of the screen module center line 16. A straight line 15 passes through the center of the pupil of the eye and the lens center C of the eyeglass lens. The included angle α between the straight line 15 and the screen module center line 16 is the field-of-view conversion angle of the stereo glasses.
Figure 4-1 is a schematic front view of an eyeglass lens with a vision correction lens. The eyeglass lens 3 shown in the figure is a complete common lens of conventional design. The upper part of the eyeglass lens 3 is an eye lens 12 and the lower part is a screen lens 13. The shape and radius of curvature of the inner surface of the eye lens 12 are the same as those of the back surface of a vision correction lens 18. The vision correction lens 18 is bonded along its back surface to the inner surface of the eye lens 12.
Figure 4-2 shows a schematic cross-section a of an ophthalmic lens of a vision correcting lens. Shown is a view of a section a-a of the spectacle lens 3.
Fig. 4-3 shows cross-section B of an eyeglass lens with a vision correction lens. The eyes see the front real scene through the vision correction lens 18 and the common eye lens 12, and by rotating downward the eyeball sees the content played in the screen module 14 through the screen module lens group 17 along the direction of the screen module center line 16. A straight line 15 passes through the center of the pupil of the eye and the lens center C of the eyeglass lens. The included angle α between the straight line 15 and the screen module center line 16 is the field-of-view conversion angle of the stereo glasses.
Figure 5-1 is a schematic front view of a split vision-correcting eyeglass lens. The eyeglass lens 3 shown in the figure consists of a vision correction lens 18 and a screen lens 13; the vision correction lens 18 is bonded along its lower edge to the upper edge of the screen lens 13.
Figure 5-2 shows a cross-section a of a split vision correcting ophthalmic lens. Shown is a view of a section a-a of the spectacle lens 3.
Fig. 5-3 shows cross-section B of a split vision-correcting eyeglass lens. The eyes see the front real scene through the vision correction lens 18, and by rotating downward the eyeball sees the content played in the screen module 14 through the screen module lens group 17 along the direction of the screen module center line 16. A straight line 15 passes through the center of the pupil of the eye and the lens center C of the eyeglass lens. The included angle α between the straight line 15 and the screen module center line 16 is the field-of-view conversion angle of the stereo glasses.
FIG. 6-1 is a schematic front view of a screen module of a curved screen. A screen module 14 and a screen lens module 17 are shown.
FIG. 6-2 is a cross-sectional view A of a curved screen module. A screen module 14 is shown comprised of a base 19, a curved screen 20, a module housing 21 and screen module lens assembly 17.
Fig. 6-3 is a schematic cross-sectional view B of a screen module of a curved screen. The eyes see the contents played in the curved screen 20 through the screen module lens group 17 along the direction of the screen module center line 16. The screen center line 22 passes through the center of the curved screen 20 and is perpendicular to a tangent plane passing through the center. In the figure, the screen center line 22 coincides with the screen module center line 16.
FIG. 7-1 is a schematic front view of a flat screen module. A screen module 14 and a screen lens module 17 are shown.
FIG. 7-2 is a cross-sectional view A of a flat screen. A screen module 14 is shown comprised of a base 19, a flat screen 20, a module housing 21 and screen module lens assembly 17.
Fig. 7-3 is a schematic diagram of a screen module cross-section B of a flat screen. The eyes see the contents played in the flat screen 20 through the screen module lens group 17 in the direction along the screen module center line 16. The screen centerline 22 passes through the center of the planar screen 20 and is perpendicular to the planar screen 20. In the figure, the angle θ between the screen center line 22 and the screen module center line 16 is the screen tilt angle.
Fig. 8 is a schematic view of a pair of small-pitch stereoscopic glasses. In the figure, the left and right cameras 4 of the small-pitch stereo camera on the small-pitch stereo glasses are arranged in the middle of the eyeglass frame 1, with the center lines of the two cameras 4 parallel to each other. Shown is a pair of stereo glasses of conventional design with two common left and right eyeglass lenses 3. Left and right screen modules 14 are fixed on the inner surfaces of the left and right screen lenses 13, respectively. The pupillary distance Teye of the user's two eyes is equal to the distance Tlens between the centers of the left and right eyeglass lenses. The left and right glasses legs 2 are provided with left and right wireless modules 5, respectively.
Fig. 9 is a schematic view of orthogonal view-spacing stereoscopic glasses. In the figure, the left and right cameras 4 of the orthogonal view-spacing stereo camera on the orthogonal view-spacing stereo glasses are arranged on the left and right sides of the eyeglass frame 1, respectively, in front of the hinge connections of the left and right glasses legs 2. Shown is a pair of stereo glasses of conventional design. Two screen modules 14 are fixed on the inner surfaces of the left and right screen lenses 13, respectively. The pupillary distance Teye of the user's two eyes is equal to the distance Tlens between the centers of the left and right eyeglass lenses. The left and right glasses legs 2 are provided with left and right wireless modules 5, respectively.
FIG. 10-1 is a schematic view of the stereoscopic image acquisition space. In the figure, the left and right cameras 23 and 24 are simultaneously rotated inward about the centers of their camera lenses until the center lines of the two cameras 23 and 24 converge on an object of interest 27 in the real scene. This method of capturing a stereoscopic image is called the convergence method. The scene in front of the object of interest 27 is referred to as the foreground 28, and the scene behind it as the rear scene 29.
Fig. 10-2 is a schematic view of the stereoscopic image playback space. In the figure, the left and right images 33 and 34 captured by the left and right cameras 23 and 24 are projected simultaneously onto a flat screen 32 of width W; the horizontal distance between the projections of the left and right images 33 and 34 on the screen is the parallax P of the two images. When the left eye 30 and the right eye 31 can each see only the projection of the left image 33 and the right image 34 on the screen 32, respectively, the brain fuses the two projections and perceives stereoscopic images 35, 36 and 37 corresponding to the object of interest 27, the foreground 28 and the rear scene 29.
The following relationship is obtained from the geometric relationship shown in FIG. 10-2:

ZC = ZD × T ÷ (T - P) (1)

where:
ZC is the distance from the midpoint of the line connecting the two eyes to the convergence point of the left and right images 33 and 34 on the screen;
ZD is the distance from the midpoint of the line connecting the two eyes to the screen;
T is the distance between the two eyes;
P is the parallax, i.e. the horizontal distance between the projections of the left and right images 33 and 34 on the screen.

ΔP = Pmax - Pmin = T × ZD × (1/Zcnear - 1/Zcfar) (2)

where:
Pmax is the maximum parallax of the left and right images 33 and 34 on the screen;
Pmin is the minimum parallax of the left and right images 33 and 34 on the screen;
Zcnear is the distance from the eyes to the nearest convergence point of the left and right images 33 and 34 (P < 0, negative parallax, viewer space);
Zcfar is the distance from the eyes to the farthest convergence point of the left and right images 33 and 34 (P > 0, positive parallax, screen space).

Define: Prel = ΔP ÷ W

where:
Prel is the parallax variation per unit width of the flat screen;
W is the horizontal length of the flat screen.
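Equations (1) and (2) can be sketched numerically. The following is a minimal illustration in Python; the function names and the sample values (65 mm interocular distance, screen at 0.5 m) are illustrative choices of mine, not taken from the patent.

```python
def convergence_depth(z_d, t_eye, p):
    """Equation (1): depth ZC of the fused stereoscopic image point.

    z_d   -- distance ZD from the eye midpoint to the screen
    t_eye -- interocular distance T
    p     -- on-screen parallax P (p < 0: in front of screen, p > 0: behind it)
    """
    return z_d * t_eye / (t_eye - p)

def parallax_range(t_eye, z_d, z_cnear, z_cfar):
    """Equation (2): total parallax variation dP = Pmax - Pmin."""
    return t_eye * z_d * (1.0 / z_cnear - 1.0 / z_cfar)

# Illustrative values in metres.
z_d, t_eye = 0.5, 0.065
print(convergence_depth(z_d, t_eye, 0.0))    # 0.5: zero parallax puts the image on the screen
print(convergence_depth(z_d, t_eye, -0.01))  # negative parallax pulls the image in front of the screen
print(parallax_range(t_eye, z_d, 0.4, 1.0))  # 0.04875
```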
FIG. 11-1 is a schematic diagram showing the relative position of the image sensor to the minimum imaging circle before core shifting. In the figure, an image sensor 25 is fully covered by an imaging circle of radius r. The center of the image sensor 25 coincides with the center of the imaging circle. The image sensor 25 has a horizontal length w and a vertical height v.
Fig. 11-2 is a schematic diagram showing the relative positions of the image sensor and the minimum imaging circle after core shifting. During core shifting, the image sensor 25 is translated horizontally to the left by a distance h while the imaging circle remains stationary. After the shift, the distance between the center of the image sensor 25 at its new position and the center of the imaging circle is h. The minimum diameter of the imaging circle is:

Dmin = 2R = 2 × √[(w/2 + h)² + (v/2)²]
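The minimum imaging-circle formula can be checked with a short sketch. The sensor dimensions below (a 6.17 × 4.55 mm sensor shifted by 0.3 mm) are illustrative values of mine, not from the patent.

```python
import math

def min_imaging_circle_diameter(w, v, h):
    """Minimum imaging-circle diameter once the sensor is shifted by h:
    Dmin = 2 * sqrt((w/2 + h)**2 + (v/2)**2)."""
    return 2.0 * math.sqrt((w / 2.0 + h) ** 2 + (v / 2.0) ** 2)

# Illustrative sensor, dimensions in millimetres.
print(min_imaging_circle_diameter(6.17, 4.55, 0.0))  # no shift: the usual sensor diagonal
print(min_imaging_circle_diameter(6.17, 4.55, 0.3))  # the shift enlarges the required circle
```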
FIG. 12-1 is a schematic view illustrating the principle of the convergent stereoscopic image capturing. In the figure, when the left and right cameras 23 and 24 capture an object of interest 38 located on the center line of the stereo camera by the convergence method, the object of interest 38 is imaged at the centers of the left and right image sensors 25 and 26.
Fig. 12-2 is a schematic view illustrating the principle of parallel stereoscopic image capturing. In the figure, when the left and right cameras 23 and 24 photograph an object of interest 38 located on the center line of the stereo camera by the parallel method, the images of the object of interest 38 on the left and right image sensors 25 and 26 are both shifted from the centers of the two image sensors 25 and 26.
Fig. 12-3 is a schematic view illustrating the principle of the equivalent convergence method for stereoscopic images. In the figure, the two cameras 23 and 24 are parallel to each other and capture an object of interest 38 located on the centerline of the stereo camera. Before shooting, the left and right two image sensors 25 and 26 are respectively shifted in the horizontal direction by a distance h toward the opposite directions to each other. An object of interest 38 located on the centerline of the stereo camera is imaged in the center of the image sensors 25 and 26.
Fig. 13 is a schematic diagram of the moving-core equivalent convergence method and the principle of parallax. In the figure, two left and right cameras 23 and 24 capture a point of interest 27 in space.
From the geometric relationship shown in FIG. 13, the following relationship is obtained:

d = t × F × (1/ZC - 1/Z) = 2h - (t × F) ÷ Z (3)

where:
d is the parallax of a point 27 in space on the left and right image sensors;
h is the horizontal translation distance of each image sensor;
t is the distance between the center lines of the two cameras 23 and 24, i.e. the camera spacing;
F is the equivalent focal length of the camera lens;
Z is the stereoscopic depth of the point 27 in space;
ZC is the stereoscopic depth of the convergence point of the left and right images 33 and 34.

The following is derived from equation (3):

Δd = dmax - dmin = t × F × (1/Znear - 1/Zfar) (4)

where:
dmax is the maximum parallax of the two images 33 and 34 on the left and right image sensors;
dmin is the minimum parallax of the two images 33 and 34 on the left and right image sensors;
Znear is the stereoscopic depth of the foreground 28 in space;
Zfar is the stereoscopic depth of the rear scene 29 in space.

Define: drel = Δd ÷ w

where:
drel is the parallax variation per unit width of the image sensor;
w is the horizontal length of the effective imaging surface of the image sensor.

Setting Prel = drel, we obtain:

t = [(ZD ÷ (A × F)) × (1/Zcnear - 1/Zcfar) ÷ (1/Znear - 1/Zfar)] × T (5)

where A is the screen magnification, A = W ÷ w.

Equation (5) shows that the spacing t of the two cameras and the distance T between a person's eyes are, in general, not equal.
Letting P = A × d and substituting equation (3) into equation (1):

ZC = (ZD × T) ÷ (T - P) = (ZD × T) ÷ (T - A × d)
   = (ZD × T × Z) ÷ [A × t × F - (2A × h - T) × Z] (6)

Equation (6) shows that ZC and Z are, in general, not linearly related. Ideal imaging means that any point, straight line and plane in the stereoscopic image acquisition space corresponds to a unique point, straight line and plane in the stereoscopic image playback space. The necessary and sufficient condition for ideal imaging is that the stereoscopic depth Z of an object of interest in the real scene and the stereoscopic depth ZC of the convergence point of its stereoscopic image are linearly related. From equation (6), the necessary and sufficient condition for ZC to be linear in Z is

2A × h - T = 0, i.e. h = T ÷ (2A)

Under this condition, equation (6) simplifies to the linear form

ZC = ZD × [T ÷ (A × F × t)] × Z (7)

Equation (7) shows that the stereoscopic depth of an object of interest in the real scene and the stereoscopic depth of the convergence point of its two images are linearly related.
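The reduction of equation (6) to equation (7) under the core-shift condition h = T ÷ (2A) can be verified numerically. This is a sketch with hypothetical parameter values of mine; only the formulas themselves come from the derivation above.

```python
def zc_general(z, z_d, t_eye, a, t, f, h):
    """Equation (6): playback depth ZC as a function of scene depth Z."""
    return (z_d * t_eye * z) / (a * t * f - (2 * a * h - t_eye) * z)

def zc_linear(z, z_d, t_eye, a, t, f):
    """Equation (7): the linearized relation, valid when h = T / (2A)."""
    return z_d * (t_eye / (a * f * t)) * z

# Illustrative parameters (hypothetical units).
z_d, t_eye, a, t, f = 0.5, 0.065, 50.0, 0.01, 0.004
h = t_eye / (2 * a)  # the core shift that makes the depth mapping linear
for z in (0.5, 1.0, 2.0):
    # With h = T/(2A) the general and linear forms agree at every depth.
    assert abs(zc_general(z, z_d, t_eye, a, t, f, h) - zc_linear(z, z_d, t_eye, a, t, f)) < 1e-12
```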
Fig. 14-1 is a schematic diagram showing the image plane on the screen. In the figure, when the projections of the left and right two images 33 and 34 are superimposed on the screen, the parallax P of the left and right two images 33 and 34 is 0, and one stereoscopic image 35 after brain fusion appears on the screen 32.
Fig. 14-2 is a schematic view showing the image plane positioned in front of the screen. In the figure, when the positions of the left and right images 33 and 34 projected on the screen 32 are oppositely crossed, the parallax P <0 of the left and right images 33 and 34, and the convergence point of a brain-merged stereoscopic image 36 appears between the screen and the viewer.
Fig. 14-3 is a schematic view showing the image plane located behind the screen. In the figure, when the positions of the left and right images 33 and 34 projected on the screen 32 intersect in the forward direction, the parallax P of the left and right images 33 and 34 is >0, and the convergence point of one stereoscopic image 37 after brain fusion appears in the rear of the screen.
Fig. 14-4 is a schematic diagram illustrating the principle that the image plane and the focal plane are on the same screen. In the figure, by changing the screen magnification a, the positions of the left and right images 33 and 34 projected on the screen 32 are always kept coincident. The position of the convergence point of one stereoscopic image 35, 36 and 37 after brain fusion is always maintained on the screen 32.
Fig. 15 shows the positions of the left and right images of a point of interest in a left-right format screenshot. In the figure, the abscissa of the left image 41 of a point of interest a in the left image screenshot 39 of a left-right format screenshot is XL; by the sign convention, XL < 0. The abscissa of the right image 42 of the point of interest a in the right image screenshot 40 is XR, with XR > 0. The left image 41 in the left image screenshot 39 and the right image 42 in the right image screenshot 40 lie on the same horizontal line 43 across the screen, and the ordinate YL of the left image 41 in the left image screenshot 39 equals the ordinate YR of the right image 42 in the right image screenshot 40. The parallax between the left image 41 and the right image 42 of the point of interest a is P = XR - XL.
For a core-shifted left-right format image and a conventional left-right format image, the parallax of the left and right images of a point of interest a in the left and right image screenshots 39 and 40 is P = XR - XL. Substituting into equation (1):

ZC = ZD × T ÷ (T - P) = (ZD × T) ÷ [T - (XR - XL)] (8a)

Substituting equation (7) into equation (8a) and simplifying:

Z = (A × F × t) ÷ [T - (XR - XL)] (9a)

For two independent core-shifted images and two conventional independent images, the left and right image screenshots are two independent screenshots. The parallax of the left and right images of a point of interest a in the two independent screenshots is P = XR - XL. Substituting into equation (1):

ZC = ZD × T ÷ (T - P) = (ZD × T) ÷ [T - (XR - XL)] (8b)

Substituting equation (7) into equation (8b) and simplifying:

Z = (A × F × t) ÷ [T - (XR - XL)] (9b)
Fig. 16 is a schematic diagram of the relationship between the coordinates of a point in space and its parallax on the image sensors after core shifting. From the geometric relationship shown in Fig. 16, the following relationships are obtained:

d1 + h = F × (x + t/2) ÷ Z;  d2 - h = F × (x - t/2) ÷ Z

Solving for the coordinate x:

x = [(d1 + h) × Z ÷ F] - t/2 (10)

For a core-shifted left-right format image and a conventional left-right format image, substituting d1 = XL ÷ A, h = T ÷ (2A) and equation (9a) into equation (10) and simplifying:

x = t × (XL + T/2) ÷ [T - (XR - XL)] - t/2 (11a)

The spatial coordinates a(x, y, z) of the point of interest a are:

x = t × (XL + T/2) ÷ [T - (XR - XL)] - t/2
y = YL ÷ (m × A) = YR ÷ (m × A)
z = (A × F × t) ÷ [T - (XR - XL)]

For two independent core-shifted images and two conventional independent images, substituting d1 = XL ÷ A, h = T ÷ (2A) and equation (9b) into equation (10) and simplifying:

x = t × (XL + T/2) ÷ [T - (XR - XL)] - t/2 (11b)

The spatial coordinates a(x, y, z) of the point of interest a are:

x = t × (XL + T/2) ÷ [T - (XR - XL)] - t/2
y = YL ÷ (m × A) = YR ÷ (m × A)
z = (A × F × t) ÷ [T - (XR - XL)]
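The coordinate formulas above translate directly into code. The following sketch assumes the screenshot abscissas XL, XR and ordinate YL are already known; the function name and all numeric inputs are illustrative, not values from the patent.

```python
def point_coordinates(xl, xr, yl, t, t_eye, a, f, m):
    """Spatial coordinates a(x, y, z) of a point of interest from its
    screenshot coordinates (equations 11a, the y relation, and 9a).

    xl, xr -- abscissas XL, XR of the left and right images
    yl     -- ordinate YL of the left image
    t      -- camera spacing; t_eye -- interocular distance T
    a      -- screen magnification A; f -- focal length F; m -- lens magnification
    """
    denom = t_eye - (xr - xl)
    x = t * (xl + t_eye / 2.0) / denom - t / 2.0
    y = yl / (m * a)
    z = (a * f * t) / denom
    return x, y, z

# Illustrative call: T = 65 mm, A = 50, F = 4 mm, t = 10 mm, m = 1 (units are metres).
x, y, z = point_coordinates(xl=-0.004, xr=0.003, yl=0.002,
                            t=0.01, t_eye=0.065, a=50.0, f=0.004, m=1.0)
print(x, y, z)
```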
FIG. 17 illustrates measuring the distance from a point of interest to the stereo camera. According to the process and method described in [0110] above, determine the abscissas XLa and XRa of the left and right images 41 and 42 of a point of interest a in the left and right image screenshots 39 and 40, respectively. The distance from the point of interest a to the midpoint of the line connecting the centers of the outer surfaces of the objective lenses of the stereo cameras 23 and 24 is:

Dc = √[xa² + ya² + (za - c)²]

where c is the distance between the center of the lens group of camera 23 or 24 and the center of the outer surface of its objective lens.
FIG. 18 illustrates measuring the distance between two points of interest. According to the process and method described in [0110] above, determine the abscissas XLa, XRa, XLb and XRb of the left and right images 41 and 42 of the two points of interest a and b in the left and right image screenshots 39 and 40, respectively. The distance between the two points of interest a and b is:

Dab = √[(xb - xa)² + (yb - ya)² + (zb - za)²]
FIG. 19 illustrates measuring the distance from a point of interest to a straight line passing through two feature points. First, according to the process and method described in [0110] above, determine the abscissas XLa and XRa of the left and right images 41 and 42 of a point of interest a in the left and right image screenshots 39 and 40, respectively. Second, determine the abscissas XLb, XRb, XLc and XRc of the left and right images 41 and 42 of two feature points b and c on the straight line in the left and right image screenshots 39 and 40, respectively. The distance from the point of interest a to the straight line through the two feature points b and c is:

Da-bc = √{[xa - λ(xc - xb) - xb]² + [ya - λ(yc - yb) - yb]² + [za - λ(zc - zb) - zb]²}

where λ = [(xa - xb) × (xc - xb) + (ya - yb) × (yc - yb) + (za - zb) × (zc - zb)] ÷ [(xc - xb)² + (yc - yb)² + (zc - zb)²]
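The point-to-line distance is the length of the component of (a - b) perpendicular to (c - b), with λ the projection coefficient. A minimal sketch (function name and test points are illustrative):

```python
import math

def dist_point_to_line(a, b, c):
    """Distance from point a to the straight line through points b and c.
    lam projects (a - b) onto (c - b); the foot of the perpendicular is
    b + lam * (c - b)."""
    bc = [c[i] - b[i] for i in range(3)]
    ab = [a[i] - b[i] for i in range(3)]
    lam = sum(ab[i] * bc[i] for i in range(3)) / sum(bc[i] ** 2 for i in range(3))
    return math.sqrt(sum((ab[i] - lam * bc[i]) ** 2 for i in range(3)))

# Sanity check: the point (0, 1, 0) lies 1 unit from the x axis.
print(dist_point_to_line((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 1.0
```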
FIG. 20 illustrates measuring the distance from a point of interest to a plane. First, according to the process and method described in [0110] above, determine the abscissas XLa and XRa of the left and right images 41 and 42 of a point of interest a in the left and right image screenshots 39 and 40, respectively. Second, determine the abscissas XLb, XRb, XLc, XRc, XLd and XRd of the left and right images 41 and 42, in the left and right image screenshots 39 and 40 respectively, of three feature points b, c and d on a plane 44 that do not all lie on the same straight line. The distance from the point of interest a to the plane 44 containing the three feature points b, c and d is:

Da-(bcd) = |A×xa + B×ya + C×za + D| ÷ √(A² + B² + C²)

where D = -(A×xb + B×yb + C×zb), and A, B and C are obtained from the determinant

| i j k |
| xc-xb yc-yb zc-zb |
| xd-xb yd-yb zd-zb |
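The point-to-plane distance can be sketched with the standard cross-product construction of the plane normal (A, B, C). The function name and test geometry are illustrative choices of mine:

```python
import math

def dist_point_to_plane(a, b, c, d):
    """Distance from point a to the plane through points b, c and d.
    (A, B, C) is the cross product (c - b) x (d - b); D = -(A*xb + B*yb + C*zb)."""
    u = [c[i] - b[i] for i in range(3)]
    v = [d[i] - b[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    big_d = -sum(n[i] * b[i] for i in range(3))
    num = abs(sum(n[i] * a[i] for i in range(3)) + big_d)
    return num / math.sqrt(sum(ni ** 2 for ni in n))

# Sanity check: (0, 0, 2) is 2 units above the z = 0 plane.
print(dist_point_to_plane((0.0, 0.0, 2.0),
                          (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # 2.0
```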
FIG. 21 illustrates measuring the surface area of a planar object. Method and steps for measuring the surface area of a plane of interest 46 enclosed by a closed-loop curve 45: First, according to the processes and methods described in [0059] and [0060] above, draw on the touch screen with a stylus a closed-loop curve 45 enclosing the surface of the plane of interest 46, and obtain the area enclosed by the curve 45. Second, according to the process and method described in [0057] above, determine the abscissas XLb, XRb, XLc, XRc, XLd and XRd of the left and right images 41 and 42, in the left and right image screenshots 39 and 40 respectively, of three feature points b, c and d on the surface of the plane of interest 46 that do not all lie on a straight line. The actual area of the surface of the plane of interest 46 equals the forward-projected area obtained in the first step divided by the cosine of the angle between the Z axis and the normal vector N determined by the three feature points b, c and d.
Fig. 22 illustrates measuring the volume of a flat object. Method and steps for measuring the volume of a flat plate of interest: First, according to the processes and methods described in [0059] and [0060] above, obtain the actual area of the surface 48 of the flat plate 47 of interest. Second, according to the process and method described in [0055] above, obtain the actual thickness at two feature points a and b spanning the thickness of the flat plate 47: the actual thickness equals the distance between the two feature points a and b multiplied by the cosine of the angle between the vector ab formed by the two feature points and the normal vector N of the surface of the flat plate 47. The actual volume of the flat plate 47 equals the actual area of its surface 48 multiplied by the actual thickness.
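The two cosine corrections of Figs. 21 and 22 (projected area to actual area, and point-pair distance to actual thickness) can be sketched as follows; function names and test values are illustrative only.

```python
import math

def actual_surface_area(projected_area, normal):
    """Fig. 21: divide the forward-projected area by the cosine of the angle
    between the surface normal N and the Z axis."""
    nx, ny, nz = normal
    cos_theta = abs(nz) / math.sqrt(nx ** 2 + ny ** 2 + nz ** 2)
    return projected_area / cos_theta

def plate_volume(projected_area, normal, a, b):
    """Fig. 22: actual thickness = |ab| * cos(angle between ab and N);
    volume = actual surface area * actual thickness."""
    area = actual_surface_area(projected_area, normal)
    ab = [b[i] - a[i] for i in range(3)]
    dot = sum(ab[i] * normal[i] for i in range(3))
    norm_ab = math.sqrt(sum(x * x for x in ab))
    norm_n = math.sqrt(sum(x * x for x in normal))
    thickness = norm_ab * abs(dot) / (norm_ab * norm_n)  # = |ab . N| / |N|
    return area * thickness

# Sanity check: a plate in the z = 0 plane, projected area 2.0, thickness 0.1.
print(plate_volume(2.0, (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.1)))  # 0.2
```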
FIG. 23-1 shows the capture of a cross section of a surface crack. In the figure, a crack 49 appears on the surface of an object of interest. Method and steps for measuring the shape and depth of the opening at the surface crack cross section 50: According to the process and method described in [0062] above, in a first step, adjust the stereo camera center line to coincide with the longitudinal direction of the crack 49 and to be parallel to the object surface. When a representative location of the crack cross section 50 is seen on the screen, capture left and right format image screenshots 39 and 40.

FIG. 23-2 shows the measurement of the surface crack cross section. In a second step, determine the distance V between the two intersection points a and b of the left and right edges of the crack 49 at the crack cross section 50 with the surface of the object of interest; V is the surface width of the crack 49 at the cross section 50. In a third step, use a stylus to mark the feature points XL1, XL2, XL3, … on the left edge and XR1, XR2, XR3, … on the right edge of the crack 49. The left and right edges of the crack 49 then consist of straight line segments connecting, in order from point a and point b respectively, the adjacent feature points XL# and XR#. The vertical heights yL# and yR# of each feature point XL# and XR# below points a and b represent the depths of the feature points beneath the surface of the object of interest.
FIG. 24-1 shows the capture of a cross section of a surface-damage depression. In the figure, a depression 51 appears on the surface of an object of interest. Method and steps for measuring the cross section 52 of the surface depression: According to the process and method described in [0063] above, in a first step, adjust the stereo camera center line to be parallel to the object surface, and when a representative location of the surface depression 51 is seen on the touch screen, capture left and right format image screenshots 39 and 40.

FIG. 24-2 illustrates the measurement of the surface-damage depression. In a second step, determine the distance U between the two intersection points a and b of the cross section 52 with the surface of the object. In a third step, select "damaged cross section" in the touch-screen menu and input the radius of curvature of the object surface at the damaged cross section, +R (convex surface) or -R (concave surface). A curve 53 through points a and b with radius of curvature R appears on the touch screen. In a fourth step, using a stylus, finger or mouse, draw a curve 54 between the two intersection points a and b along the edge of the depression in the screenshot. The closed-loop curve of the depression cross section 52 on the object surface consists of the curve 53 with radius of curvature R and the curve 54 along the image edge of the depression. In a fifth step, determine the location of the lowest point c of the cross section 52 in one screenshot, and then determine the depths ya and yb from points a and b down to point c and the area of the cross section 52 (shaded in the figure).

Claims (5)

1. Stereoscopic glasses, comprising: a left eyeglass and a right eyeglass, left and right eye lenses, left and right screen lenses, left and right screen modules, a core-shifting stereo camera, and an image processor; the left eyeglass consists of a left eye lens and a left screen lens, and the right eyeglass consists of a right eye lens and a right screen lens; in each eyeglass, the eye lens and the screen lens are arranged one above the other; the eye lens is a common lens or a vision correction lens, through which the user sees the real scene ahead; the screen lens is a common lens, and the left and right screen modules are fixed on the inner surfaces of the left and right screen lenses, respectively; two kinds of content are played on the screen of the screen module: the first is the image of the front real scene collected by the stereo camera on the stereo glasses, and the second is the image of the front real scene collected by the stereo camera on the stereo glasses superimposed with a prefabricated image; by rotating the eyeballs upward or downward, the user switches between the front real scene seen through the eye lens and the content played on the screen of the screen module behind the screen lens; the core-shifting stereo camera consists of two independent, identical lens groups with parallel center lines and one or two identical image sensors (CCD or CMOS); the two image sensors can each be translated in opposite directions by a distance h = T ÷ (2A) along a straight line that lies in the plane formed by the center lines of the two lens groups and is perpendicular to those center lines, the two lens groups remaining stationary while the image sensors are translated; for a core-shifting stereo camera provided with one image sensor, the two images collected by the two lens groups are imaged on the left and right halves of the imaging surface of the image sensor, respectively, and a core-shifted left-right format image is output; for a core-shifting stereo camera provided with two independent image sensors, the two images collected by the two lens groups are each imaged on the imaging surface of the image sensor in its own lens group, and two independent core-shifted images are output; in the above formula, T is the distance between the eyes of the person and A is the screen magnification; the image processor is a device provided with one or two image processing chips (ISP), a touch screen and a data memory, and further comprises a same-screen chip in which a plurality of instructions are integrated and stored, to be loaded and executed by the processor;
when the screen magnification A and the stereoscopic depth Z of an attention object in the real scene are changed according to the formula A = [T ÷ (F × t)] × Z, the convergence point of the stereoscopic image of the attention object in the real scene, collected by a stereo camera consisting of two independent, identical lens groups or cameras with parallel central lines, is always kept on the screen; wherein F is the focal length of the lens groups or camera lenses, and t is the distance between the central lines of the two lens groups or camera lenses;
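A short sketch of the relation A = [T ÷ (F × t)] × Z; the function name and the sample values below are illustrative assumptions, not taken from the patent:

```python
def screen_magnification(T: float, F: float, t: float, Z: float) -> float:
    """Screen magnification A = [T / (F * t)] * Z that keeps the
    convergence point of an attention object at stereoscopic depth Z
    on the screen (T: interocular distance, F: lens focal length,
    t: distance between the lens-group central lines)."""
    return (T / (F * t)) * Z

# Example: T = 65 mm, F = 4 mm, t = 65 mm, attention object at Z = 2000 mm
print(screen_magnification(65.0, 4.0, 65.0, 2000.0))  # 500.0
```

Note the formula is linear in Z: as the attention object moves away, A must grow proportionally for the convergence point to stay on the screen.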
a stereoscopic image measuring instruction establishes, according to the geometric relations and the equivalent convergence principle formed between the two independent, identical lens groups or cameras with parallel central lines and an attention object in the real scene, the relation between the parallax of the left and right images of a focus point on the attention object and the spatial coordinates of that focus point in the real scene; it also establishes the relation between the area of the image of the surface of the attention object and the actual area of that surface in the real scene;
for an image in the core-shifting left-right format, the spatial coordinates of a focus point are:
x=t×(XL+T/2)÷[T-(XR-XL)]-t/2
y=YL÷(m×A)=YR÷(m×A)
z=(A×F×t)÷[T-(XR-XL)]
for two independent left and right core-shifting images, the spatial coordinates of a focus point are:
x=t×(XL+T/2)÷[T-(XR-XL)]-t/2
y=YL÷(m×A)=YR÷(m×A)
z=(A×F×t)÷[T-(XR-XL)]
wherein XL and YL are the abscissa and ordinate of the left image of the focus point in the left image screenshot, XR and YR are the abscissa and ordinate of the right image of the focus point in the right image screenshot, and m is the magnification of the lens group or camera lens;
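The three coordinate formulas above can be collected into one helper; this is an illustrative transcription (function name and sample values assumed), and it serves both cases since the formulas for the left-right format and the two independent images coincide:

```python
def focus_point_coords(XL, YL, XR, YR, T, t, F, A, m):
    """Spatial coordinates (x, y, z) of a focus point from the
    coordinates (XL, YL) / (XR, YR) of its left / right images,
    per the claim formulas; d is T minus the parallax (XR - XL)."""
    d = T - (XR - XL)
    x = t * (XL + T / 2) / d - t / 2
    y = YL / (m * A)  # by construction this also equals YR / (m * A)
    z = (A * F * t) / d
    return x, y, z

# A zero-parallax point (XL == XR) lies at the convergence depth:
print(focus_point_coords(0.0, 0.0, 0.0, 0.0, 65.0, 65.0, 4.0, 500.0, 1.0))
# (0.0, 0.0, 2000.0)
```

With the sample values T = t = 65, F = 4, A = 500, the recovered depth z = 2000 agrees with the convergence relation A = [T ÷ (F × t)] × Z stated earlier in the claim.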
a stereoscopic image positioning and tracking instruction, based on the equivalent convergence principle, first locates the left image (or right image) of a focus point or focus straight line in the real scene, collected by the two independent, identical lens groups or cameras with parallel central lines, in the left (or right) image screenshot of a left-right-format image screenshot or of two independent left and right image screenshots; it then positions and tracks the corresponding right image (or left image) of that focus point or focus straight line in the right (or left) image screenshot of the same left-right-format image screenshot or of the same two independent image screenshots;
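Because the two lens groups have parallel central lines, the left and right images of a focus point share the same ordinate, so locating the second image reduces to a one-dimensional search along that row. A minimal sketch of that idea using plain sum-of-squared-differences matching (the function and variable names are assumptions, not the patent's instruction):

```python
import numpy as np

def locate_matching_image(left_patch: np.ndarray, right_row: np.ndarray) -> int:
    """Return the column in `right_row` where `left_patch` matches best,
    searching only along one row (valid because, with parallel central
    lines, corresponding images share the same ordinate)."""
    w = left_patch.size
    best_score, best_x = -np.inf, 0
    for x in range(right_row.size - w + 1):
        score = -np.sum((right_row[x:x + w] - left_patch) ** 2)  # negated SSD
        if score > best_score:
            best_score, best_x = score, x
    return best_x

left = np.array([1.0, 5.0, 2.0])                     # patch around the focus point
row = np.array([0.0, 0.0, 1.0, 5.0, 2.0, 0.0, 0.0])  # same row in the other image
print(locate_matching_image(left, row))  # 2
```

Once the matching column is found, the parallax XR − XL feeds directly into the coordinate formulas of the measuring instruction.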
an equivalent convergence point resetting instruction is that, during playback of a stereoscopic image, after an object in the screen is set as a new attention object by means of its stereoscopic image, the equivalent convergence point of the stereo camera is reset onto the new attention object through the stereoscopic image of the new attention object.
2. The stereoscopic eyewear of claim 1, wherein the screen module comprises a screen base, a screen, a housing, a lens group, and a modular corrective lens.
3. The stereoscopic eyewear of claim 2, wherein the screen is an OLED or Micro LED screen, which may be flexible or inflexible, and the screen surface is planar or curved.
4. The stereoscopic eyewear of claim 2, wherein the lens group consists of one or more spherical or aspherical lenses, which may include a Fresnel lens.
5. The stereoscopic eyewear of claim 2, wherein the modular corrective lens is a vision-correction lens affixed to the outer surface of the eyepiece of the lens group in the screen module; this vision-correction lens is not required for a user with normal vision.
CN201911086487.9A 2019-11-08 2019-11-08 Stereo glasses Active CN110780455B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911086487.9A CN110780455B (en) 2019-11-08 2019-11-08 Stereo glasses
PCT/CN2020/116604 WO2021088540A1 (en) 2019-11-08 2020-09-21 Stereoscopic spectacles
PCT/CN2020/116603 WO2021088539A1 (en) 2019-11-08 2020-09-21 Tilt-shift stereo camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911086487.9A CN110780455B (en) 2019-11-08 2019-11-08 Stereo glasses

Publications (2)

Publication Number Publication Date
CN110780455A CN110780455A (en) 2020-02-11
CN110780455B true CN110780455B (en) 2021-06-22

Family

ID=69389740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911086487.9A Active CN110780455B (en) 2019-11-08 2019-11-08 Stereo glasses

Country Status (1)

Country Link
CN (1) CN110780455B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088540A1 (en) * 2019-11-08 2021-05-14 彭波 Stereoscopic spectacles
CN111025580B (en) * 2019-12-27 2021-11-02 诚瑞光学(常州)股份有限公司 Image pickup optical lens
CN112969060A (en) * 2021-02-23 2021-06-15 毛新 Shaft-shifting stereo camera
CN112995640A (en) * 2021-02-23 2021-06-18 毛新 One-screen stereo camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004017120A1 (en) * 2002-08-12 2004-02-26 Scalar Corporation Image display device
CN201974160U (en) * 2011-01-20 2011-09-14 沈阳同联集团高新技术有限公司 Device for measuring three-dimensional shape of structured light
CN102256151A (en) * 2011-07-14 2011-11-23 深圳市掌网立体时代视讯技术有限公司 Double-optical path single-sensor synthesis module and three-dimensional imaging device
CN107290853A (en) * 2017-06-30 2017-10-24 福州贝园网络科技有限公司 Wearable display
CN107333036A (en) * 2017-06-28 2017-11-07 驭势科技(北京)有限公司 Binocular camera



Similar Documents

Publication Publication Date Title
CN110780455B (en) Stereo glasses
CN207516638U (en) The eyepiece system, optical amplification system, display system of the view of object are provided for eyes
CN110830784B (en) Shaft-shifting stereo camera
Hua et al. An ultra-light and compact design and implementation of head-mounted projective displays
KR101916079B1 (en) Head-mounted display apparatus employing one or more fresnel lenses
US10890695B2 (en) Compound lens and display device having the same
CN109259717B (en) Stereoscopic endoscope and endoscope measuring method
US10602033B2 (en) Display apparatus and method using image renderers and optical combiners
CN107462994A (en) Immersive VR head-wearing display device and immersive VR display methods
CN110447224A (en) The method of the virtual image is controlled in the display
CN205195880U (en) Watch equipment and watch system
Qian et al. AR-Loupe: Magnified augmented reality by combining an optical see-through head-mounted display and a loupe
US20220078392A1 (en) 2d digital image capture system, frame speed, and simulating 3d digital image sequence
US20240155096A1 (en) 2d image capture system & display of 3d digital image
CN114040185B (en) Self-focusing camera and stereo camera
CN112969060A (en) Shaft-shifting stereo camera
CN112995640A (en) One-screen stereo camera
CN113454989A (en) Head-mounted display device
WO2021088540A1 (en) Stereoscopic spectacles
CN118401875A (en) Goggles comprising a non-uniform push-pull lens assembly
US20210297647A1 (en) 2d image capture system, transmission & display of 3d digital image
CN109151273B (en) Fan stereo camera and stereo measurement method
Zhang Design of Head mounted displays
CN214335377U (en) Full-color high-definition (5 k-8 k) high-brightness double-vertical-screen stereo image viewing device
CN107121785A (en) 3-dimensional image transformational structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant