WO2020042796A1 - Stereo endoscope and endoscope measurement method - Google Patents

Stereo endoscope and endoscope measurement method

Info

Publication number
WO2020042796A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscope
stereo
interest
fan
Prior art date
Application number
PCT/CN2019/096292
Other languages
English (en)
French (fr)
Inventor
彭波
杨玉珍
Original Assignee
彭波
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 彭波
Publication of WO2020042796A1 publication Critical patent/WO2020042796A1/zh


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/05 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002 Operational features of endoscopes
    • A61B1/00043 Operational features of endoscopes provided with output arrangements
    • A61B1/00045 Display arrangement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/00234 Surgical instruments, devices or methods, e.g. tourniquets for minimally invasive surgery
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/30 Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G01B11/303 Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces using photoelectric detection means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8806 Specially adapted optical and illumination features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/954 Inspecting the inner surface of hollow bodies, e.g. bores
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/954 Inspecting the inner surface of hollow bodies, e.g. bores
    • G01N2021/9548 Scanning the interior of a cylinder

Definitions

  • the invention relates to a stereo endoscope, a light fan stereo camera, a left and right light fan format, a dual-instrument channel medical endoscope and an endoscope operation handle, an endoscope measurement method, an endoscope device and a system.
  • of the two mainstream two-lens, single-image-sensor stereo imaging technologies, the first is that two independent optical lens modules project two independently acquired images with different viewing angles, through two imaging circles, onto the left and right halves of one image sensor for imaging.
  • the second type is that the two independent images with different viewing angles collected by two independent optical lens modules are projected onto an image sensor through one imaging circle after secondary imaging by a lens group.
  • the two left and right images in a left and right format image obtained by the above two imaging-circle technologies suffer from a small horizontal viewing angle, low image efficiency, and a small image playback format.
  • the traditional tube medical endoscope technology and products have the following (but not limited to) major shortcomings.
  • the first is that there is no depth information in the acquired image.
  • the second is the inability to make high-precision real-time measurements of suspicious masses, mucous membranes and diseased tissues found during surgery.
  • the third is that the medical endoscope has only one endoscope instrument channel, and the endoscope doctor can only operate one instrument with one hand.
  • the fourth is that doctors still suffer from hand-eye coordination and hand-eye separation when operating endoscopes and endoscopes.
  • the fifth is the lack of stability of the endoscope lens and the instrument exit during surgery, especially when performing mucosal dissection.
  • the first is that there is no depth information in the acquired image.
  • the second is that the binocular measurement technology cannot measure the shape of the surface cracks and cross-sections of the object, the surface unevenness caused by impact or corrosion on the surface, and the shape of the cross-section.
  • the third is that some problems found in the field inspection cannot be processed in real time.
  • the present invention provides a stereo endoscope and an endoscope measurement method.
  • the purpose of the present invention is to propose a stereo endoscope and an endoscope measurement method to solve the following technical problems (but not limited to them): the first is that there is no depth information in the image collected by a tube endoscope; the second is that traditional medical endoscopes have only one endoscope instrument channel, so endoscope doctors rely on one-handed operation of the instruments; the third is that doctors still suffer from hand-eye coordination and hand-eye separation problems when performing endoscopic and minimally invasive surgery.
  • the fourth problem is that traditional medical endoscopes cannot perform high-precision real-time measurement of suspicious masses, mucous membranes and diseased tissues found during surgery, and binocular stereo industrial endoscopes cannot measure surface cracks, surface irregularities caused by impact or corrosion, and cross-sectional shapes; the fifth is the problem of the stability of endoscopic images and the endoscope instrument channel during surgery.
  • a stereo endoscope includes a light fan stereo camera, a dual instrument channel medical endoscope, a dual instrument channel medical endoscope operation handle, a medical endoscope stabilizer, a medical endoscope workbench, a stereo image processor, a stereo image translation method, an endoscope measurement method, and an operating system.
  • a stereo endoscope is a stereo endoscope using a light fan stereo camera as an endoscope camera.
  • An optical fan stereo camera includes two identical optical lens modules, an image sensor (CCD or CMOS), and a stereo image processor.
  • the center lines of two identical optical lens modules in the optical fan stereo camera are symmetrical to and parallel to the center line of the optical fan stereo camera.
  • a light fan is provided in each optical lens module. Along a straight line that lies in the plane where the center lines of the two optical lens modules are located and is perpendicular to those center lines, the light fan compresses the images collected by the optical lens group; along a straight line perpendicular to that plane, the images remain unchanged.
  • a conventional dual-lens module projects the images, each collected through a single imaging circle, onto the left and right halves of the imaging surface of the same image sensor, respectively.
  • in the light fan stereo camera, the two optical lens modules project the images, each collected through an imaging ellipse, onto the left half and the right half of the imaging surface of the same image sensor, respectively.
  • the image compression ratio of a light fan, along a straight line that lies in the plane where the center lines of the two optical lens modules are located and is perpendicular to those center lines, can be from zero percent (0%) to fifty percent (50%); the compression ratio of the image along a straight line perpendicular to that plane is equal to zero (0%).
  • the image compression ratio, in a given direction, is: [(image length before compression − image length after compression) ÷ (image length before compression)] × 100%.
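As a worked illustration of the compression-ratio definition above (the pixel dimensions are illustrative values, not taken from the patent):

```python
def compression_ratio(length_before: float, length_after: float) -> float:
    """[(image length before compression - image length after compression)
    / (image length before compression)] * 100%, in one direction."""
    return (length_before - length_after) / length_before * 100.0

# A light fan that squeezes a 1920-pixel-wide image to 960 pixels has a 50%
# horizontal compression ratio; an untouched 1080-pixel height gives 0%.
horizontal = compression_ratio(1920, 960)
vertical = compression_ratio(1080, 1080)
print(horizontal, vertical)  # 50.0 0.0
```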
  • light fans are also known as optical fans.
  • a light fan is composed of two cylindrical lenses whose axes are perpendicular to each other.
  • a cylindrical lens may be a positive (converging) cylindrical lens or a negative (diverging) cylindrical lens.
  • the curved surface of a cylindrical lens may be cylindrical or non-cylindrical (acylindrical).
  • the axis of one cylindrical lens in the light fan is located on the plane where the center lines of the two optical lens modules are located and is perpendicular to the center line of the two optical lens modules.
  • the axis of the other cylindrical lens is perpendicular to the plane where the center lines of the two optical lens modules are located.
  • the centers of the two lenticular lenses in the light fan are located on the center line of the optical lens module.
  • the compression ratio of the light fan in two different main meridional planes is different.
  • the format of the image output by the light fan stereo camera is a left and right light fan format.
  • the left and right images in a light fan left and right format image come from the two corresponding optical lens modules: the light fan in each module compresses its image by half along the straight line that lies in the plane where the center lines of the two optical lens modules are located and is perpendicular to those center lines, while the image remains unchanged along a straight line perpendicular to that plane.
  • the image compression ratio of the left and right images in a light fan left and right format image along the horizontal direction is fifty percent (50%).
  • traditional stereo image acquisition technology uses two independent cameras to shoot the object of interest synchronously; the two independent images with different perspectives are downsampled and stitched together in a left-right arrangement to become a traditional left and right format image. This traditional stereo image acquisition technology and method has become a standard for stereo image acquisition.
  • the traditional left and right format images meet the current major market, national and industry image transmission standards, stereo players and stereo video playback format standards.
  • Traditional left and right format images have high-quality stereo image effects, larger image viewing angles, higher image efficiency, and a half-pixel stereo playback format.
  • the left and right images in a light fan left and right format image not only have the same horizontal viewing angle, resolution, image efficiency, and half-pixel standard playback format, and are compatible with all stereo players and stereo video playback format standards, but also have the advantages of synchronous imaging, smaller delay, simpler structure, and lower cost.
  • An optical fan in each optical lens module of the optical fan stereo camera transforms a traditional imaging circle into an imaging ellipse.
  • the maximum inscribed rectangle of the imaging ellipse has a horizontal length of w/2 and a vertical height of v. Its area is (w × v)/2, which is equal to half the imaging surface area of an image sensor of width w and height v.
  • the present invention relates to three different left and right format image formats; the first is the light fan left and right format described in the above [0012].
  • the light fan left and right format is an image format output by a light fan stereo camera.
  • the second is the left and right format, an image format output by a stereo camera with dual lenses and a single image sensor.
  • the third is the traditional left and right format described above.
  • compared with the left and right format, the light fan left and right format has the following characteristics for two horizontally placed optical lens modules. First, when a light fan left and right format image is unrolled, its two images are each doubled along the horizontal direction and become two standard half-pixel left and right images; the two images of the left and right format are expanded by downsampling each image separately, and the two downsampled images have half pixels and a smaller, non-standard playback format in the horizontal direction. Second, the horizontal viewing angle, resolution, and image utilization of the two images in the light fan left and right format are larger than those of the left and right format. Third, the imaging method of the light fan left and right format is an optical imaging process, and no algorithm is required during expansion; the downsampling of the two left and right format images during expansion is an image-algorithm process.
  • compared with the traditional left and right format, the light fan left and right format has the following characteristics for two horizontally placed optical lens modules. First, the two images of the light fan left and right format and the two images of the traditional left and right format have the same resolution both before and after expansion. Second, after the two images of each format are expanded separately, the two independent images have the same horizontal viewing angle, resolution, image efficiency, and half-pixel standard playback format. Third, during shooting, the two images of the light fan left and right format are precisely synchronized, while the two images of the traditional left and right format need third-party synchronization technology and equipment, or post-synchronization after shooting is complete.
  • fourth, the two images of the light fan left and right format are imaged directly by optical means, while the two traditional left and right images need to be downsampled and stitched together, which is an image-algorithm process. Fifth, the light fan left and right format imaging technology requires one image sensor, while traditional left and right format imaging technology requires two image sensors. The above comparison shows that the light fan left and right format has the same horizontal viewing angle, resolution, image efficiency, and standard playback format as traditional left and right format imaging technology, but with the added advantages of synchronous imaging, smaller delay, simpler structure, and lower cost.
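The 2× horizontal expansion described in the comparison above can be sketched as follows. Nearest-neighbour pixel doubling and the toy frame values are assumptions for illustration; the patent does not prescribe an interpolation method:

```python
# Expanding a light fan left/right format frame back into two full-width views.
# Each half of the v x w frame is stretched 2x horizontally by pixel doubling.

def unsqueeze_fan_frame(frame):
    """frame: v rows x w columns. Returns (left, right), each v rows x w columns."""
    w = len(frame[0])
    half = w // 2
    left = [[row[x // 2] for x in range(w)] for row in frame]          # left half doubled
    right = [[row[half + x // 2] for x in range(w)] for row in frame]  # right half doubled
    return left, right

# Toy 2x4 frame: the left half holds values 1,2 per row, the right half 3,4.
frame = [[1, 2, 3, 4],
         [1, 2, 3, 4]]
left, right = unsqueeze_fan_frame(frame)
print(left)   # [[1, 1, 2, 2], [1, 1, 2, 2]]
print(right)  # [[3, 3, 4, 4], [3, 3, 4, 4]]
```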
  • the distance between the center lines of the two optical lens modules in the light fan stereo camera is t, the stereo base (apparent distance) of the light fan stereo camera. The stereo base t is between 3 mm and 200 mm.
  • the focal length, viewing angle, aperture, optical lenses, number of lenses, lens center positions, lens materials, surface coatings on each corresponding lens, optical design, structural design, and all other parameters of the two optical lens modules are exactly the same.
  • there are three different models of light fan stereo cameras.
  • the three different models of light fan stereo cameras use three optical lens modules with different optical designs and structural designs: the first, second, and third models use the first, second, and third optical lens module designs, respectively.
  • the design of the first optical lens module includes a lens group, a light fan, and a right-angle prism.
  • An oblique flat lens is provided in the lens group.
  • the plane where the center lines of the two optical lens modules are located is parallel to the imaging surface of an image sensor.
  • a right-angle prism on the rear surface of an optical lens module totally reflects the image from the front and bends it downward by 90 °, and then projects it onto the left or right half of an image sensor imaging surface for imaging.
  • the design of the second optical lens module includes two lens groups, two right-angle prisms or an oblique prism, a light fan, and a right-angle prism.
  • Two right-angle prisms or one bevel prism are located between the two lens groups.
  • the plane where the center lines of the two optical lens modules are located is parallel to the imaging surface of an image sensor.
  • a right-angle prism on the rear surface of an optical lens module totally reflects the image from the front and bends it downward by 90 °, and then projects it onto the left or right half of an image sensor imaging surface for imaging.
  • the third optical lens module design includes two lens groups, two right-angle prisms, and a light fan.
  • two right-angle prisms in one optical lens module are located between the two lens groups.
  • the position of one right-angle prism is fixed; the position of the other right-angle prism can be fixed, or it can be moved along a straight line that lies on a horizontal plane and is perpendicular to the center lines of the two optical lens modules.
  • the center line of the right-angle exit surface of the movable right-angle prism coincides with the center line of the right-angle incidence surface of the fixed right-angle prism.
  • the center line of the right-angle incidence surface of the movable right-angle prism coincides with the center line of a lens group arranged in front of that prism; the relative position between the two is unchanged, and they can move synchronously along a straight line on the same horizontal plane perpendicular to the center lines of the two optical lens modules.
  • the center lines of the left and right optical lens modules are perpendicular to the imaging surface of an image sensor and pass through the centers of the left half and the right half of the imaging surface of an image sensor, respectively.
  • the lens groups in the three different optical lens module designs described in [0021], [0022], and [0023] above are each composed of a group of lenses; the lenses may all be spherical lenses, all be aspherical lenses, or be a combination of spherical and aspherical lenses.
  • a right-angled triangular surface of the right-angle prism provided on the rear surface of the optical lens module is coated with a coating.
  • the coating is opaque, absorbs light projected onto the coating surface, and is non-reflective.
  • the two right-angle prisms disposed on the rearmost side are placed together or bonded together along the coated surface.
  • the light fan stereo camera of the three different models described in the above [0020] is provided with a light blocking plate.
  • the light barrier is a thin, flat, polygonal plate.
  • the surface of the light-shielding plate is coated or coated with a material. Both the coating and the material have the characteristics of absorbing light and non-reflective light projected on the surface of the coating or material.
  • the light barrier is disposed on the center line of the optical fan stereo camera and is perpendicular to the plane where the center lines of the two optical lens modules are located.
  • one straight edge of the light barrier coincides with the overlapping straight line formed by the two corresponding coated right-angle edges of the triangular surfaces of the two right-angle prisms at the back of the two optical lens modules after they are placed or glued together.
  • the light blocking plate and the imaging surface of the image sensor are perpendicular to each other, and a straight edge of the light blocking plate is parallel to the image sensor imaging surface and is very close to but not intersecting with the imaging surface.
  • the stereo image processor is a device integrating an image processing chip (ISP), a wireless communication module, a sensing module and a positioning module, a stereo image translation method, a stereo measurement method and an operating system.
  • An image processing chip corrects, processes, and optimizes the images in the left and right formats of a fan stereo camera, including (not limited to) white balance, color interpolation, saturation, brightness, sharpness, contrast, and other parameters.
  • the stereo image translation (panning) method is a method in which the left and right images in the light fan left and right format image output by the light fan stereo camera are each shifted toward the other, along a straight line that lies in the plane where the center lines of the two optical lens modules are located and is perpendicular to those center lines.
  • the translation amount is h′ = T ÷ (4A × e), in pixels, where:
  • T is the distance between a person's eyes,
  • A is the screen magnification, and
  • e is the horizontal distance between two adjacent pixels on the image sensor.
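Evaluated with illustrative numbers (the values of T, A and e below are assumptions for the sake of the example, not taken from the patent), the formula gives:

```python
# h' = T / (4 * A * e), the translation amount in pixels.
T = 65.0    # mm, assumed distance between a viewer's eyes
A = 20.0    # assumed screen magnification (dimensionless)
e = 0.002   # mm, assumed horizontal pixel pitch of the image sensor

h_prime = T / (4 * A * e)
print(h_prime)  # 406.25 pixels
```

Note the units cancel as expected: millimetres divided by (millimetres per pixel) yields a count of pixels.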
  • the first method uses the vertical center line of the light fan left and right format image as the reference. In the first step, the left image is cropped along a vertical line h pixels from its right edge, keeping the part of the image to the left of the line, and the right image is cropped along a vertical line h′ pixels from its left edge, keeping the part of the image to the right of the line.
  • the second step is to align the right edge of the retained left image with the left edge of the retained right image and stitch them together into a new light fan left and right format image. This method leaves two vertical blank image areas, with widths h and h′, at the left edge of the left image and the right edge of the right image of the new light fan left and right format image.
  • the second method first enlarges the left and right images of the light fan left and right format image by a factor of two in the horizontal direction, so that they become two independent left and right images.
  • in the second step, the left image is cropped along a vertical line h pixels from its right edge, keeping the part to the left of the line, and the right image is cropped along a vertical line h′ pixels from its left edge, keeping the part to the right of the line.
  • in the third step, the cropped left and right images are each compressed by half in the horizontal direction.
  • the fourth step is to align the right edge of the left image with the left edge of the right image in the horizontal direction and stitch them together into a new light fan left and right format image. This method leaves two vertical blank image areas, with widths h/2 and h′/2, at the left edge of the left image and the right edge of the right image of the new light fan left and right format image.
  • the third method is to use image post-production tools to post-process the light fan left and right format images to obtain the image translation result. This method cannot support live broadcasting of the collected stereo images.
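A minimal sketch of the first translation method on a toy one-row frame, assuming that the left image loses h columns at its right edge and the right image loses h′ columns at its left edge, with blank bands filling the outer edges (the frame values are illustrative, not from the patent):

```python
def translate_fan_frame(frame, h, h_prime, blank=0):
    """frame: v rows x w columns; the left and right images each occupy w//2 columns.
    Returns a new frame of the same size with both images shifted toward the center."""
    w = len(frame[0])
    half = w // 2
    out = []
    for row in frame:
        left_kept = row[:half - h]         # drop h columns at the left image's right edge
        right_kept = row[half + h_prime:]  # drop h' columns at the right image's left edge
        # Blank band of width h at the far left, h' at the far right.
        out.append([blank] * h + left_kept + right_kept + [blank] * h_prime)
    return out

frame = [[1, 2, 3, 4, 5, 6, 7, 8]]   # left image = 1..4, right image = 5..8
print(translate_fan_frame(frame, h=1, h_prime=2))
# [[0, 1, 2, 3, 7, 8, 0, 0]]
```

The output frame keeps its original width, with the two retained images stitched at the center line, matching the blank-band widths h and h′ stated in the first method.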
  • a wireless communication module outputs the images, pictures, voice, and text that have been corrected, processed, optimized, and translated by the stereo image processor in real time to the stereo player, stereo touch screen, remote control center, database, and other third parties, and can also conduct real-time multimedia interaction and communication with third parties.
  • a sensing module detects and perceives objects of interest in the left and right images of a light fan left and right format image, or in the left and right image screenshots, output by the light fan stereo camera. The algorithm attached to the module calculates the position of each detected object-of-interest image in the left and right images or image screenshots, and its distance to the center of the left and right images or image screenshots.
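The position-and-distance computation attributed to the sensing module can be sketched as below; the bounding-box coordinates and image dimensions are assumptions for illustration, since the detection algorithm itself is not specified in the text:

```python
import math

def object_offset(box, image_w, image_h):
    """box = (x0, y0, x1, y1) in pixels for one detected object of interest.
    Returns the object's centre and its offset from the image centre."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return (cx, cy), (cx - image_w / 2, cy - image_h / 2)

centre, offset = object_offset((100, 50, 140, 90), image_w=960, image_h=1080)
distance = math.hypot(*offset)  # Euclidean distance to the image centre
print(centre, offset)  # (120.0, 70.0) (-360.0, -470.0)
```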
  • a positioning module will locate the actual position of the image of one or more objects of interest determined by the sensing module.
  • the functions of a sensing module and a positioning module can be directly applied in a stereo measurement method.
  • the operating system provides a human-computer interaction interface, operation instructions, program control and management, page management, image management and storage, operating-system compatibility, and compatibility with third-party application software and apps, and outputs the corrected, processed, optimized, and translated light fan left and right format images through wired or wireless means to a stereo player, stereo touch screen, remote control center, and database. It can also conduct real-time multimedia interaction and communication with other third parties.
  • the input and operation methods supported by the operating system are stylus, finger, mouse, keyboard, and voice.
  • a stereo touch screen is the human-computer interaction, input and operation interface of the stereo image processor. Input and operation methods are stylus, finger, mouse, keyboard and voice.
  • the stereo touch screen can be integrated with the stereo image processor or two different devices separated from each other. If the stereo touch screen and the stereo image processor are two different devices, a separate wireless communication module may be provided in the stereo touch screen.
  • a stereo medical endoscope and system include a light fan stereo camera of the first model described in [0021] above, a dual instrument channel medical endoscope, a dual instrument channel medical endoscope operation handle, a stereo image processor, a stereo touch screen, a medical endoscope stabilizer, and a medical endoscope workbench. The image format output by a stereo medical endoscope is the light fan left and right format.
  • a dual instrument channel medical endoscope is a medical endoscope with two independent instrument channels.
  • a dual instrument channel medical endoscope operation handle is a medical endoscope operation handle having two independent instrument channels and two independent instrument channel access ports.
  • the two instrument channels in a dual-instrument channel medical endoscope are respectively connected with two corresponding instrument channels and instrument channel access ports on a dual-instrument channel medical endoscope operation handle.
  • the diameters of two instrument channels in a dual instrument channel medical endoscope may be the same or different.
  • the diameters of the two instrument channels and the instrument channel access port in the dual instrument channel medical endoscope operating handle are respectively equal to the diameters of the two instrument channels in the dual instrument channel medical endoscope connected to each other.
  • a dual instrument channel medical endoscope and its operation handle allow an endoscope doctor to operate two endoscopic instruments simultaneously with both hands for endoscopic examination, treatment, and surgery.
  • the technique and method of simultaneously operating two instruments with both hands not only make the operation of the endoscope doctor more coordinated, natural and humane, but also can make the operation of the endoscope doctor more accurate, stable, efficient and obtain better surgical results.
  • Dual-instrument channel endoscope technology and operation modes can also be used in other minimally invasive surgery.
  • the medical endoscope stabilizer is a device with two semi-circular retaining rings.
  • the two retaining rings clamp the part of the endoscope tube that remains outside the patient's body during work, so that the entire endoscope tube can no longer move forward and backward or rotate, thereby stabilizing the part of the endoscope already inside the patient's body.
  • the endoscope stabilizer not only frees the hand a doctor uses to hold the hose during tube endoscope operation so that it can operate instruments or perform other tasks, but also greatly improves the stability of the tube endoscope lens and endoscope instruments during surgery, reducing the doctor's eye fatigue, improving efficiency and accuracy, and producing better surgical results.
  • the medical endoscope workbench is a device that can fix together a three-dimensional touch screen, a medical endoscope operation handle, and a medical endoscope stabilizer.
  • the position and angle of the stereo touch screen, medical endoscope operating handle and medical endoscope stabilizer on the workbench can be adjusted at any time.
  • the distance between the doctor's eyes and the stereo touch screen fixed on the workbench is the stereo viewing distance Zs. What the doctor sees on the stereo touch screen is an undistorted stereo image magnified m × A times in the x and y directions and m² times in the z direction, where m is the lateral magnification of the optical lens module of the light fan stereo camera.
  • once the doctor adjusts the position and angle of the stereo touch screen on the workbench and the position and angle of the dual instrument channel medical endoscope operation handle to a customary and comfortable working position, endoscopic, laparoscopic, and minimally invasive examinations and operations become more accurate, stable, and efficient and obtain better results, greatly reducing the doctor's hand-eye coordination and hand-eye separation problems.
  • the medical endoscope workbench is connected to one or more foot pedal switches, and the doctor can control the devices fixed on the workbench through the switches on the pedals.
  • a stereo industrial endoscope includes an endoscope having a light fan stereo camera of the first or second model described in the above [0021] and [0022], a stereo touch screen, and a stereo image processing Device.
  • a gas-liquid channel and an instrument channel can be added to the stereoscopic industrial endoscope.
  • the stereoscopic industrial endoscope not only has the functions of stereoscopic imaging, inspection and measurement, but also can use instruments to enter equipment and systems through the instrument channel to directly determine, repair and solve problems.
  • An image format output by a stereoscopic industrial endoscope is a left and right light fan format.
  • a stereo medical endoscope and system are usually equipped with one or more stereo players and a stereo touch screen. Doctors perform endoscopic operations through stereo images played on stereo players or stereo touch screens.
  • Stereo touch screen is a man-machine interactive interface for system input and operation. Input and operation methods include touch screen pen, finger, mouse, keyboard, and voice. The operator can switch the content being played in the stereo player to the stereo touch screen at any time.
• an independent wireless communication module in the stereo touch screen enables a wireless connection between the stereo touch screen and the stereo image processor. Multimedia content including images, pictures, voice and text can be output by the stereo image processor at any time, wired or wirelessly, in real time to the stereo player, the stereo touch screen, a remote medical center, a control center or other third parties, with real-time multimedia interaction and communication with those third parties.
  • a portable stereo medical endoscope and a stereo industrial endoscope are equipped with a stereo touch screen.
• a stereo touch screen is integrated with a stereo image processor, so the stereo touch screen does not need to be provided with an independent wireless communication module.
  • Input and operation methods include touch screen pen, finger, mouse, keyboard, and voice.
• a stereo image processor can output multimedia content, including images, pictures, voice and text, to a stereo touch screen, a remote medical center, a control center or other third parties in real time, through wired or wireless connections, at any time, and can perform multimedia interaction and communication with third parties in real time.
  • the origin (0,0,0) of the spatial coordinate system (x, y, z) of the stereo image collection is located at the midpoint of the line connecting the center of the two camera lenses.
  • the left and right images of an object of interest are projected onto a flat screen at the same time.
  • the origin (0,0,0) of the stereo image playback space coordinate system (x, y, z) is located at the midpoint of the line connecting the eyes of a person.
• the result of the stereo image translation described in the above [0029] ensures that when the left and right images of an object of interest are projected on a flat screen, the parallax P of the two images on the screen corresponds to a unique stereo virtual image of the object of interest.
• the formula shows that the relationship between the two variables Zc (in the stereo image playback space) and Z (in the stereo image collection space) is linear.
  • Z D is the distance from the coordinate origin in the stereo image playback space to the flat screen
  • Z is the Z coordinate of an object of interest in the stereo image collection space.
• the parameter h or A in the stereo image translation formula described in the above [0029] can be used to determine the position of the equivalent convergence point M of a stereo camera. Because the two optical lens modules in the stereo camera are arranged in parallel, the equivalent convergence point M of a stereo camera is a virtual point.
  • the parallax on the screen of the left and right images of the object of interest is zero.
  • a stereo virtual image corresponding to the object of interest appears behind the screen in the human brain.
  • the parallax on the screen of the left and right images of the object of interest is positive.
  • an object of interest is located between the point of convergence M and the stereo camera, when the left and right images of the object of interest collected by the stereo camera are projected on the screen, a stereo virtual image corresponding to the object of interest appears on the screen in the human brain.
  • the parallax on the screen of the left and right images of the object of interest at this time is negative.
• L′ = F×(1−m) is the image distance
• L = F×(1/m−1) is the object distance, where F is the focal length and m the lateral magnification.
• the horizontal magnifications of an image of the object of interest on the screen in the x and y directions are m×A.
• m₁ and m₂ are the lateral magnifications of the lens pair for two different objects of interest in space, respectively.
• the longitudinal magnification is the ratio of the difference between the two image distances to the difference between the corresponding object distances on the object of interest, so the magnification is independent of the position of the object of interest.
• the above formula also shows that the longitudinal magnification of the camera lens has nothing to do with the screen magnification A (even when m×A is used instead of m in the formula).
• the stereo image is a stereo image that has been enlarged by m×A times (in the x and y directions) and m² times (in the z direction) without distortion.
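The longitudinal magnification m² quoted above can be checked numerically from the object/image distance relations L = F×(1/m−1) and L′ = F×(1−m) given earlier. This is a minimal verification sketch; the numeric values of F, m₁ and m₂ are arbitrary assumptions, not values from the patent:

```python
# Numerical check that the longitudinal (z) magnification equals m^2,
# using the object and image distance relations quoted above.

def object_distance(F, m):   # L  = F * (1/m - 1)
    return F * (1.0 / m - 1.0)

def image_distance(F, m):    # L' = F * (1 - m)
    return F * (1.0 - m)

F = 10.0                     # assumed focal length
m1, m2 = 0.500, 0.501        # two nearby lateral magnifications
dL = object_distance(F, m2) - object_distance(F, m1)
dLp = image_distance(F, m2) - image_distance(F, m1)
longitudinal = dLp / dL      # image-distance change over object-distance change
print(longitudinal)          # ≈ 0.2505 = m1 * m2, i.e. ≈ m^2
```

The ratio comes out to m₁·m₂, which tends to m² as the two object positions approach each other.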
  • This ideal viewing distance Zs is a stereo viewing distance Zs in a linear space. This result will bring practical meaning to many applications.
  • the endoscope measurement method is based on the geometric relationship and mathematical principles formed between two independent and parallel cameras and an object of interest.
  • the left and right images of a point of interest on an object of interest are created in a left and right format.
• endoscope measurement technology can be used to measure, but is not limited to: the distance from a point of interest to the endoscope, to another point of interest, to a straight line, or to a plane; the surface area of the object of interest; the volume of the object of interest; cracks on the surface of the object of interest and the cross-sectional shapes and characteristics of the crack openings; and the unevenness of the object surface after corrosion or impact and the shape and characteristics of its cross-section.
• An endoscope measurement method described in the above [0045] can be applied not only to a light fan stereo camera, but also to all other stereo cameras having two independent and parallel cameras. Similarly, the endoscope measurement method applies not only to images in the fan left-right format, but also to the mainstream image formats currently output by dual-lens stereo cameras, including the fan left-right format, the traditional left-right format, and two independent images.
  • the endoscope measurement method needs to meet the following three conditions at the same time: the first condition is that the two cameras are set independently and in parallel.
  • the second condition is that the stereo player and the stereo touch screen are a flat screen or a curved screen with a curvature radius that is much larger than the screen length.
  • the third condition is that there is a linear spatial relationship between the stereo image collection space and the stereo image playback space.
• an endoscopic measurement method can accurately determine the spatial coordinates (x, y, z) of a point of interest only if the horizontal positions X_L and X_R of the left and right images of that point can be accurately determined in a left-right format image screenshot.
• X_L and X_R are the horizontal distances from the vertical straight lines passing through the left and right images of the point of interest to the center points of the left and right image screenshots, respectively.
• X_L and X_R are defined as positive when the left or right image of the point of interest lies in the right half of its image screenshot, negative when it lies in the left half, and zero when it lies at the center of the left or right image screenshot. The left and right images of a point of interest always lie on the same horizontal line in the left and right image screenshots.
  • Y L and Y R are the vertical coordinates of the left and right images of a point of interest respectively in the left and right image screenshots.
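Under these sign conventions, the spatial coordinates of a point of interest follow from standard triangulation for two independent, parallel cameras. The sketch below illustrates that geometry; it is not the patent's exact formula, and the effective focal length f (expressed in the same units as X_L, X_R and Y_L) and all numeric values are assumptions:

```python
def triangulate(XL, XR, YL, t, f):
    """Recover (x, y, z) of a point of interest from the signed horizontal
    positions XL, XR of its left/right images and the vertical position YL,
    for two parallel cameras with baseline t and effective focal length f.
    Origin: midpoint of the line joining the two lens centres (see [0026])."""
    d = XL - XR                  # parallax; positive for points in front
    z = f * t / d                # depth from similar triangles
    x = t * (XL + XR) / (2 * d)  # lateral offset from the baseline midpoint
    y = t * YL / d               # YL == YR for row-aligned parallel cameras
    return (x, y, z)

# synthetic check: a point at (1, 2, 50) viewed with baseline t = 4, f = 100
print(triangulate(6.0, -2.0, 4.0, 4.0, 100.0))  # → (1.0, 2.0, 50.0)
```

The later distance, area and volume sketches assume the coordinates of each point have already been recovered this way.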
• An endoscopic measurement method provides three methods for accurately locating the horizontal positions X_L and X_R of the left and right images of a point of interest in a left-right format image screenshot.
• the first method applies when a point of interest is located on a reference object with geometric features, for example a non-horizontal straight line, a curve, or a geometric abrupt change on the surface of the object; once the left image of the point of interest is located, its right image can be found at the corresponding feature on the same horizontal line in the right image screenshot.
  • the second method is that the perception module and the attached algorithm in the stereo image processor will automatically detect and perceive one or more objects of interest in the left and right image screenshots simultaneously, and detect and perceive different The object of interest is surrounded by different "boxes" and displayed on the screen.
  • the perception module uses its own algorithm to calculate the position of each of the objects of interest surrounded by different "boxes" in the left and right image screenshots and the distance to the center of the two images or image screenshots.
  • the stereo measurement method will obtain the actual coordinates of each object of interest surrounded by a different "box” according to the relationship described in [0048] above.
  • the algorithm in the perception module detects, simulates, compares, corrects, identifies, and calculates the distance to the center of the left and right image screenshots from each pixel related to the object of interest.
  • the algorithm that comes with the perception module simulates the object of interest in pixels, compares and corrects the results, so the accuracy of the final result is high and satisfactory results can be automatically obtained.
• when multiple different objects of interest appear on the screen, the user only needs to click on one of the objects of interest surrounded by a "box", and the operating system will display on the screen only the information about the object ultimately selected by the user; all other "boxes" around objects of interest disappear from the screen.
• the perception module and its accompanying algorithms are beyond the scope of the present invention.
• the present invention uses this existing technology and method and applies it directly to the stereo measurement method.
  • the third method is progressive.
• the position X_L of the left image of the point of interest is first determined in the left image screenshot, and then, on the horizontal line passing through X_L in the right image screenshot, a position X_R of the right image of the point of interest is "reasonably" assumed.
  • X L and the hypothetical X R are used to obtain the actual spatial coordinates (x, y, z) of the point of interest, and a stereo virtual image of the point of interest is displayed on the stereo touch screen.
• if the stereoscopic virtual image of the point of interest does not coincide with the stereoscopic image in the background, the position X_R of the right image of the point of interest that was "reasonably" assumed in the right image screenshot is not accurate.
• in that case, assume a new position X_R of the right image of the point of interest in the right image screenshot, and repeat the above steps until the two stereo images completely coincide or a satisfactory result is obtained.
  • An endoscopic measurement method starts with the following two steps.
  • the first step is to obtain a left and right format image capture from the image that includes one or more points of interest on the surface of the object, the surface of interest, the volume of interest, the surface cracks or the bumps on the damaged surface.
• the second step is to select the measurement objective from the stereo touch screen menu, including but not limited to: point-endoscope, point-point, point-line, point-plane, surface area, volume, surface crack, surface crack area, surface crack cross-section, surface damage parameters, surface damage area, surface damage cross-section, and maximum depth.
  • the endoscope measurement method displays the calculation results directly on the stereoscopic touch screen.
  • the first step is to obtain a screenshot of the left-right format image from the image.
• the second step is to select "point-endoscope" in the stereo touch screen menu; the third step is to use a stylus, finger or mouse to determine the position X_La of the left image of the point of interest a in the left image screenshot.
  • a horizontal line appears on the stereoscopic touch screen that passes through the X La position and spans the left and right image screenshots.
  • the fourth step is to use a stylus, a finger or a mouse to determine the position X Ra of the right image of the point of interest a on the horizontal line of the right image screenshot.
  • the endoscope measurement method will calculate the distance from the point of interest a to the intersection between the centerline of the stereo camera in the endoscope and the outer surface of the front surface of the endoscope;
  • c is the distance from the center of the optical lens module to the outer surface of the front end surface of the endoscope.
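With the point's coordinates (x, y, z) recovered as in [0048], the "point-endoscope" result is the Euclidean distance to the point (0, 0, c) where the stereo-camera centerline meets the outer front end surface. A minimal sketch; the coordinate values are made-up test numbers:

```python
import math

def point_to_endoscope(p, c):
    # distance from point of interest p = (x, y, z) to (0, 0, c), the
    # intersection of the stereo-camera centreline with the outer surface
    # of the endoscope's front end
    x, y, z = p
    return math.sqrt(x * x + y * y + (z - c) ** 2)

print(point_to_endoscope((3.0, 4.0, 2.0), 2.0))  # → 5.0
```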
  • the first step is to obtain a screenshot of the image in left and right format from the image.
• the second step is to select "point-point" in the stereo touch screen menu.
  • the third step is to determine the positions X La , X Ra , X Lb and X Rb of the two left and right images of the two points of interest a and b on the surface of the object in the left and right image screenshots.
  • the endoscope measurement method will calculate the distance between two points of interest a and b on the surface of the object of interest as:
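The "point-point" measurement reduces to the Euclidean distance between the two recovered coordinate triples. A sketch with arbitrary example coordinates:

```python
import math

a = (1.0, 2.0, 3.0)  # coordinates of point a recovered from X_La, X_Ra
b = (4.0, 6.0, 3.0)  # coordinates of point b recovered from X_Lb, X_Rb
print(math.dist(a, b))  # → 5.0
```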
  • the first step is to obtain a screenshot of the image in left and right format from the image.
• the second step is to select "point-line" in the stereo touch screen menu.
  • the positions X La and X Ra of the left and right images of the point of interest a in the left and right image screenshots are determined respectively.
  • the fourth step is to determine the positions X Lb , X Rb , X Lc and X Rc of the left and right images of the two feature points b and c on a straight line in the space respectively.
  • the endoscope measurement method will calculate the distance from a point of interest a on the surface of the object of interest to a straight line passing through two characteristic points b and c.
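The "point-line" distance can be computed with the standard cross-product formula for the distance from a point to the line through the two feature points b and c. A sketch with assumed coordinates, not the patent's notation:

```python
def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return (u[0]**2 + u[1]**2 + u[2]**2) ** 0.5

def point_to_line(a, b, c):
    # distance from point a to the straight line through feature points b, c
    return norm(cross(sub(a, b), sub(c, b))) / norm(sub(c, b))

print(point_to_line((0, 0, 1), (0, 0, 0), (1, 0, 0)))  # → 1.0
```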
  • the first step is to obtain a screenshot of the left-right format image from the image.
• the second step is to select "point-plane" in the stereo touch screen menu.
  • the positions X La and X Ra of the left and right images of the point of interest a in the left and right image screenshots are determined respectively.
• the fourth step is to determine the positions X_Lb, X_Rb, X_Lc, X_Rc, X_Ld and X_Rd, in the left and right image screenshots, of the left and right images of three feature points b, c and d that are not on a straight line in a spatial plane.
• the endoscope measurement method will then calculate the distance from a point of interest a on the object of interest to the plane containing the three feature points b, c and d;
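The "point-plane" distance follows from the plane's normal vector N = (c−b)×(d−b) built from the three non-collinear feature points. A sketch with assumed coordinates:

```python
def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def point_to_plane(a, b, c, d):
    # plane through feature points b, c, d; normal N = (c-b) x (d-b)
    n = cross(sub(c, b), sub(d, b))
    return abs(dot(sub(a, b), n)) / dot(n, n) ** 0.5

print(point_to_plane((5, 5, 3), (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # → 3.0
```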
• a curve on a stereo touch screen can be approximated by horizontal straight segments between adjacent pixels, vertical straight segments between adjacent pixels, and right angles where those horizontal and vertical segments meet, forming a closed-loop stitching curve; the finer the pixel grid, the closer the area enclosed by the original closed-loop curve is to the sum of the areas of all pixel units enclosed by the closed-loop stitching curve.
• if the horizontal distance between two adjacent pixels is a, and the vertical distance is b, then the area of one pixel unit is a×b, and the sum of the areas of all pixel units enclosed by a closed-loop stitching curve on the stereo touch screen is the number of enclosed pixel units multiplied by a×b.
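The pixel-unit summation above can be sketched as follows: every pixel cell whose center falls inside the closed-loop stitching curve contributes a×b to the total. The even-odd ray-casting test and the square test curve below are illustrative assumptions, not the patent's algorithm:

```python
def point_in_polygon(px, py, poly):
    # even-odd ray-casting test: is (px, py) inside the closed curve?
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            xint = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < xint:
                inside = not inside
    return inside

def enclosed_area(poly, a, b, width, height):
    # sum the area a*b of every pixel cell whose centre lies inside the curve
    count = 0
    for j in range(height):
        for i in range(width):
            if point_in_polygon((i + 0.5) * a, (j + 0.5) * b, poly):
                count += 1
    return count * a * b

# hypothetical 10x10 square stitching curve on a grid with a = b = 1
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(enclosed_area(square, 1.0, 1.0, 20, 20))  # → 100.0
```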
  • the first step is to obtain a screenshot of the image in left and right format from the image.
  • the system will automatically retain one of the image screenshots and enlarge the retained one image screenshot to the full screen.
• the third step is to use a stylus, finger or mouse to draw on the screen, along the edge of the image of the surface of interest, a closed-loop stitching curve that encloses the entire image of the surface of interest.
  • the endoscopic measurement method will calculate the area enclosed in the closed-loop stitching curve.
• the area enclosed by the closed-loop stitching curve obtained in the above [0056] is the projection of the actual area of the surface of interest onto a plane perpendicular to the centerline (Z axis) of the stereo camera.
• the fourth step: when the surface of the object of interest is flat, or its curvature is small enough that it can be regarded as flat, determine, according to the method described in [0054] above, the positions X_Lb, X_Rb, X_Lc, X_Rc, X_Ld and X_Rd, in the left and right image screenshots, of the left and right images of three feature points b, c and d on the surface that are not on the same straight line.
• the endoscope measurement method calculates the normal vector N of the surface of the object of interest; the actual area of the surface equals the area obtained by the method described in [0056] above divided by the cosine of the angle between the normal vector N and the centerline (Z axis) of the stereo camera.
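A sketch of this tilt correction: the normal vector N is built from the three feature points, and the projected area from [0056] is divided by the cosine of the angle between N and the Z axis. The coordinates below are assumed test values:

```python
import math

def actual_area(projected_area, b, c, d):
    # correct the projected area: divide by cos(angle between the surface
    # normal N = (c-b) x (d-b) and the stereo-camera centreline (Z axis))
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    u, w = sub(c, b), sub(d, b)
    n = (u[1]*w[2] - u[2]*w[1],
         u[2]*w[0] - u[0]*w[2],
         u[0]*w[1] - u[1]*w[0])
    cos_theta = abs(n[2]) / math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    return projected_area / cos_theta

# plane tilted 45 degrees about the x axis: actual area = projected * sqrt(2)
print(actual_area(1.0, (0, 0, 0), (1, 0, 0), (0, 1, 1)))  # ≈ 1.4142
```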
• the epidermis or mucosa of tissues of interest often examined in medical endoscopy, laparoscopy and minimally invasive surgery includes, but is not limited to, lesions of the gastric mucosa and of the organ epidermis. If the approximate area of such lesions can be obtained quickly, it helps doctors make a diagnosis and design surgical and operation plans quickly. Adjust the direction of the centerline of the endoscope terminal; when the centerline is perpendicular to the epidermis or mucosal surface to be measured, take a screenshot of the left-right format image, keep one of the image screenshots and zoom it to full screen.
  • the first step is to obtain a left and right format image screenshot from the image.
• the second step is to select "volume" in the stereo touch screen menu; the system will automatically retain one of the image screenshots and enlarge the retained image screenshot to full screen.
  • the third step the actual area of the surface of the object of interest is obtained according to the methods described in [0057] and [0058] above.
  • the fourth step is to return to the left and right image screenshots.
• when the object of interest is a flat plate, or its curvature is small enough that it can be regarded as a flat plate,
• the endoscope measurement method calculates the thickness of the plate of interest as the distance between the two feature points a and b obtained by the calculation multiplied by the cosine of the angle between the vector ab and the normal vector N of the plate surface.
  • the actual volume of the flat plate of interest is equal to the actual area of the flat plate obtained in the third step above multiplied by the thickness of the flat plate obtained in the fourth step above.
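The plate-volume rule above (area × thickness, with the thickness taken along the surface normal) can be sketched as follows; the numeric values are assumptions, and in practice N would come from the three feature points of [0057]:

```python
def plate_volume(area, a, b, n):
    """Volume of a flat plate of interest: actual surface area times plate
    thickness, where the thickness is |ab| multiplied by the cosine of the
    angle between the vector ab and the plate's surface normal N."""
    ab = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    dot_abn = ab[0]*n[0] + ab[1]*n[1] + ab[2]*n[2]
    n_len = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    thickness = abs(dot_abn) / n_len  # = |ab| * cos(angle between ab and N)
    return area * thickness

# a on the top face, b on the bottom face, surface normal along z
print(plate_volume(5.0, (0, 0, 0), (1, 0, 2), (0, 0, 1)))  # → 10.0
```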
  • Tissues of interest often detected in medical endoscopes, laparoscopy, and minimally invasive surgery include, but are not limited to, polyps, tumors, organs, and masses attached to the surface of organs. If the shape of these polyps, tumors, organs, and lumps can be approximated as a sphere or ellipsoid and the approximate value of the volume of the tissue of interest can be quickly obtained, it can help doctors quickly make a diagnosis, design surgery and operation plans. For a sphere of interest, adjust the direction of the centerline of the endoscope terminal.
• use a stylus, finger or mouse to draw a circular or oval closed-loop stitching curve along the edge of the image of interest on the stereo touch screen.
• use a stylus to draw a straight line across the circular closed-loop stitching curve on the screen and determine the left and right images of the two points a and b where the line intersects the circular closed-loop curve.
  • the endoscopic measurement method will calculate the diameter D and volume of the sphere-shaped tissue of interest, and the major and minor axes of the ellipsoid-shaped tissue of interest B, C, and the volume of the ellipsoid-shaped tissue of interest, respectively;
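The sphere and ellipsoid volumes follow from the measured diameter D, or axes B and C. The ellipsoid line below assumes a spheroid (rotationally symmetric about the major axis), which is an interpretation for illustration rather than a formula stated in the patent:

```python
import math

def sphere_volume(D):
    # volume of a sphere of diameter D
    return math.pi * D ** 3 / 6.0

def spheroid_volume(B, C):
    # assumed spheroid: major axis B, minor axis C -> V = (pi/6) * B * C^2
    return math.pi * B * C * C / 6.0

print(sphere_volume(2.0))         # ≈ 4.18879 (= 4*pi/3)
print(spheroid_volume(3.0, 2.0))  # ≈ 6.28319 (= 2*pi)
```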
  • the first step is to adjust the position and direction of the centerline of the endoscope end, so that the centerline is consistent with the longitudinal direction of the crack and parallel to the surface of the object.
  • a screenshot of the left-right format image is taken.
• the second step is to use a stylus, finger or mouse to determine the positions X_La, X_Ra, X_Lb and X_Rb, in the left and right image screenshots, of the two intersection points a and b where the surface of the object of interest meets the left and right edges of the crack cross-section opening.
  • the system will automatically retain one of the image screenshots and enlarge the retained image screenshot to the full screen.
• use a stylus, finger or mouse to determine the positions of multiple feature points X_L1, X_L2, X_L3, … on the left edge and X_R1, … on the right edge of the crack cross-section opening, respectively.
• since each feature point X_L# and X_R# lies on the same crack cross-section as the two intersection points a and b described above, the feature points on the left and right opening edges of the crack cross-section have the same parallax as points a and b; equivalently, the convergence depth coordinate Zc of points a and b is equal to the Zc of all feature points on the left and right opening edges of the crack cross-section.
• the endoscopic measurement technology will calculate the vertical distance Y_L# between point a and each feature point X_L# on the left edge of the crack cross-section opening, and the vertical distance Y_R# between point b and each feature point X_R# on the right edge of the crack cross-section opening.
  • the left edge of the opening of the crack cross section is composed of a straight line that successively connects adjacent feature points X L # on the left edge of the crack cross section opening.
  • the right edge of the crack cross-section opening is composed of a straight line that successively connects adjacent feature points X R # on the right edge of the crack cross-section opening.
  • the left and right edges of the crack cross section formed by a plurality of straight lines form a V-shaped cross-sectional opening. The more feature points are selected, the closer the edge of the crack cross section is to the edge of the actual crack cross section.
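The V-shaped opening profile built from the feature points can be represented as a polyline from a through the left-edge points, the bottom, and the right-edge points to b; its maximum depth and enclosed opening area then follow directly. The feature-point values below are made up, and the shoelace formula is a standard polygon-area method used here for illustration, not taken from the patent:

```python
def shoelace_area(pts):
    # standard shoelace formula for the area of a closed polygon
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# hypothetical V-shaped profile: intersection points a and b on the surface
# (depth 0) plus feature points on the left and right opening edges below
profile = [(0.0, 0.0), (0.4, -1.0), (0.5, -1.5), (0.6, -1.0), (1.0, 0.0)]
max_depth = -min(y for _, y in profile)
opening_area = shoelace_area(profile)
print(max_depth, opening_area)  # max depth 1.5, opening area ≈ 0.65
```

Adding more feature points refines the polyline toward the actual crack edge, as the text above notes.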
  • the measuring method of the cross section and the maximum depth of the concave-convex part of the surface of the object is to adjust the position and direction of the centerline of the endoscope terminal, and make the centerline parallel to the surface of the object.
• when the most representative part of the depression on the surface is visible on the stereo touch screen, take a screenshot of the left and right format images.
  • the second step determine the positions X La , X Ra , X Lb and X Rb of the left and right images of the two intersection points a and b where the surface of the object intersects the edge of the damaged cross section in the left and right image screenshots.
• the third step is to select "damaged cross-section" in the stereo touch screen menu, retain one of the image screenshots, and enlarge the retained image screenshot to full screen. Enter the radius of curvature of the damaged surface, +R (convex) or −R (concave), in the next menu level. A curve with radius of curvature R passing through points a and b then appears on the stereo touch screen.
  • the fourth step is to return to the left and right image screenshots to determine the positions X Lc and X Rc of the lowest point c of the damaged section on the stitching curve.
  • the endoscope measurement method will calculate the area of the damaged cross-section on the surface of the object.
• the endoscope measurement method also calculates the vertical distance Yc from the chord connecting points a and b to the lowest point c of the cross-section.
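One plausible way to combine the measured quantities (the chord ab between intersection points a and b, the maximum depth Yc of the lowest point c, and the radius of curvature R of the undamaged surface) is to treat the damaged cross-section as a triangle below the chord plus the circular segment between the chord and the reference arc. This decomposition and the sign convention are assumptions for illustration, not the patent's formula:

```python
import math

def segment_area(chord, R):
    # area between a chord of the given length and a circular arc of radius R
    half_angle = math.asin(min(1.0, chord / (2.0 * R)))
    return R * R * (half_angle - math.sin(half_angle) * math.cos(half_angle))

chord_ab = 8.0  # distance between intersection points a and b
Yc = 2.0        # vertical distance from chord ab to the lowest point c
R = 10.0        # radius of curvature of the undamaged reference surface
triangle = 0.5 * chord_ab * Yc  # V-shaped region below the chord
# for a convex (+R) reference surface the segment is added; for a concave
# (-R) surface it would be subtracted -- this sign convention is an assumption
cross_section = triangle + segment_area(chord_ab, R)
print(cross_section)  # ≈ 12.491
```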
  • New solutions and measurement methods can be a combination of the basic measurement methods described above or other new methods.
• the advantages of the present invention include, but are not limited to: the depth-bearing stereo image provided by the stereo endoscope, combined with the dual-instrument-channel endoscopic operation technology, the endoscope stabilizer and the workbench device, greatly improves the accuracy, stability, quality and efficiency of the doctor's operation and solves the problem of hand-eye separation; the endoscope measurement method enables doctors to measure, in real time, masses, mucous membranes and diseased tissues found during endoscopic and minimally invasive surgery; the fan left-right format images output by the fan stereo camera have the same horizontal angle of view, resolution, image efficiency, standard playback format and high-quality image effect as traditional left-right format images.
  • the invention has a highly integrated structural design and an intelligent and user-friendly operation method, and has the characteristics of simple operation, high efficiency, small image delay, low cost, and easy promotion.
  • 1-1 is a schematic diagram of the imaging principle of the first optical fan stereo camera of the present invention.
• Figure 1-2 is a view of Figure 1-1 from direction A;
  • 2-1 is a schematic diagram of the imaging principle of the second optical fan stereo camera of the present invention.
• Figure 2-2 is a view of Figure 2-1 from direction A;
  • Figure 3-1 is a schematic diagram of the imaging principle of a third type of optical fan stereo camera according to the present invention.
• FIG. 3-2 is a view of FIG. 3-1 from direction A;
  • Figure 5-1 is a schematic diagram of a conventional imaging circle imaging principle
  • 5-2 is a schematic diagram of the elliptical imaging principle of the optical fan compression imaging of the present invention.
  • FIG. 6 is a schematic diagram of an image of a left-right format of a light fan according to the present invention.
  • FIG. 7 is a schematic diagram of an image in left and right formats
  • FIG. 8 is a schematic diagram of a conventional left and right format image
  • FIG. 9 is a schematic diagram of a comparison between the left and right formats of the fan of the present invention and the traditional left and right formats;
  • FIG. 10 is a schematic view of a single-instrument stereoscopic medical endoscope according to the present invention.
  • FIG. 11 is a schematic diagram of a dual-instrument stereoscopic medical endoscope according to the present invention.
  • FIG. 12 is a schematic view of a dual-instrument channel medical endoscope operating handle of the present invention.
  • FIG. 13 is a schematic diagram of a medical endoscope workbench of the present invention.
  • FIG. 14 is a schematic diagram of a medical endoscope stabilizer of the present invention.
  • FIG. 15 is a schematic diagram of a three-dimensional image acquisition space
  • 16 is a schematic diagram of a three-dimensional video playback space
  • 17 is a schematic diagram of the equivalent principle of the convergence method
  • FIG. 18 is a schematic diagram of positions of two left and right images of one focus point in a left and right format image screenshot
  • FIG. 19 is a schematic diagram of the corresponding principle of a pair of left and right images and a set of spatial coordinates of a focus point of the present invention.
  • 20 is a schematic diagram of measuring a distance from a point of interest to an endoscope according to the present invention.
  • 21 is a schematic diagram of measuring a distance between two points of interest according to the present invention.
  • 22 is a schematic diagram of measuring a distance from a point of interest to a straight line according to the present invention.
  • FIG. 23 is a schematic diagram of measuring a distance from a point of interest to a plane according to the present invention.
  • FIG. 24 is a schematic diagram of measuring a surface area of a planar object according to the present invention.
  • 25 is a schematic diagram of measuring the volume of a flat object according to the present invention.
  • Figure 26-1 is a schematic diagram of a cross-section of a surface crack measured by the present invention
  • Figure 26-2 is a schematic diagram of the shape and depth of an open portion at a cross-section of a surface crack measured by the present invention
  • FIG. 27-1 is a schematic diagram of the cross section of a surface damaged depression measured by the present invention
  • FIG. 27-2 is a schematic diagram of the shape of the cross section of the depression measured by a surface damaged depression of the present invention.
  • Figure 1 shows the imaging principle of the first optical fan stereo camera.
  • the distance between the center lines of the two optical lens modules is t.
  • An oblique flat lens 2 is provided in the lens group 1.
  • the oblique flat lens 2 generates a translation of the image from the front lens in the lens group 1 in the horizontal direction toward the center line of the optical fan stereo camera, and enters the light fan after being corrected by the rear lens in the lens group 1.
  • the lenticular lens 3 and the lenticular lens 4 in the light fan compress the image in the horizontal direction by half and enter a right-angle reflecting prism 6 at the back.
• as shown in Figure 1-2, the oblique inner surface of the right-angle prism 6 totally reflects the image from the front, bends it downward by 90°, and projects it onto the left or right half of the imaging surface 8 of an image sensor 9 for imaging.
  • the images collected by the left and right optical lens modules placed horizontally are imaged on the left half and the right half of the imaging surface 8, respectively.
  • One right-angled triangle-shaped surface 7 of the right and left right-angle reflective prisms 6 is respectively plated with a coating and is placed or bonded together along the coated triangular surface 7.
  • a light-shielding plate 5 disposed vertically is located on the center line of the light fan stereo camera.
  • Figure 2 shows a schematic diagram of the imaging principle of the second optical fan stereo camera.
  • the distance between the center lines of the two optical lens modules is t.
  • the two right-angle prisms 11 and 12 provided behind the lens group 10 translate the image from the lens group 10 horizontally toward the center line of the optical fan stereo camera; after correction by the lens group 13, the image enters the optical fan.
  • the lenticular lens 3 and the lenticular lens 4 in a light fan compress the image in the horizontal direction by half and enter a right-angle reflecting prism 6 at the back.
  • Figure 2-2A shows how the oblique inner surface of a right-angle prism 6 totally reflects the image from the front and bends it downward by 90°, projecting it onto the left or right half of the imaging surface 8 of an image sensor 9 for imaging.
  • the images collected by the left and right optical lens modules placed horizontally are formed on the left half and the right half of the imaging surface 8, respectively.
  • One right-angled triangle-shaped surface 7 of the right and left right-angle reflective prisms 6 is respectively plated with a coating and is placed or bonded together along the coated triangular surface 7.
  • a light-shielding plate 5 disposed vertically is located on the center line of the light fan stereo camera.
  • Figure 3 is a schematic diagram of the imaging principle of a third type of optical fan stereo camera.
  • the distance between the center lines of the two optical lens modules is t.
  • the two right-angle prisms 11 and 12 provided behind the lens group 10 translate the image from the lens group 10 horizontally toward the center line of the optical fan stereo camera; after correction by the lens group 13, the image enters the optical fan.
  • the lenticular lens 3 and the lenticular lens 4 of the light fan compress the image by half in the horizontal direction. The compressed image is projected onto the left half or the right half of the imaging surface 8 of an image sensor 9.
  • the position of the right-angle prism 12 is fixed.
  • the lens group 10 and the right-angle prism 11 can move synchronously along a horizontal straight line perpendicular to the center line of the optical lens module to change the stereo base (interaxial distance) t of the optical fan stereo camera.
  • the images collected by the left and right optical lens modules placed horizontally are formed on the left half and the right half of the imaging surface 8 respectively.
  • a light-shielding plate 5 disposed vertically is located on the center line of the light fan stereo camera. In the view in the direction of FIG. 3-2A, one vertical straight edge of the light blocking plate 5 is parallel to the imaging surface 8 of the image sensor 9, and is very close to but not intersecting with the imaging surface.
  • Figure 4 shows the principle of a light fan deformation system.
  • a light fan is composed of two cylindrical lenses 3 and 4.
  • the axes of the lenticular lens 3 and the lenticular lens 4 are perpendicular to each other.
  • a beam of light A (shaded part in the figure) passing through the main meridional plane of the cylindrical lens 3 in FIG. 4 enters the light fan.
  • the cylindrical lens 4 refracts the beam of light A like a spherical lens.
  • a beam of light B is refracted when passing through the other principal meridian plane of the cylindrical lens 3, for which the cylindrical lens 4 is equivalent to a parallel flat plate.
  • G 0 is the optical power in the principal meridian plane of a cylindrical lens. When a meridian plane of a cylindrical lens makes an angle η with its principal meridian plane, the optical power in that meridian is G η = G 0 × cos²η. The cylindrical lens 3 and the cylindrical lens 4 in the optical fan are at 90° to each other, so a meridian at angle η to the lens 3 is at (90° - η) to the lens 4, and sin²η + cos²η = 1.
  • Figure 5 is a schematic diagram of the principle of imaging-circle imaging and optical-fan imaging-ellipse imaging; the equation of the outer edge of the imaging circle 14 in FIG. 5-1 follows;
  • An imaging surface 8 of an image sensor having a length w and a width v is inscribed in the outer edge of the imaging circle 14.
  • the diameter of the smallest circumscribed imaging circle 14 is D = (w² + v²)^1/2;
  • the maximum rectangle inscribed in the outer edge of the ellipse 15 has horizontal length w/2 and vertical height v, so its area is (w × v)/2, half the area of the image sensor imaging surface;
  • the magnifications of the two principal meridians of the optical fan relative to the imaging circle are 1/2 along the compression direction and 1 along the perpendicular direction;
  • a camera projects an acquired image on an imaging surface 8 of an image sensor 16 through an imaging circle 14.
  • an imaging circle 14 and an image 16 are compressed by half along a horizontal straight line parallel to the imaging surface of the image sensor. After compression, the imaging circle 14 is deformed into the elliptical imaging circle 15, and the image 16 becomes the image 17.
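The imaging-circle and imaging-ellipse geometry above can be checked numerically. This is a minimal sketch based on the formulas given in the Description (D = (w² + v²)^1/2 for the smallest circumscribed imaging circle; semi-axes a = v/√2 and b = w/(2√2) for the imaging ellipse whose largest inscribed rectangle is w/2 × v); the sensor dimensions below are illustrative, not from the patent.

```python
import math

def imaging_circle_diameter(w, v):
    """Diameter of the smallest imaging circle circumscribing a w x v sensor."""
    return math.sqrt(w * w + v * v)

def imaging_ellipse_axes(w, v):
    """Semi-axes (a vertical, b horizontal) of the imaging ellipse whose
    largest inscribed rectangle is (w/2) x v, per the Description."""
    a = v / math.sqrt(2)          # vertical semi-major axis
    b = w / (2 * math.sqrt(2))    # horizontal semi-minor axis
    return a, b

def max_inscribed_rectangle_area(a, b):
    """The largest rectangle inscribed in an ellipse has corners at
    (b/sqrt(2), a/sqrt(2)), so its area is 2*a*b."""
    return 2 * a * b

w, v = 6.4, 4.8   # illustrative sensor size in mm
a, b = imaging_ellipse_axes(w, v)
print(imaging_circle_diameter(w, v))        # 8.0
print(max_inscribed_rectangle_area(a, b))   # 15.36 == (w*v)/2
```

The last line confirms the claim that the ellipse's largest inscribed rectangle covers exactly half of the sensor's w × v imaging surface.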
  • Figure 6 shows a schematic image of the left and right format of a fan.
  • the left and right images collected by the two independent left and right optical lens modules in an optical fan stereo camera are projected through the left and right imaging ellipses 15L and 15R onto the left and right halves of the imaging surface 8 of the same image sensor, forming images 17L and 17R respectively.
  • the stereo image processor corrects, processes, optimizes, and pans an image composed of the images 17L and 17R, and outputs an image in the left-right format composed of two images 18L and 18R.
  • the two images 18L and 18R in the fan left-right format are each doubled in the horizontal direction and become two independent standard-playback-format images 19L and 19R, each with half the pixels.
  • Figure 7 shows a schematic image of left and right formats.
  • the left and right images captured by the two independent left and right lenses in a dual-lens, single-image-sensor stereo camera are projected through the left and right imaging circles 20L and 20R onto the left and right halves of the imaging surface 8 of the same image sensor, forming images 21L and 21R respectively.
  • the stereo image processor corrects, processes, and optimizes an image composed of images 21L and 21R, and outputs a left-right format image composed of left and right images 22L and 22R.
  • the two images 22L and 22R in the left and right formats are down-sampled into two independent images 23L and 23R in a non-standard playback format with half a pixel, respectively.
  • FIG. 8 is a schematic diagram of a side-by-side image.
  • the left and right images collected by two independent cameras form images 25L and 25R on the left and right independent image sensors through the left and right traditional imaging circles 24L and 24R, respectively.
  • the stereo image processor corrects, processes, and optimizes the left and right independent images 25L and 25R, and outputs the left and right independent images 26L and 26R, respectively.
  • the two images 26L and 26R are down-sampled into images 27L and 27R with half pixels, respectively.
  • the two images 27L and 27R are spliced together in a left-right manner to form a traditional left-right format image 28.
  • the left and right images 28L and 28R of a conventional left-right format image 28 are horizontally expanded and become two independent standard-playback-format images 27L and 27R, each with half the pixels.
  • FIG. 9 is a schematic diagram of the comparison between the left and right formats of a fan and the traditional left and right formats.
  • the two independent cameras described in [0072] above form images 25L and 25R on two independent image sensors through the two imaging circles 24L and 24R, respectively.
  • the image 25L or 25R becomes the image 17L or 17R in the left-right format with the fan described in [0070] above, and the imaging circle 24L or 24R becomes the imaging ellipse 29L or 29R, respectively.
  • the imaging ellipse 29L or 29R is the same as the image imaging ellipse 15L or 15R of the left and right formats of the fan.
  • the shades 30 and 32 in the figure are parts of an imaging circle 24L or 24R and an imaging ellipse 15L or 15R that have not been received or imaged by the image sensor.
  • the shadow 31 is a result of the shadow 30 being compressed in the horizontal direction. Shadow 31 is equal to shadow 32, indicating that the image efficiency of the two different image formats is equal.
  • FIG. 10 is a schematic diagram of a single-instrument stereo medical endoscope. Shown in FIG. 10 is the front end face 33 of a stereo medical endoscope, including two optical lens modules 34 of a stereo camera, an endoscope instrument channel 35, a gas-liquid channel 36, three light fixtures 37 of different wavelengths, and three LED lights 38.
  • FIG. 11 is a schematic diagram of a dual-instrument stereo medical endoscope. Shown in FIG. 11 is the front end face 39 of a stereo medical endoscope, including two optical lens modules 34 of a stereo camera, two endoscope instrument channels 35, a gas-liquid channel 36, three light fixtures 37 of different wavelengths, and three LED lights 38.
  • Figure 12 is a schematic diagram of a dual-instrument channel medical endoscope operating handle.
  • a medical endoscope operating handle 40 with two instrument channels is provided with two different instrument channel access ports 41 and 42.
  • the diameters of the two instrument channel access ports 41 and 42 may be the same or different.
  • FIG. 13 is a schematic diagram of a medical endoscope table.
  • fixed to the medical endoscope table 43 shown in FIG. 13 are a stereoscopic touch screen 44, a medical endoscope operating handle 40 having dual instrument channel access ports 41 and 42, and a medical endoscope stabilizer 46.
  • the operation handle 40 is fixed on the table by a holder 45.
  • the doctor can control the start and stop of the devices fixed on the work table 43 by operating, with a foot, the plurality of foot switches 48 provided on one foot pedal 47.
  • FIG 14 is a schematic diagram of a medical endoscope stabilizer.
  • a medical endoscope stabilizer 46 includes a lower snap ring 49, an upper snap ring 50, an upper electromagnet 51, a lower electromagnet 52, a return spring 53, a fixed base 54, a shockproof soft gasket 55, upper and lower snap ring gaskets 56, a slide guide 57 and a snap ring pressure adjustment knob 58.
  • the upper and lower retaining rings 49 and 50 in a medical endoscope stabilizer 46 are in an open state.
  • the medical endoscope stabilizer is in a working state.
  • the upper electromagnet 51 is attracted and moved downward by the lower electromagnet 52, and the snap rings clamp the endoscope hose 59 so that the endoscope hose 59 is held in the middle of the upper and lower snap rings 49 and 50 and cannot move forward or backward.
  • Figure 15 is a schematic diagram of a stereo image collection space.
  • the left and right cameras 60 and 61 are rotated about their lens centers toward an object of interest 62 at the same time until their center lines converge on the object of interest 62 being shot.
  • This is the traditional method of stereo shooting, the convergence method. It is the same as the way people view the world with their eyes.
  • the distance between the lens centers of the left and right cameras 60 and 61 is t.
  • the scene in front of the object of interest 62 is called the foreground 63, and the scene behind is called the back scene 64.
  • the origin 0 (0,0,0) of the stereo image collection space coordinate system is located at the midpoint of the line connecting the center of the left and right camera lenses.
  • FIG. 16 is a schematic diagram of a stereo video playback space.
  • the left and right images captured by the left and right cameras 60 and 61 in the above [0079] are respectively projected onto a flat screen 67 having a horizontal length W.
  • the horizontal distance between the left and right images on the screen is the parallax P of the left and right images.
  • a person's left eye 65 and right eye 66 can only see the left image and the right image on the screen 67, respectively; the human brain perceives depth after fusing the two images with different perspectives obtained by the left eye 65 and the right eye 66.
  • the virtual image 68 corresponding to the object of interest 62 appears on the screen.
  • the object of interest 62 seen by the viewer's eyes 65 and 66 on the flat screen 67 is a virtual image 68 in which two left and right images are superimposed together.
  • a virtual image 69 corresponding to the foreground object 63 appears in the audience space.
  • a virtual image 70 corresponding to the background object 64 appears in the screen space.
  • the origin 0 (0,0,0) of the stereo image playback space coordinate system is located at the midpoint of the line connecting the eyes of a person.
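The patent's own playback-space equations are elided in this extract, but the relation between on-screen parallax and the position of the fused virtual image described above can be sketched with the standard stereo viewing geometry: for eye separation T and viewing distance L, similar triangles give a fused image at Z = T·L / (T − P). The numeric values below are illustrative.

```python
def perceived_depth(T, L, P):
    """Depth of the fused virtual image for eye separation T, viewing
    distance L and on-screen parallax P (all in the same units).
    P = 0: image on the screen; P > 0 (uncrossed): screen space behind
    the screen; P < 0 (crossed): audience space in front of the screen.
    Textbook relation, not the patent's own (elided) equation."""
    if P >= T:
        raise ValueError("parallax must be smaller than the eye separation")
    return T * L / (T - P)

# eyes 65 mm apart, viewer 2000 mm from the screen
print(perceived_depth(65, 2000, 0))     # 2000.0 -> on the screen
print(perceived_depth(65, 2000, 32.5))  # 4000.0 -> screen space
print(perceived_depth(65, 2000, -65))   # 1000.0 -> audience space
```

This reproduces the three cases above: the virtual image 68 on the screen, the background image 70 in screen space, and the foreground image 69 in audience space.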
  • FIG. 17 is a schematic diagram showing the equivalent principle of the convergence method and the equivalent convergence method.
  • the left and right cameras 60 and 61 use another shooting method when shooting the same object of interest 62: the parallel method, or equivalent convergence method.
  • the center lines of the left and right cameras 60 and 61 are parallel to each other and at a distance of t.
  • the image sensors 71 and 72 in the two cameras 60 and 61 are respectively translated by a distance h in the horizontal direction toward the opposite direction to each other before shooting.
  • the object of interest 62 is imaged on the centers of the image sensors 71 and 72 in two different shooting methods, respectively.
  • the equivalent convergence method not only solves the problem of trapezoidal distortion in the convergence method, but also can obtain some practical stereoscopic image effects through a series of mathematical relations established through geometric relations and optical theory. According to the geometric relationship shown in Figure 17-2, we get the following relationship,
  • Equation (5) shows that the apparent distance between the two cameras is not equal to the distance between the eyes of a person.
  • Equation (6) shows that Zc and Z are not linear.
  • Ideal imaging means that any point, straight line, or surface in the stereo image collection space corresponds to a unique point, straight line, or surface in the stereo image playback space. Ideal imaging lets the two images obtained in the stereo image collection space fuse into a stereo image in the stereo image playback space without distortion or deformation. Its necessary and sufficient condition is that the mathematical relationship between corresponding points in the two spaces be linear. Equation (6) shows that the necessary and sufficient condition for the linear relationship between Zc and Z to hold is
  • Equation (7) shows that two images with different perspectives obtained at any point in the stereo image collection space correspond to a single point in the stereo image playback space, and convergence is achieved at that point.
  • the left and right images obtained by the equivalent convergence method not only can obtain a more ideal three-dimensional image effect than the convergence method, which conforms to the way and habits of people's eyes to see the world, but also has no trapezoidal distortion.
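Equations (5) through (7) are not reproduced in this extract, but the sensor translation h used by the equivalent convergence method can be sketched with classical parallel-rig geometry: shifting each sensor by h = f·t / (2·Z0) toward the other centers an object at distance Z0 on both sensors without rotating the cameras (and hence without trapezoidal distortion). The focal length, baseline, and object distance below are illustrative.

```python
def sensor_shift(f, t, z0):
    """Horizontal shift h of each image sensor toward the other that
    centers an object at distance z0 on both sensors (equivalent
    convergence with parallel camera axes). Classical parallel-rig
    geometry; the patent's own equations are elided in this extract."""
    return f * t / (2.0 * z0)

# a 4 mm lens, 6 mm baseline, object of interest 60 mm away
print(sensor_shift(4.0, 6.0, 60.0))  # 0.2 mm per sensor
```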
  • the image in the left and right format of the fan is processed or post-processed.
  • FIG. 18 is a schematic diagram of positions of a left and right image for determining a point of interest in a left and right format image screenshot.
  • One includes left and right format image screenshots of a focus point a on the surface of the object of interest, left image screenshot 73 and right image screenshot 74.
  • the horizontal distance of the left image 75 of the point of interest a in the left image screenshot 73 from the center of the left image screenshot 73 is X L. According to the sign rule described in [0048] above, X L < 0.
  • the horizontal distance between the right image 76 of the point of interest a in the right image screenshot 74 and the center of the right image screenshot 74 is X R > 0.
  • the position of the left image 75 of the point of interest a in the left image screenshot 73 and the position of the right image 76 in the right image screenshot 74 are located on the same horizontal line 77 across the screen.
  • the left and right images of a point of interest a may both lie in a single left-right format image screenshot, or the left and right image screenshots may be two independent image screenshots.
  • FIG. 19 is a schematic diagram showing the principle that two left and right images of a point of interest correspond to one spatial coordinate. According to the geometric relationship shown in FIG. 19, the following relationship is obtained,
  • the spatial coordinates a (x, y, z) of a point of interest a follow, one expression each for the x, y and z components;
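The patent's expressions for x, y, and z are elided in this extract; the following is a minimal sketch of the classical parallel-camera triangulation they are based on, recovering a 3D point from the signed horizontal positions of its two images. The sign convention, focal length f, and baseline t below are assumptions for illustration.

```python
def triangulate(x_left, x_right, y, f, t):
    """Classical parallel-camera triangulation. x_left and x_right are the
    signed horizontal positions of the point's two images, each measured
    from its own image center; y is the common vertical position; f is the
    focal length and t the baseline (stereo base). Assumes the standard
    convention in which the left camera sees the point further right, so
    disparity d = x_left - x_right > 0 for points at finite depth."""
    d = x_left - x_right
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    z = f * t / d
    x = z * (x_left + x_right) / (2.0 * f)
    y3 = z * y / f
    return x, y3, z

# symmetric 0.6 mm disparity with f = 4 mm, t = 6 mm puts the point 40 mm out
print(triangulate(0.3, -0.3, 0.0, f=4.0, t=6.0))  # (0.0, 0.0, 40.0)
```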
  • FIG. 20 is a schematic diagram of measuring the distance from a point of interest a on the surface of the object of interest to the endoscope. According to the process and method described in [0051] above, the positions X La and X Ra of the left and right images of the focus point a in the left and right image screenshots 73 and 74 are determined, respectively.
  • the endoscope measurement method will calculate the distance from the point of interest a to the center of the outer surface of the front surface of the endoscope 59 as:
  • c is the distance from the origin of the coordinate system to the outer surface of the front end surface of the endoscope.
  • FIG. 21 is a schematic diagram of measuring the distance between two points of interest a and b on the surface of the object of interest. According to the process and method described in [0052] above, the positions X La , X Ra , X Lb and X Rb of the left and right images of the points of interest a and b in the left and right image screenshots 73 and 74 are determined, respectively.
  • the endoscope measurement method will calculate the distance between two points of interest a and b on the surface of the object of interest as:
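Once both points of interest have been triangulated into spatial coordinates, the measured distance is the ordinary Euclidean distance between them; this sketch uses illustrative coordinates, not values from the patent.

```python
import math

def distance(p, q):
    """Euclidean distance between two triangulated points of interest,
    each given as an (x, y, z) tuple."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(distance((0.0, 0.0, 40.0), (3.0, 4.0, 40.0)))  # 5.0
```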
  • FIG. 22 is a schematic diagram of measuring the distance from a point of interest a on the surface of an object of interest to a straight line passing through two characteristic points b and c.
  • the positions X La and X Ra of the left and right images of the point of interest a in the left and right image screenshots 73 and 74 are determined.
  • the positions X Lb , X Rb , X Lc and X Rc of the two left and right images of the two feature points b and c located on a straight line in the left and right image screenshots 73 and 74 are respectively determined.
  • the endoscope measurement method will calculate the distance from a point of interest a on the surface of the object of interest to a straight line that passes through two characteristic points b and c;
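The patent's expression for this distance is elided; a standard way to compute it from the three triangulated points is the cross-product identity |(a−b) × (c−b)| / |c−b|, sketched below with illustrative coordinates.

```python
def point_line_distance(a, b, c):
    """Distance from point a to the straight line through points b and c
    (all points as (x, y, z) tuples), via |(a-b) x (c-b)| / |c-b|."""
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    norm = lambda w: sum(x * x for x in w) ** 0.5
    return norm(cross) / norm(v)

print(point_line_distance((0, 3, 0), (0, 0, 0), (5, 0, 0)))  # 3.0
```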
  • FIG. 23 is a schematic diagram of measuring the distance from a point of interest a to a plane 78 on the surface of the object of interest.
  • the first step is to determine the positions X La and X Ra in the left and right image screenshots 73 and 74 of the left and right images of the point of interest a according to the process and method described in [0054] above.
  • the positions X Lb , X Rb of the three left and right images of the three feature points b, c, and d that are not all on the same straight line in the two left and right image screenshots 73 and 74 are determined on the plane 78.
  • X Lc , X Rc , X Ld and X Rd are determined on the plane 78.
  • the endoscope measurement method will calculate the distance from a point of interest a on the surface of the object of interest to a plane 78 including three characteristic points b, c, and d;
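The patent's expression is again elided; the standard computation builds the plane normal from the three feature points and projects (a−b) onto it. Coordinates below are illustrative.

```python
def point_plane_distance(a, b, c, d):
    """Distance from point a to the plane through three non-collinear
    feature points b, c, d: |(a-b) . n| / |n| with n = (c-b) x (d-b)."""
    u = [ci - bi for ci, bi in zip(c, b)]
    v = [di - bi for di, bi in zip(d, b)]
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    w = [ai - bi for ai, bi in zip(a, b)]
    num = abs(sum(wi * ni for wi, ni in zip(w, n)))
    return num / (sum(ni * ni for ni in n) ** 0.5)

print(point_plane_distance((0, 0, 7), (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # 7.0
```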
  • Figure 24 is a schematic diagram of measuring the surface area of a flat object.
  • a closed-loop stitching curve 79 including the surface area of the plane of interest 80 is drawn on the stereoscopic touch screen.
  • the endoscopic measurement method will calculate the area enclosed by a closed-loop stitching curve 79. This area is the orthographic projection of the actual surface area of the plane of interest 80 onto a plane perpendicular to the center line (Z axis) of the stereo camera.
  • the endoscopic measurement method calculates the actual surface area of the plane of interest 80 as the projected area obtained in the first step divided by the cosine of the angle between the normal vector N, determined by the three feature points b, c, and d on the surface of the plane of interest 80, and the Z axis.
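The projected-area correction in the second step can be sketched directly: the true area equals the projected area divided by cos θ, where θ is the angle between the surface normal N and the Z axis. The normal and areas below are illustrative.

```python
def true_area(projected_area, normal, z_axis=(0.0, 0.0, 1.0)):
    """Actual surface area of a tilted plane from its orthographic
    projection along the camera center line: projected_area / cos(theta),
    theta being the angle between the surface normal N and the Z axis."""
    dot = abs(sum(n * z for n, z in zip(normal, z_axis)))
    nlen = sum(n * n for n in normal) ** 0.5
    cos_theta = dot / nlen
    return projected_area / cos_theta

# a plane tilted 60 degrees from the image plane: cos(theta) = 0.5
print(true_area(10.0, (0.0, 3 ** 0.5, 1.0)))  # 20.0
```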
  • Figure 25 shows a schematic diagram of measuring the volume of a flat object.
  • a method and steps for measuring the volume of the flat plate of interest 82 surrounded by the closed-loop stitching curve 81. In the first step, the actual area of the surface of the flat plate of interest 82 surrounded by the closed-loop stitching curve 81 is obtained according to the process and method described in [0088] above.
  • the two left and right images with thickness feature points a and b on the flat plate 82 of interest are respectively determined in the left and right image screenshots 73 and 74. Positions X La , X Ra , X Lb and X Rb .
  • the stereo measurement method will calculate the actual thickness of the flat plate of interest 82 as the distance between the two feature points a and b multiplied by the cosine of the angle between the vector ab formed by the two feature points and the normal vector N of the surface of the flat plate of interest 82.
  • the actual volume of a flat plate of interest 82 surrounded by a closed loop curve 81 is equal to the actual area of the surface of the flat plate 82 times the actual thickness.
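The thickness and volume steps above can be sketched together: the thickness is the projection of the edge vector ab onto the unit surface normal (|ab|·cos θ), and the volume is the actual area times that thickness. The coordinates and area below are illustrative.

```python
def plate_volume(area, a, b, normal):
    """Volume of a flat plate: actual surface area times actual thickness,
    where the thickness is |ab| * cos(angle between ab and the surface
    normal N), i.e. the projection of the edge vector ab onto N."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    nlen = sum(n * n for n in normal) ** 0.5
    thickness = abs(sum(v * n for v, n in zip(ab, normal))) / nlen
    return area * thickness

# thickness feature points 2 mm apart along the normal of a 15 mm^2 plate
print(plate_volume(15.0, (0, 0, 0), (0, 0, 2), (0, 0, 1)))  # 30.0
```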
  • Figure 26 is a schematic diagram showing the cross-section measurement of a surface crack on a flat object.
  • a crack 83 appears on the surface of an object of interest.
  • the measurement contents of the crack 83 include the crack width, the longitudinal length, the surface crack cracking area, and the shape and depth of the opening at the surface crack cross section 84.
  • the width, longitudinal length, and surface crack area of the crack 83 were obtained according to the procedures and methods described in [0052], [0056], and [0057] above, respectively.
  • the first step is to adjust the center line of the endoscope to be parallel to the longitudinal direction of the crack 83 and to the surface of the object.
  • screenshots 73 and 74 of a left-right format image are collected.
  • Figure 26-2 shows the shape and depth of the opening portion 85 at the crack cross section 84.
  • the distance V between the left and right edges of the opening portion 85 at the crack cross section 84 and the two intersection points a and b of the surface of the object of interest is determined, where V is the surface crack width of the crack 83 at the cross section 84.
  • in the third step, only one of the image screenshots 73 or 74 is retained, and the retained image screenshot is enlarged to the full screen. Using a touch screen pen, a finger or a mouse, the feature points X L1 , X L2 , X L3 , ... on the left edge and the feature points X R1 , X R2 , X R3 , ... on the right edge of the opening portion 85 at the crack cross section 84 are determined, respectively.
  • the endoscopic measurement method will calculate the position of each feature point on the left and right edges of the opening portion 85 at the crack cross section 84.
  • the left and right edges of the opening portion 85 at the crack cross section 84 are composed of straight segments connecting, in sequence and starting from points a and b respectively, the adjacent feature points X L # and X R # on the left and right edges. The vertical coordinates y L # and y R # between each feature point X L # or X R # and the points a and b represent the depth of that feature point below the surface of the object of interest.
  • Figure 27 shows a schematic cross-section measurement of a surface-damaged depression.
  • a recessed portion 86 appears on the surface of an object of interest.
  • the measured contents of the recessed portion 86 include the width, length, area, shape of the cross section 87 and the maximum depth of the recessed portion.
  • the width, length, and surface recessed area of the surface recessed portion 86 of the object of interest are obtained according to the processes and methods described in [0052], [0056], and [0057] above.
  • the first step is to adjust the center line of the endoscope parallel to the surface of the recessed part of the object, and screenshots 73 and 74 of a left-right format image are collected on the stereoscopic touch screen 44.
  • Figure 27-2 shows the shape of the recessed cross section of the cross section 87.
  • the distance U between the two intersection points a and b of the cross section 87 with the object surface is determined.
  • the fourth step is to retain one of the image screenshots 73 or 74 and enlarge the retained image screenshot to the full screen. Using a stylus, a finger or a mouse, draw a splicing curve 89 between the two intersections a and b along the edge of the recessed part in the image screenshot.
  • the closed-loop stitching curve on the concave cross section 87 of the object surface is composed of a curve 88 having a radius of curvature R and the stitching curve 89 drawn along the edge of the depression in the image.
  • the fifth step is to determine the position of the lowest point c on the cross section 87 in an image screenshot.
  • the endoscopic measurement method will then calculate, from the position of point c, the depths y a and y b relative to the points a and b, and the area of the cross section 87 between the points a and b.
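Once the points sampled along a cross-section profile have been triangulated and projected into the cutting plane, the enclosed area can be estimated with the shoelace formula. This is a sketch of that last step; the sampled notch below is illustrative, not data from the patent.

```python
def polygon_area(points):
    """Absolute area of the closed polygon whose 2D vertices approximate a
    measured cross section (e.g. the depression profile between the
    intersection points a and b), via the shoelace formula."""
    s = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# a 4-wide, 1-deep rectangular notch sampled along its edge
print(polygon_area([(0, 0), (4, 0), (4, -1), (0, -1)]))  # 4.0
```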

Abstract

A stereo endoscope and endoscopic measurement method. The stereo endoscope comprises an optical fan stereo camera, a dual-instrument-channel medical endoscope and operating handle (40), a medical endoscope stabilizer (46), and a medical endoscope table (43). The stereo image captured by the optical fan stereo camera is compressed horizontally by the optical fans; the resulting fan left-right format image has the same horizontal viewing angle, resolution, image efficiency, and half-pixel standard playback format as a traditional left-right format image. The stereo endoscope combines the advantages of the optical fan stereo camera with dual-instrument-channel endoscope and endoscope operating handle (40) technology, allowing a doctor to use both hands to operate two instruments simultaneously during minimally invasive surgery, which not only greatly improves the accuracy, stability, quality, and efficiency of the surgery but also resolves the problem of hand-eye separation. The endoscopic measurement method can measure masses, mucosa, and diseased tissue discovered during minimally invasive surgery in real time, and can be applied to medical endoscopes, industrial endoscopes, endoscopic measurement, and other stereo imaging applications.

Description

A stereo endoscope and endoscopic measurement method. Technical Field
The present invention relates to a stereo endoscope, and to an optical fan stereo camera, an optical fan left-right image format, a dual-instrument-channel medical endoscope and endoscope operating handle, an endoscopic measurement method, and endoscope devices and systems.
Background Art
There are two mainstream dual-lens, single-image-sensor stereo imaging technologies. In the first, two independent optical lens modules project the two independent images they capture, each with a different perspective, through two imaging circles onto the left half and right half of a single image sensor. In the second, the two independent images with different perspectives captured by two independent optical lens modules are re-imaged by a lens group and then projected through a single imaging circle onto one image sensor. In the left-right format image obtained by either of these imaging-circle technologies, the left and right images suffer from a small horizontal viewing angle, low image efficiency, and a small playback format.
Traditional flexible medical endoscope technologies and products have the following main shortcomings (among others). First, the captured images contain no depth information. Second, suspicious masses, mucosa, and diseased tissue found during surgery cannot be measured in real time with high precision. Third, the medical endoscope has only one instrument channel, so the endoscopist can operate only one instrument with one hand. Fourth, doctors operating endoscopes and laparoscopes are still troubled by hand-eye coordination and hand-eye separation. Fifth, the stability of the endoscope lens and instrument outlet during surgery is insufficient, especially during mucosal dissection procedures.
In recent years, digestive endoscopic treatment techniques represented by ESD have developed rapidly and can now completely resect superficial gastrointestinal cancers and submucosal tumors without external trauma, achieving the same results as open surgery. However, doctors still perform these operations using traditional flat imaging combined with one-handed instrument operation.
Natural orifice transluminal endoscopic surgery (NOTES) and Da Vinci robotic-arm laparoscopic minimally invasive surgery have demonstrated that if a doctor can use both hands to operate two surgical instruments simultaneously in a magnified stereoscopic imaging environment, the accuracy, stability, quality, and efficiency of laparoscopic and minimally invasive surgery can be greatly improved.
To date, industrial endoscope technologies and products still have three main shortcomings (among others). First, the captured images contain no depth information. Second, binocular measurement technology cannot measure the shape of surface cracks and their cross sections, or the surface relief and cross-sectional shapes caused by impact or corrosion. Third, some of the problems found during on-site inspection cannot be handled in real time.
To solve the above problems in traditional medical and industrial endoscope technology, the present invention proposes a stereo endoscope and an endoscopic measurement method.
Summary of the Invention
The object of the present invention is to propose a stereo endoscope and an endoscopic measurement method that solve the following technical problems (among others): first, images captured by flexible endoscopes contain no depth information; second, traditional medical endoscopes have only one instrument channel, so the endoscopist must operate instruments with one hand; third, doctors performing endoscopic and minimally invasive surgery are still troubled by hand-eye coordination and hand-eye separation; fourth, traditional medical endoscopes cannot measure suspicious masses, mucosa, and diseased tissue found during surgery in real time with high precision, and binocular stereo industrial endoscopes cannot measure surface cracks or the cross-sectional shape of damaged surface relief; fifth, the stability of the endoscope image and the endoscope's distal instrument channel during surgery.
A stereo endoscope comprises an optical fan stereo camera, a dual-instrument-channel medical endoscope, a dual-instrument-channel medical endoscope operating handle, a medical endoscope stabilizer, a medical endoscope table, a stereo image processor, a stereo image translation method, an endoscopic measurement method, and an operating system. A stereo endoscope is one that uses an optical fan stereo camera as its endoscope camera.
An optical fan stereo camera comprises two identical optical lens modules, an image sensor (CCD or CMOS), and a stereo image processor. The center lines of the two identical optical lens modules are symmetric about, and parallel to, the center line of the camera. Each optical lens module contains an optical fan. The optical fan compresses the image captured by the module along a straight line that lies in the plane containing the two module center lines and is perpendicular to them, and leaves the image unchanged along a straight line perpendicular to that plane. After passing through their respective optical fans, the two captured images are formed on the left half and right half of the imaging surface of the same image sensor. Traditional dual lenses or dual lens modules project their captured images onto the two halves of a shared image sensor through imaging circles; the two optical lens modules of an optical fan stereo camera project theirs through imaging ellipses. The image compression ratio of an optical fan along the in-plane perpendicular direction can range from zero (0%) to fifty percent (50%); along the direction perpendicular to the plane of the two center lines it is zero (0%). The compression ratio in a given direction is defined as [(image length before compression - image length after compression) ÷ (image length before compression)] × 100%.
An optical fan, also called a light fan, is composed of two cylindrical lenses whose axes are perpendicular to each other. A cylindrical lens here is a positive cylindrical lens or element, whose curved surface may be cylindrical or acylindrical. The axis of one cylindrical lens lies in the plane containing the two optical lens module center lines and is perpendicular to them; the axis of the other cylindrical lens is perpendicular to that plane. The centers of both cylindrical lenses lie on the module center line. The image compression ratios in the two principal meridian planes of the fan differ; if the images in both principal meridian planes are in focus, then the images in all meridian planes are in focus. This is one of the conditions an optical fan system must satisfy to obtain high-quality images. Within one fan, when a meridian plane of a cylindrical lens makes an angle η with its principal meridian plane, the optical power in that meridian plane is Gη = G0 × cos²η, where G0 is the power in the principal meridian plane. When the axes of the two cylindrical lenses are at 90° to each other, the same meridian makes an angle η with one lens and (90° - η) with the other, and sin²η + cos²η = 1.
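The meridian-power relation Gη = G0 × cos²η can be sketched numerically; the principal-meridian power G0 and angle η below are illustrative values, not from the patent.

```python
import math

def meridian_power(g0, eta_deg):
    """Optical power of a cylindrical lens in a meridian making angle eta
    (degrees) with its principal meridian: G_eta = G0 * cos(eta)^2."""
    return g0 * math.cos(math.radians(eta_deg)) ** 2

g0 = 10.0  # illustrative principal-meridian power (diopters)
eta = 30.0
# for two cylinders crossed at 90 degrees the same meridian sees eta and
# 90 - eta, so the two powers always sum to G0 (sin^2 + cos^2 = 1)
print(meridian_power(g0, eta) + meridian_power(g0, 90.0 - eta))  # 10.0
```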
The image format output by the optical fan stereo camera is the fan left-right format. The left and right images in this format are the two images captured by the corresponding optical lens modules, each compressed by half by the fan in its module along a straight line lying in the plane of the two module center lines and perpendicular to them, and left unchanged along the direction perpendicular to that plane. When the two optical lens modules are placed horizontally, the left and right images of the fan left-right format are compressed fifty percent (50%) horizontally and zero percent (0%) vertically.
Traditional stereo image acquisition uses two independent cameras shooting an object of interest synchronously; the two independent images with different perspectives are each down-sampled and then spliced side by side into a traditional left-right format image. This approach has become a de facto standard for stereo image acquisition. Traditional left-right format images satisfy the image transmission standards of the major markets, countries, and industries, as well as stereo player and stereo playback format standards; they offer high-quality stereo effects, a large viewing angle, high image efficiency, and a half-pixel stereo playback format. Compared with the traditional left-right format, the two images of the fan left-right format not only have the same horizontal viewing angle, resolution, image efficiency, and half-pixel standard playback format, compatible with all stereo players and playback format standards, but also offer synchronous imaging, lower latency, a simpler structure, and lower cost.
A traditional camera projects the captured image through an imaging circle onto a rectangular image sensor surface of horizontal length w, vertical height v, and area w × v. If the imaging surface is a maximum inscribed rectangle of the imaging circle, the circle's diameter is D = (w² + v²)^1/2. The optical fan in each lens module of an optical fan stereo camera deforms the traditional imaging circle into an imaging ellipse whose maximum inscribed rectangle has horizontal length w/2, vertical height v, and area (w × v)/2, equal to half the area of the sensor imaging surface. The vertical semi-major axis of the ellipse is a = v/√2 and the horizontal semi-minor axis is b = w/(2√2). The two images captured by the two lens modules, after passing through their respective fans, are projected through imaging ellipses onto the left half and right half of the sensor imaging surface.
The present invention involves three different left-right image formats. The first is the fan left-right format described in [0012] above, the format output by the optical fan stereo camera. The second is the left-right format, output by a dual-lens, single-image-sensor stereo camera. The third is the traditional left-right format described above.
Compared with the left-right format, the fan left-right format has the following characteristics (for two horizontally placed optical lens modules). First, unpacking the two fan left-right images means doubling each horizontally, yielding two standard-playback-format images with half the pixels each; unpacking the two left-right format images requires down-sampling each, yielding two images with half the pixels and a horizontally smaller, non-standard playback format. Second, the horizontal viewing angle, resolution, and image utilization of the two fan left-right images are all greater than those of the two left-right format images. Third, fan left-right imaging is an optical process and its unpacking needs no algorithm, whereas the down-sampling performed when unpacking left-right format images is an image-processing algorithm.
Compared with the traditional left-right format, the fan left-right format has the following characteristics (for two horizontally placed optical lens modules). First, the two images of the two formats have the same resolution both before and after unpacking. Second, after unpacking, the two independent images of either format have the same horizontal viewing angle, resolution, image efficiency, and half-pixel standard playback format. Third, during shooting the two fan left-right images are precisely synchronized, while the two traditional left-right images need third-party synchronization technology and equipment, or post-synchronization after shooting. Fourth, the two fan left-right images are formed directly by optical means; the two traditional left-right images require down-sampling and side-by-side splicing, both image-processing algorithms. Fifth, fan left-right imaging needs one image sensor; traditional left-right imaging needs two. In summary, the two formats have the same horizontal viewing angle, resolution, image efficiency, and standard playback format, but the fan left-right format offers synchronous imaging, lower latency, a simpler structure, and lower cost.
The distance between the center lines of the two optical lens modules of an optical fan stereo camera is t, the camera's stereo base (interaxial distance), which is between 3 mm and 200 mm.
The two optical lens modules of an optical fan stereo camera are identical in focal length, viewing angle, aperture, optical elements, number of elements, lens center position, element materials, the surface coating on each corresponding element, optical design, structural design, and all other parameters.
There are three different models of optical fan stereo camera, which respectively use the first, second, and third optical lens module designs described below, each with a different optical and structural design.
The first optical lens module design comprises a lens group, an optical fan, and a right-angle prism. An oblique flat plate element is provided in the lens group. The plane containing the two module center lines is parallel to the image sensor imaging surface. The rearmost right-angle prism in each module totally reflects the image from the front, bends it downward by 90°, and projects it onto the left or right half of the image sensor imaging surface.
The second optical lens module design comprises two lens groups, two right-angle prisms or one rhomboid prism, an optical fan, and a right-angle prism. The two right-angle prisms or the rhomboid prism sit between the two lens groups. The plane containing the two module center lines is parallel to the image sensor imaging surface. The rearmost right-angle prism in each module totally reflects the image from the front, bends it downward by 90°, and projects it onto the left or right half of the image sensor imaging surface.
The third optical lens module design comprises two lens groups, two right-angle prisms, and an optical fan. For two horizontally placed modules, the two right-angle prisms sit between the two lens groups: the position of one prism is fixed, while the other may be fixed or may move along a horizontal straight line perpendicular to the two module center lines. The center line of the movable prism's exit face coincides with that of the fixed prism's entrance face. The center line of the movable prism's entrance face coincides with that of the lens group in front of it; their relative position is unchanged, and they can move synchronously along a horizontal straight line perpendicular to the two module center lines. The center lines of the left and right modules are perpendicular to the image sensor imaging surface and pass through the centers of its left and right halves, respectively.
The lens groups in the three optical lens module designs described in [0021], [0022], and [0023] above each consist of a set of elements, which may be spherical or aspherical, or all aspherical.
In the first and second module designs described in [0021] and [0022] above, one right-triangle surface of the rearmost right-angle prism is coated. The coating is opaque, absorbs light striking its surface, and is non-reflective. The two rearmost prisms of the two modules are placed together or bonded together along their coated surfaces.
The three camera models described in [0020] above are each provided with a light-shielding plate, a thin polygonal flat plate whose surface is coated or covered with a material that absorbs incident light and is non-reflective. The plate is placed on the camera center line, perpendicular to the plane of the two module center lines. For the first and second camera models, one straight edge of the plate meets the coincident line formed where the two corresponding right-angle edges of the coated triangular prism surfaces are placed or bonded together. For the third model, the plate is perpendicular to the image sensor imaging surface, with one straight edge parallel to the imaging surface, very close to but not intersecting it.
The stereo image processor is a device integrating an image signal processing (ISP) chip, a wireless communication module, a perception module and positioning module, a stereo image translation method, a stereo measurement method, and an operating system.
The image processing chip corrects, processes, and optimizes the fan left-right format image output by the optical fan stereo camera, including (but not limited to) white balance, color interpolation, saturation, brightness, sharpness, contrast, and other parameters.
The stereo image translation method shifts the left and right images of the fan left-right format image toward each other along a straight line lying in the plane of the two module center lines and perpendicular to them, one image by h = T ÷ (4A × e) pixels and the other by h' = T ÷ (4A × e) + 1 or h' = T ÷ (4A × e) - 1 pixels, where T is the distance between a person's eyes, A is the screen magnification, and e is the horizontal distance between two adjacent pixels of the image sensor.
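The pixel shift h = T ÷ (4A × e) can be computed directly; the interocular distance T, magnification A, and pixel pitch e below are illustrative values, not from the patent.

```python
def shift_pixels(T, A, e):
    """Number of pixels each half-image is shifted toward the other:
    h = T / (4 * A * e), with T the viewer's interocular distance,
    A the screen magnification, and e the horizontal pixel pitch of
    the image sensor (T and e in the same length units)."""
    return T / (4.0 * A * e)

print(shift_pixels(T=65.0, A=100.0, e=0.00325))  # 50.0 pixels
```

The companion shift h' is then simply this value plus or minus one pixel, as described above.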
实现立体影像平移的方法有多种,下面是其中的三种方法;
第一种方法;第一步,以光扇左右格式的影像的垂直中心线为基准。对于光扇左右格式的影像中的左影像,沿着中心线向左方向上的一条距离中心线h=T÷(4A×e)个像素的垂直直线上对左影像进行剪切,保留剪切后垂直直线左边的影像部分。对于光扇左右格式的影像中的右影像,沿着中心线向右方向上的一条距离中心线h’=T÷(4A×e)+1或h’=T÷(4A×e)-1个像素的垂直直线上对右影像进行剪切,保留剪切后垂直直线右边的影像部分。第二步,将保留的左影像的右边缘与保留的右影像的左边缘对齐,拼接在一起成为一个新的光扇左右格式的影像。这种方法会造成新的光扇左右格式的影像中的左影像的左边缘和右影像的右边缘处分别有两个宽度为h和h’的垂直影像空白区。
第二种方法;第一步,分别将光扇左右格式的影像中的左右两个影像沿着水平方向上放大一倍并成为左右两个独立的影像。第二步,对于放大后的左影像,沿着左影像的右边缘向左方向上的一条距离右边缘h=T÷(2A×e)个像素的垂直直线上对左影像进行剪切,保留剪切后垂直直线左边的影像部分。对于放大后的右影像,沿着右影像的左边缘向右方向上的一条距离左边缘h’=T÷(2A×e)+1或h’=T÷(2A×e)-1个像素的垂直直线上对右影像进行剪切,保留剪切后垂直直线右边的影像部分。第三步,将被剪切后左右两个影像分别沿着水平方向上缩小一倍。第四步,沿着水平方向上将左影像的右边缘与右影像的左边缘对齐,拼接在一起成为一个新的光扇左右格式的影像。这种方法会造成新的光扇左右格式的影像中的左影像的左边缘和右影像的右边缘处分别有两个宽度为h/2和h’/2的垂直影像空白区。
第三种方法;使用不同的影像后期制作工具对光扇左右格式的影像进行后期制作获得影像平移的结果。这种方法无法对采集的立体影像进行现场直播。
一个无线通讯模块将经过立体影像处理器修正,处理,优化和平移后的影像、图片、语音和文字通过无线方式实时地输出到立体播放器、立体触模屏幕、远程控制中心、数据库、其它第三方并可以与第三方实时地进行多媒体互动和交流。
一个感知模块将对光扇立体摄像机输出的一个光扇左右格式的影像中的左右两个影像或左右两个影像截图中一个或多个关注物体的影像进行侦测和感知,模块附带的算法将计算出每一个被侦测和感知到的关注物体的影像分别在左右两个影像或左右两个影像截图中的位置和到左右两个影像或左右两个影像截图中心的距离。一个定位模块将对感知模块确定后的一个或多个关注物体的影像在实际中的位置进行定位。一个感知模块和定位模块的功能可以直接应用在一种立体测量方法中。
操作系统提供人机互动界面,操作指令,程序控制和管理,页面管理,影像管理和储存,操作系统兼容,第三方应用软件和APP兼容,通过有线或无线方式将经过修正,处理,优化和平移后的光扇左右格式的影像输出到立体播放器,立体触模屏幕,远程控制中心和数据库,其他第三方并可以与其他第三方实时地进行多媒体互动和交流。操作系统支持的输入和操作方式有触屏笔、手指、鼠标、键盘和语音。
一个立体触摸屏幕是立体影像处理器的人机互动,输入和操作界面。输入和操作方式有触屏笔,手指,鼠标,键盘和语音。立体触摸屏幕可以与立体影像处理器集成在一起或是彼此分开的两个不同的装置。如果立体触摸屏幕与立体影像处理器是分开的两个不同的装置,立体触摸屏幕中可以设置一个单独的无线通讯模块。
一种立体医疗内窥镜及系统包括一个拥有上述[0021]中所述的第一种模型的光扇立体摄像机、一个双器械通道的医疗内窥镜和一个双器械通道医疗内窥镜操作手柄、一个立体影像处理器、一个立体触摸屏幕、一个医疗内窥镜稳定器和一个医疗内窥镜工作台。一种立体医疗内窥镜输出的影像格式是光扇左右格式。
一个双器械通道医疗内窥镜是一个拥有两个独立的器械通道的医疗内窥镜。一个双器械通道医疗内窥镜操作手柄是一个拥有两个独立的器械通道和两个独立的器械通道接入口的医疗内窥镜操作手柄。一个双器械通道医疗内窥镜中的两个器械通道分别与一个双器械通道医疗内窥镜操作手柄上相对应的两个器械通道和器械通道接入口连接在一起。双器械通道医疗内窥镜中的两个器械通道的直径可以相同也可以不相同。双器械通道医疗内窥镜操作手柄中的两个器械通道和器械通道接入口的直径分别与各自连接的双器械通道医疗内窥镜中的两个器械通道的直径相等。一个双器械通道医疗内窥镜和双器械通道医疗内窥镜操作手柄能够让一个内窥镜医生使用自己的双手同时操作两个内窥镜器械进行内窥镜检查,治疗和手术。双手同时操作两个器械的技术和方式不仅使内窥镜医生的操作更加协调,自然和人性化,而且能够让内窥镜医生的操作更加准确、稳定、高效率及获得更好的手术效果。双器械通道内窥镜技术和操作模式同样可以使用在其他的微创手术中。
医疗内窥镜稳定器是一个拥有两个半圆形卡环的装置。两个卡环在工作时夹紧仍位于患者身体外的部分内窥镜软管使得整个内窥镜软管无法再继续前后移动和转动,同时稳定了已经位于患者身体内的部分内窥镜软管和软管最前端的摄像机镜头和器械通道出口的位置、方向和角度。内窥镜稳定器不仅能够让医生在操作软管式内窥镜的过程中可以将用于控制软管的一只手转而用于操作器械或其他工作,而且极大地提高了内窥镜手术过程中软管式内窥镜镜头和内窥镜器械的稳定性,减小了医生的眼睛疲劳,提高了效率,精确度和获得更好的手术结果。
医疗内窥镜工作台是一种可以将包括立体触摸屏幕、医疗内窥镜操作手柄和医疗内窥镜稳定器固定在一起的装置。立体触摸屏幕、医疗内窥镜操作手柄和医疗内窥镜稳定器在工作台上的位置和角度可以随时被调整。当医生的双眼与固定在工作台上的立体触摸屏幕之间的距离为立体视距Zs时,医生在立体触摸屏幕上看到的是一个被放大了m×A倍(x和y方向上)和m 2倍(z方向上)没有变形的立体影像。其中,m为光扇立体摄像机光学镜头模组的横向放大率。如果医生能够将工作台上的立体触摸屏幕的位置和角度和双器械通道医疗内窥镜操作手柄的位置和角度都调整到自己习惯和舒适的工作位置时,医生在内窥镜、腔镜或微创手术的检查和操作中将更加准确、稳定和高效,并且获得更好的结果,极大地减小了手眼分离给医生带来的协调性困扰。医疗内窥镜工作台连接着一个或多个开关的脚踏板,医生可以通过脚踏板上的开关控制固定在工作台上的装置。
一种立体工业内窥镜包括一个拥有上述[0021]和[0022]中所述的第一种或第二种模型的光扇立体摄像机的内窥镜、一个立体触模屏幕和一个立体影像处理器。为了满足不同应用领域中的用户提出不同的需求和目的,立体工业内窥镜中可以增设一个气液通道和一个器械通道。这样,立体工业内窥镜不仅具有立体影像,检查和测量的功能,而且还能够使用器械通过器械通道进入到设备和系统中直接确定,修复和解决问题。一种立体工业内窥镜输出的影像格式是光扇左右格式。
一个立体医疗内窥镜及系统中通常配备有一个或多个立体播放器和一个立体触模屏幕。医生通过立体播放器或立体触模屏幕中播放的立体影像进行内窥镜操作。立体触模屏幕是系统输入和操作的人机互动界面。输入和操作方式有触屏笔、手指、鼠标、键盘和语音。操作人员可以随时将立体播放器中正在播放的内容切换到立体触模屏幕中。立体触模屏幕中设置的一个独立的无线通讯模块可以实现立体触模屏幕与立体影像处理器之间的无线连接,并通过立体影像处理器将多媒体内容,包括影像、图片、语音和文字随时通过有线或无线方式实时地输出到立体播放器,立体触模屏幕,远程医疗中心,控制中心,其它第三方并可以与第三方实时地进行多媒体互动和交流。
一个便携式立体医疗内窥镜和立体工业内窥镜配备有一个立体触摸屏幕。一个立体触摸屏幕与一个立体影像处理器集成在一起,所以立体触摸屏幕无需额外设置一个独立的无线通讯模块。输入和操作方式有触屏笔、手指、鼠标、键盘和语音。一个立体影像处理器可以将多媒体内容,包括影像、图片、语音和文字随时通过有线或无线方式实时地输出到立体触摸屏幕,远程医疗中心,控制中心,其它第三方并可以与第三方实时地进行多媒体互动和交流。
在一个立体影像采集空间中,水平设置的左右两个摄像机分别获得真实场景中一个关注物体的左右两个独立和具有不同视角的影像。立体影像采集空间坐标系(x,y,z)的原点(0,0,0)位于两个摄像机镜头中心连线的中点处。在一个立体影像播放空间中,一个关注物体的左右两个影像被同时投射到一个平面屏幕上。当人的左眼和右眼分别只能够看到平面屏幕上关注物体的左影像和右影像时,人的大脑中就可以感受到在真实场景中一个具有立体深度信息的关注物体的立体虚像。立体影像播放空间坐标系(x,y,z)的原点(0,0,0)位于人的双眼连线的中点处。上述[0029]中所述的立体影像平移后的结果确保了一个关注物体的左右两个影像被投射到平面屏幕上时,左右两个影像在屏幕上的视差P对应着关注物体唯一一个立体虚像。人的双眼到一个立体虚像的距离为Zc=[Z D×T÷(A×F×t)]×Z。公式表明,立体影像采集空间和立体影像播放空间中的两个变量Zc(立体影像播放空间)和Z(立体影像采集空间)之间的关系是一种线性关系。公式中,Z D为立体影像播放空间中坐标原点到平面屏幕的距离,Z为立体影像采集空间中一个关注物体的Z坐标。
上述[0029]中所述的立体影像平移公式h=T÷(4A×e)中的参数h或A可以被用来确定一个立体摄像机的等效会聚点M的位置。因为立体摄像机中的两个光学镜头模组是平行设置的,所以一个立体摄像机的等效会聚点M是一个虚拟点。一个立体摄像机的等效会聚点M的空间坐标为(0,0,Zconv),Zconv=A×F×t÷T。当一个关注物体位于会聚点M的位置处时,立体摄像机采集的一个关注物体的左右两个影像被投射到屏幕上时,人的大脑中感受到关注物体对应的一个立体虚像出现在屏幕上,这时关注物体的左右两个影像在屏幕上的视差为零。当一个关注物体位于会聚点M的位置后方时,立体摄像机采集的一个关注物体的左右两个影像被投射到屏幕上时,人的大脑中感受到关注物体对应的一个立体虚像出现在屏幕的后面,这时关注物体的左右两个影像在屏幕上的视差为正。当一个关注物体位于会聚点M的位置和立体摄像机之间时,立体摄像机采集关注物体的左右两个影像被投射到屏幕上时,人的大脑中感受到关注物体对应的一个立体虚像出现在屏幕和人的双眼之间,这时关注物体的左右两个影像在屏幕上的视差为负。
将立体影像采集空间的坐标系和立体影像播放空间坐标系放置在一起并让两个坐标系的原点重合时,上述[0033]中所述的公式Zc=[Z D×T÷(A×F×t)]×Z表明,一个关注物体在立体影像采集空间中的深度坐标Z与该关注物体相对应的一个立体虚像在立体影像播放空间中的深度坐标Zc不在坐标系(x,y,z)中相同的位置处。两个线性空间的立体深度放大率η=(Z c2′-Z c1′)÷(Z 2-Z 1)=Z D×T÷(A×F×t)=Z D/Zconv。结果表明,当人的双眼到屏幕的距离Z D一定时,两个线性空间的立体深度放大率η是一个常数。根据高斯成像公式和摄像机镜头的横向放大率的定义:
m=x′/x=y′/y=L′/L
其中,L′=F×(1-m)为像距,L=F×(1/m-1)为物距。一个关注物体在屏幕中的影像在x和y方向上的横向放大率分别为m×A。
根据摄像机镜头的纵向放大率定义:
m_z=(L′_2-L′_1)÷(L_2-L_1)=m_1×m_2
上式中,m 1和m 2分别为镜头对于空间中两个不同的关注物体的横向放大率。根据纵向放大率的定义,纵向放大率是关注物体上两个对应的像距之差与物距之差的比值,所以它与关注物体的绝对位置无关。另外,线性光学理论和光学镜头设计本身是一个近似的过程,不存在绝对精确的最终数学结果,所以将m=m 1=m 2看作是一种近似平均值的结果是合理的。上式同时表明,摄像机镜头的纵向放大率
m_z=m_1×m_2≈m²
与屏幕放大率A无关(公式中使用m×A代替m)。
让:
η=m_z=m²
得到Z D×T÷(A×F×t)=m 2或Z D=[m 2×(A×F×t)]÷T
公式η=Z D/Zconv=m 2或Z D=m 2×Zconv的物理意义是,当人的双眼与立体屏幕的距离为Zs=m 2×Zconv时,人的双眼感受到的一个关注物体的立体影像是一个被放大了m×A倍(x和y方向)和m 2倍(z方向)没有变形的立体影像。这个理想的观看距离Zs是线性空间的立体视距Zs。这个结果将为很多应用带来实际的意义。
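线性空间的立体视距Zs=m²×Zconv的计算可以示意如下(m、A、F、t、T均为示例假设值):

```python
def ideal_viewing_distance(m, A, F, t, T):
    """线性空间的立体视距 Zs = m² × Zconv,其中 Zconv = A×F×t ÷ T。
    在该距离观看时,x、y 方向放大 m×A 倍,z 方向放大 m² 倍。"""
    Zconv = A * F * t / T
    return m * m * Zconv
```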
内窥镜测量方法是一种根据两个独立和彼此平行设置的摄像机与一个关注物体之间构成的几何关系和数学原理,建立一个关注物体上的一个关注点的左右两个影像在一个左右格式的影像截图中的视差与该关注点在实际中的空间坐标的关系,建立一个关注物体表面面积在一个影像截图中的影像与该关注物体表面在实际中的表面面积的关系的方法。内窥镜测量技术可以用于测量(不限于)一个关注点到一个内窥镜、到另一个关注点、一条直线和一个平面的距离、关注物体的表面面积、关注物体的体积、关注物体表面裂纹、裂纹开口横截面形状和特征、关注物体表面受到腐蚀或冲击后的表面凹凸部分、横截面的形状和特征。
上述[0045]中所述的一种内窥镜测量方法不仅可以应用于光扇立体摄像机,而且可以应用于所有其他拥有两个独立和彼此平行设置的摄像机的立体摄像机。同样,内窥镜测量方法不仅应用于光扇左右格式的影像,而且可以应用于目前双镜头立体摄像机输出的主流的影像格式,包括左右格式,传统左右格式和两个独立的影像。
内窥镜测量方法在使用时需要同时满足下面的三个条件:第一个条件是两个摄像机是独立和平行设置的。第二个条件是立体播放器和立体触模屏幕是一个平面屏幕或曲率半径与屏幕长度相比大很多的曲面屏幕。第三个条件是立体影像采集空间和立体影像播放空间之间是一种线性空间的关系。
一种内窥镜测量方法能否精确地确定一个关注点的空间坐标(x,y,z),取决于能否精确地确定该关注点的左右两个影像分别在一个左右格式的影像截图中的水平位置X L和X R。一个包括有关注物体上的一个关注点的左右两个影像的左右格式的影像截图中,X L和X R分别为左右两个影像截图中通过关注点的左右两个影像处的两条垂直直线到左右两个影像截图中心点的水平距离。X L和X R的符号定义为,关注点的左右两个影像分别位于左影像截图和右影像截图中心的右半部为正,分别位于左影像截图和右影像截图中心的左半部为负,分别位于左影像截图和右影像截图中心处为零。一个关注点的左右两个影像都位于左右两个影像截图中的同一个水平线上。
对于一个光扇左右格式和传统左右格式的影像,一个关注点的左右两个影像在一个左右格式的影像截图中的视差为P=2(X L-X R),关注点在实际中的空间座标(x,y,z)是;
x=t×(2X L+T/4)÷[T-2(X L-X R)]-t/2
y=Y L÷(A×m)=Y R÷(A×m)
z=(A×F×t)÷[T-2(X L-X R)]
对于一个左右格式的影像,一个关注点的左右两个影像在一个左右格式的影像截图中的视差为P=(X L-X R),关注点在实际中的空间座标(x,y,z)是;
x=t×(X L+T/2)÷[T-(X L-X R)]-t/2
y=Y L÷(A×m)=Y R÷(A×m)
z=(A×F×t)÷[T-(X L-X R)]
对于两个独立的摄像机采集的两个独立的影像,一个关注点的左右两个影像在两个独立的影像截图中的视差为P=(X L-X R),关注点在实际中的空间座标(x,y,z)是;
x=t×(X L+T/2)÷[T-(X L-X R)]-t/2
y=Y L÷(A×m)=Y R÷(A×m)
z=(A×F×t)÷[T-(X L-X R)]
其中,Y L和Y R分别为一个关注点的左右两个影像分别在左右两个影像截图中的垂直坐标。
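上述三组坐标公式可以合并成一个函数示意。fmt="fan"对应光扇左右格式与传统左右格式(视差系数为2),fmt="lr"对应左右格式和两个独立的影像(视差系数为1);参数数值均为示例假设:

```python
def point_from_disparity(XL, XR, YL, t, T, A, F, m, fmt="fan"):
    """由关注点左右影像的水平位置 X_L、X_R 还原实际空间坐标 (x, y, z)。"""
    k = 2 if fmt == "fan" else 1       # 两类格式的视差系数:P = k×(X_L-X_R)
    denom = T - k * (XL - XR)
    x = t * (k * XL + T / (2 * k)) / denom - t / 2
    y = YL / (A * m)
    z = A * F * t / denom
    return x, y, z
```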
一种内窥镜测量方法提供了三种精确定位一个关注点的左右两个影像分别在一个左右格式的影像截图中水平位置X L和X R的方法。
第一种方法是如果一个关注点位于一个具有几何特征的参照物上时,例如,一条非水平直线上,一条曲线上,物体表面上的几何突变处或具有几何特征处,关注点的左影像在左影像截图中的位置X L一旦被确定后,关注点的右影像在右影像截图中的位置X R位于通过X L的一条水平线与关注点的左影像在左影像截图中具有相同的几何特征的参照物的影像的交点处。
第二种方式是立体影像处理器中的感知模块和附带的算法将自动地对左右两个影像截图中一个或多个关注物体同时进行侦测和感知,并将侦测和感知到的不同的关注物体分别被不同的“方框”包围并显示在屏幕中。感知模块通过自带的算法计算获得每一个被不同的“方框”包围的关注物体分别在左右两个影像截图中的位置和到两个影像或影像截图中心的距离。立体测量方法将根据上述[0048]中所述的关系式获得每一个被不同的“方框”包围的关注物体在实际中的坐标。感知模块中的算法从与关注物体相关的每一个像素中进行侦测、模拟、对比、修正、鉴别和计算出到左右两个影像截图中心的距离。感知模块自带的算法是以像素为单位对关注物体进行模拟,对比和修正后的结果,所以最终结果的精度较高并且可以自动地获得令人满意的结果。当屏幕中出现多个不同的关注物体时,使用者只需点击屏幕中真正感兴趣的一个被“方框”包围的关注物体,操作系统将在屏幕中只显示那个最终被使用者选择的关注物体的信息,并且将所有其他未被选择的关注物体的“方框”消失在屏幕中。感知模块和附带的算法已经脱离了本发明的范围。本发明专利将使用这种技术和方法并直接将这种技术和方法应用在立体测量方法中。
第三种方法是渐进法。当一个关注点附近没有任何明显的几何特征或参照物时,例如,关注点位于一个连续表面上时,首先在左影像截图中确定关注点的左影像的位置X L,然后在右影像截图中的一条通过X L的水平线上“合理”地假设关注点的右影像的位置X R。立体测量方法根据X L和假设的X R得出该关注点在实际中的空间坐标(x,y,z)并在立体触摸屏幕上显示出该关注点的立体虚像。如果关注点的立体虚像与背景中的立体影像不重合,则表明在右影像截图中“合理”假设的关注点的右影像的位置X R不准确。在右影像截图中重新假设一个新的关注点右影像的位置X R,重复上述步骤直到两个立体影像完全重合或获得一个满意的结果为止。
一种内窥镜测量方法从下面的两个步骤开始。第一步,从影像中获得一个包括了关注物体表面上的一个或多个关注点,关注表面,关注体积,表面裂纹或受损表面凹凸部分的左右格式的影像截图。第二步,在立体触摸屏幕菜单中选择本次测量的目的(不限于),点-内窥镜、点-点、点-直线、点-平面、表面面积、体积、表面裂纹、表面裂纹面积、表面裂纹横截面、表面受损参数、表面受损面积、表面受损横截面和最大深度。内窥镜测量方法将计算结果直接显示在立体触摸屏幕中。
关注物体表面上的一个关注点a到内窥镜的距离的测量方法:第一步,从影像中获得一个左右格式的影像截图。第二步,在立体触模屏幕菜单中选择“点-内窥镜”。第三步,使用触屏笔,手指或鼠标在左影像截图上确定关注点a的左影像的位置X La。立体触摸屏幕上自动出现一条通过X La位置处并横跨左右两个影像截图的水平线。第四步,使用触屏笔,手指或鼠标在右影像截图的水平线上确定关注点a的右影像的位置X Ra。内窥镜测量方法将计算出该关注点a到内窥镜中的立体摄像机中心线与内窥镜前端面外表面上交点的距离为;
Dc=[x_a²+y_a²+(z_a-c)²]^(1/2)
其中,c为光学镜头模组中心到内窥镜前端面外表面的距离。
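该距离公式可以直接实现如下(仅为按正文公式的数值示意):

```python
import math

def dist_point_to_endoscope(xa, ya, za, c):
    """关注点 a 到内窥镜前端面外表面中心的距离
    Dc = [x_a² + y_a² + (z_a - c)²]^(1/2),
    c 为光学镜头模组中心到内窥镜前端面外表面的距离。"""
    return math.sqrt(xa ** 2 + ya ** 2 + (za - c) ** 2)
```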
关注物体表面上的两个关注点a和b之间的距离的测量方法:第一步,从影像中获得一个左右格式的影像截图。第二步,在立体触模屏幕菜单中选择“点-点”。第三步,分别确定物体表面上的两个关注点a和b的左右两个影像在左右两个影像截图中的位置X La,X Ra,X Lb和X Rb。内窥镜测量方法将计算出关注物体表面上两个关注点a和b之间的距离为;
D_ab=[(x_b-x_a)²+(y_b-y_a)²+(z_b-z_a)²]^(1/2)
关注物体表面上的一个关注点a到一条空间直线的距离的测量方法:第一步,从影像中获得一个左右格式的影像截图。第二步,在立体触模屏幕菜单中选择“点-线”。第三步,分别确定关注点a的左右两个影像在左右两个影像截图中的位置X La和X Ra。第四步,分别确定空间中一条直线上的两个特征点b和c的左右两个影像在左右两个影像截图中的位置X Lb,X Rb,X Lc和X Rc。内窥镜测量方法将计算出关注物体表面上的一个关注点a到一条经过了两个特征点b和点c的直线的距离为;
D_a-bc={[x_a-λ(x_c-x_b)-x_b]²+[y_a-λ(y_c-y_b)-y_b]²+[z_a-λ(z_c-z_b)-z_b]²}^(1/2)
其中,λ=[(x_a-x_b)×(x_c-x_b)+(y_a-y_b)×(y_c-y_b)+(z_a-z_b)×(z_c-z_b)]÷[(x_c-x_b)²+(y_c-y_b)²+(z_c-z_b)²]
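点到空间直线的距离可以按标准的向量投影法实现如下,λ为向量a-b在直线方向c-b上的投影系数(数值仅为示意):

```python
import math

def dist_point_to_line(a, b, c):
    """关注点 a 到经过特征点 b、c 的空间直线的距离。
    a、b、c 均为 (x, y, z) 三元组。"""
    ab = [a[i] - b[i] for i in range(3)]   # 向量 b→a
    bc = [c[i] - b[i] for i in range(3)]   # 向量 b→c,即直线方向
    lam = sum(ab[i] * bc[i] for i in range(3)) / sum(v * v for v in bc)
    d2 = sum((a[i] - lam * bc[i] - b[i]) ** 2 for i in range(3))
    return math.sqrt(d2)
```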
关注物体表面上的一个关注点a到一个空间平面的距离的测量方法:第一步,从影像中获得一个左右格式的影像截图。第二步,在立体触模屏幕菜单中选择“点-平面”。第三步,分别确定关注点a的左右两个影像在左右两个影像截图中的位置X La和X Ra。第四步,分别确定位于一个空间平面上不在一条直线上的三个特征点b,c和d的左右两个影像在左右两个影像截图中的位置X Lb,X Rb,X Lc,X Rc,X Ld和X Rd。内窥镜测量方法将计算出关注物体上的一个关注点a到一个包括了不在一条直线上的三个特征点b,c和d的平面的距离为;
D_a-(bcd)=|A×x_a+B×y_a+C×z_a+D|÷(A²+B²+C²)^(1/2)
其中,A,B,C由下面的行列式中获得,D=-(Ax b+By b+Cz b)
A=(y_c-y_b)×(z_d-z_b)-(z_c-z_b)×(y_d-y_b);B=(z_c-z_b)×(x_d-x_b)-(x_c-x_b)×(z_d-z_b);C=(x_c-x_b)×(y_d-y_b)-(y_c-y_b)×(x_d-x_b)
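点到平面的距离可以实现如下。法向矢量(A,B,C)取向量(c-b)与(d-b)的叉积,D=-(A×x_b+B×y_b+C×z_b),与正文定义一致(数值仅为示意):

```python
import math

def dist_point_to_plane(a, b, c, d):
    """关注点 a 到由不共线特征点 b、c、d 确定的平面的距离。"""
    u = [c[i] - b[i] for i in range(3)]
    v = [d[i] - b[i] for i in range(3)]
    A = u[1] * v[2] - u[2] * v[1]          # 叉积 (c-b)×(d-b) 的三个分量
    B = u[2] * v[0] - u[0] * v[2]
    C = u[0] * v[1] - u[1] * v[0]
    D = -(A * b[0] + B * b[1] + C * b[2])
    return abs(A * a[0] + B * a[1] + C * a[2] + D) / math.sqrt(A * A + B * B + C * C)
```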
在立体触模屏幕上移动触屏笔,手指或鼠标从一个像素点到下一个相邻像素点的三种不同路径分别是沿着水平方向,垂直方向和一个以水平和垂直像素为直角边的三角形斜边方向。立体触模屏幕上的一条曲线可以近似地看做是由众多个彼此相邻的两个像素之间的水平直线,垂直直线和相邻的两个像素之间的水平和垂直线为直角边的三角形斜边拼接而成的一条拼接曲线。立体触模屏幕的分辨率(PPI)越大,曲线的实际长度与拼接曲线的长度就越接近。同样,一条闭环曲线中包围的面积与一条闭环拼接曲线中包围的所有像素单元面积的总和就越接近。两个相邻像素之间的水平距离为a,垂直距离为b,一个像素单元面积为
s=a×b
一个立体触摸屏幕中的一个闭环拼接曲线包围的所有像素单元面积的总和为
Ω=Σs=N×a×b(N为闭环拼接曲线包围的像素单元总数)
关注物体实际表面面积为Q=Ω÷(m×A)²
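按像素计数估算实际投影面积的过程可以示意如下。N为闭环拼接曲线包围的像素单元数,a、b为相邻像素的水平和垂直间距,取值均为示例假设:

```python
def actual_surface_area(n_pixels, a, b, m, A):
    """Ω = N×a×b 为屏幕上闭环拼接曲线包围的像素单元面积总和,
    关注物体的实际投影面积 Q = Ω ÷ (m×A)²。"""
    omega = n_pixels * a * b
    return omega / (m * A) ** 2
```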
关注物体上一个关注表面面积的测量方法:第一步,从影像中获得一个左右格式的影像截图。第二步,在立体触模屏幕菜单中选择“面积”,系统将自动地保留其中的一个影像截图并将保留的一个影像截图放大至全屏幕。第三步,使用触屏笔,手指或鼠标在屏幕中沿着关注表面的影像的边缘画出一条包括了全部关注表面的影像的闭环拼接曲线。内窥镜测量方法将计算出闭环拼接曲线中包围的面积。
上述[0056]中所述的获得的闭环拼接曲线包围的面积只是关注表面的实际面积在一个与立体摄像机中心线(Z轴)垂直的平面上投影的面积。第四步,当关注物体表面是一个平面或曲率半径较大可近似地看作为平面的表面时,根据上述[0054]中所述的方法,分别确定平面表面上三个不在同一条直线上的特征点b,c和d的左右两个影像分别在左右两个影像截图中的位置X Lb,X Rb,X Lc,X Rc,X Ld和X Rd。内窥镜测量方法将计算出关注物体表面的法向矢量 N,关注物体表面的实际面积等于上述[0056]中所述的方法获得的面积除以关注物体表面的法向矢量 N与立体摄像机中心线(Z轴)夹角的余弦。
医疗内窥镜,腔镜和微创手术中经常检测的关注组织的表皮或粘膜有(不限于)胃黏膜和器官表皮病变组织。如果能够快速地获得胃黏膜和器官表皮病变组织面积的近似值,就可以帮助医生快速地做出诊断,设计手术和操作方案。调整内窥镜终端中心线的方向,当内窥镜终端中心线与需要测量的器官表皮或粘膜表面尽可能垂直的方向上时采集一个左右格式的影像截图。保留其中的一个影像截图并将保留的一个影像截图放大至全屏幕。使用触屏笔,手指或鼠标沿着关注组织上病变的表皮或粘膜的边缘画出一个闭环拼接曲线。内窥镜测量方法将计算出关注组织上病变的表皮和粘膜的面积。
关注物体体积的测量方法:第一步,从影像中获得一个左右格式的影像截图。第二步,在立体触摸屏幕菜单中选择“体积”,系统将自动地保留其中的一个影像截图并将保留的一个影像截图放大至全屏幕。第三步,根据上述[0057]和[0058]中所述的方法获得关注物体表面的实际面积。第四步,回到左右格式的影像截图中,当关注物体是一个平板或曲率半径较大可近似看作为平板时,分别确定关注平板上两个具有典型厚度的特征点a和b的左右两个影像在左右两个影像截图中的位置X La,X Ra,X Lb和X Rb。内窥镜测量方法将计算出关注平板的厚度等于计算获得的两个特征点a和点b之间的距离乘以矢量 ab与关注平板表面的法向矢量 N之间夹角的余弦。关注平板的实际体积等于上述第三步中获得的平板的实际面积乘以上述第四步中获得的平板的厚度。
医疗内窥镜,腔镜和微创手术中经常检测的关注组织包括(不限于)息肉,肿瘤,器官和附着在器官表面上的肿块。如果能够将这些息肉,肿瘤,器官和肿块的形状近似地看作圆球或椭圆球并快速地获得关注组织体积的近似值,就可以帮助医生快速地做出诊断,设计手术和操作方案。对于一个近似于圆球形状的关注组织,调整内窥镜终端中心线的方向。当内窥镜终端中心线与需要测量的圆球形状的关注组织表皮和粘膜表面尽可能垂直的方向上时采集一个左右格式的影像截图。对于一个近似于椭圆球形状的关注组织,调整内窥镜终端中心线的位置和方向,对准需要测量的椭圆球形状的关注组织的中心并与关注组织表皮和粘膜表面尽可能垂直的方向上时采集一个左右格式的影像截图。保留其中的一个影像截图并将保留的一个影像截图放大至全屏幕。使用触屏笔,手指或鼠标在立体触摸屏幕上沿着关注肿块影像边缘画出一条圆形或椭圆形的闭环拼接曲线。对于一个圆球形状的关注组织,使用触屏笔在屏幕上画出一条横跨圆形闭环拼接曲线的直线并确定直线与圆形闭环曲线相交的两个点a和b的左右两个影像在左右两个影像截图中的位置X La,X Ra,X Lb和X Rb。对于一个椭圆球形状的关注组织,使用触屏笔在屏幕上画出一对通过椭圆球关注组织中心并相互垂直的直线,分别代表椭圆形闭环曲线上的长轴和短轴。分别确定椭圆长轴和短轴与闭环拼接曲线的四个交点a,b,c和d的左右两个影像在左右两个影像截图中的位置X La,X Ra,X Lb,X Rb,X Lc,X Rc,X Ld和X Rd。内窥镜测量方法将分别计算出圆球形状关注组织的直径D和体积,以及椭圆球形状关注组织的长轴B、短轴C和体积,分别为;
对于圆球形状的关注组织体积为:V=π×D 3/6
对于椭圆球形状的关注组织体积为:V=π×B×C 2/6
注:上述椭球体积的计算公式中假设椭圆球的两个相互垂直的短轴相等。
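两个体积公式可以直接实现如下(与上注一致,假设椭圆球的两个相互垂直的短轴相等;数值仅为示意):

```python
import math

def sphere_volume(D):
    """近似圆球形关注组织的体积 V = π×D³ ÷ 6,D 为测得的直径。"""
    return math.pi * D ** 3 / 6

def ellipsoid_volume(B, C):
    """近似椭圆球形关注组织的体积 V = π×B×C² ÷ 6,
    B 为长轴,C 为短轴(两个短轴相等)。"""
    return math.pi * B * C ** 2 / 6
```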
物体表面裂纹横截面的测量方法:第一步,调整内窥镜终端中心线的位置和方向,使中心线与裂纹的纵向方向一致并与物体表面平行。当在立体触模屏幕中看到了感兴趣的裂纹横截面开口处时采集一个左右格式的影像截图。第二步,使用触屏笔,手指或鼠标在左右两个影像截图上分别确定关注物体表面与裂纹横截面开口的左右两个边缘的两个交点a和b的左右两个影像在左右两个影像截图中的位置X La,X Ra,X Lb和X Rb。第三步,在立体触模屏幕菜单中选择“裂纹横截面”,系统将自动地保留其中的一个影像截图并将保留的一个影像截图放大至全屏幕。使用触屏笔,手指或鼠标在裂纹横截面开口的左右两个边缘上分别确定多个具有拐点,转折点和峰值点的特征点的位置X L1,X L2,X L3,……和X R1,X R2,X R3,……。裂纹开口左边缘上的特征点X L#和裂纹开口右边缘上的特征点X R#之间没有任何关系。因为每一个特征点X L#和X R#的位置与上述的两个交点a和b在同一个裂纹横截面上,所有裂纹横截面的左右两个开口边缘上的特征点分别拥有与点a和点b相同的视差,或者说点a和点b的会聚深度坐标Zc与裂纹横截面的左右两个裂纹开口边缘上所有特征点的会聚深度坐标Zc相等。内窥镜测量技术将分别计算出点a与裂纹横截面开口左边缘上每一个特征点X L#之间的垂直距离Y L#和点b与裂纹横截面开口右边缘上每一个特征点X R#之间的垂直距离Y R#。裂纹横截面的开口左边缘是由点a为起点的依次连接着裂纹横截面开口左边缘上相邻特征点X L#的直线组成。裂纹横截面的开口右边缘是由点b为起点的依次连接着裂纹横截面开口右边缘上相邻特征点X R#的直线组成。裂纹横截面的左右两边多个直线组成的左右两个边缘形成一个“V”字状的横截面开口。选择的特征点愈多,裂纹横截面的边缘与实际裂纹横截面的边缘愈接近。
工业设备和系统中,工业内窥镜经常检测和测量设备表面被腐蚀或受损后产生的表面凹凸部分。物体表面凹凸部分横截面和最大深度的测量方法:这里仅以物体表面受损或腐蚀造成的凹陷为例进行说明。第一步,调整内窥镜终端中心线的位置和方向,并使中心线与物体表面平行。当在立体触摸屏幕中看到了物体表面凹陷中最具代表性的部分时采集一个左右格式的影像截图。第二步,确定物体表面与受损横截面边缘相交的两个交点a和b的左右两个影像在左右两个影像截图中的位置X La,X Ra,X Lb和X Rb。第三步,在立体触摸屏幕菜单中选择“受损横截面”,保留其中的一个影像截图并将保留的一个影像截图放大至全屏幕。在菜单中的下一层指令中输入受损表面的曲率半径+R(凸曲面)或-R(凹曲面)。立体触摸屏幕上出现一个通过点a和点b的一条曲率半径为R的曲线。使用触屏笔,手指或鼠标在两个交点a和b之间沿着横截图中受损部分边缘画出一条拼接曲线。受损横截面的闭环拼接曲线是由一条包括了点a和点b之间的一条曲率为R的曲线和一条拼接曲线组成。第四步,回到左右两个影像截图中,在拼接曲线上确定受损截面最低点c的位置X Lc和X Rc。内窥镜测量方法将计算出物体表面受损横截面的面积,以及点a和点b分别距离横截面最低点c的垂直距离Y c。
医疗内窥镜和工业内窥镜在实际测量过程中遇到与上述基本测量方法不同的情况或不同的需求时,需根据不同的情况提出不同以及合理的解决方案和测量方法。新的解决方案和测量方法可以是由上述基本测量方法的组合或其它新的方法。
本发明的优势包括(不限于):一种立体内窥镜提供的具有深度的立体影像,结合双器械通道内窥镜操作技术,内窥镜稳定器和工作台装置极大地提高了医生进行手术的准确性,稳定性,质量和效率,并解决了医生手眼分离的困扰;一种内窥镜测量方法使医生能够在内窥镜和微创手术中对发现的肿块,粘膜和病变组织实时地进行测量;光扇立体摄像机输出的光扇左右格式的影像与传统左右格式的影像具有相同的水平视角,解析度,影像效率,标准播放格式和高质量影像效果。本发明拥有高度集成的结构设计以及智能化和人性化的操作方法,并具有操作简单、效率高、影像延迟小、成本较低、易于推广的特点。
附图说明
图1-1是本发明的第一种光扇立体摄像机的成像原理示意图;
图1-2是图1-1A方向视图;
图2-1是本发明的第二种光扇立体摄像机的成像原理示意图;
图2-2是图2-1A方向视图;
图3-1是本发明的第三种光扇立体摄像机的成像原理示意图;
图3-2是图3-1A方向视图;
图4是光扇光学变形系统原理示意图;
图5-1是传统的成像圆成像原理示意图;
图5-2是本发明的光扇压缩成像椭圆成像原理示意图;
图6是本发明的光扇左右格式的影像示意图;
图7是左右格式的影像示意图;
图8是传统左右格式的影像示意图;
图9是本发明光扇左右格式与传统左右格式的影像对比示意图;
图10是本发明的一种单器械通道立体医疗内窥镜示意图;
图11是本发明的一种双器械通道立体医疗内窥镜示意图;
图12是本发明的一种双器械通道医疗内窥镜操作手柄示意图;
图13是本发明的一种医疗内窥镜工作台示意图;
图14是本发明的一种医疗内窥镜稳定器示意图;
图15是一个立体影像采集空间示意图;
图16是一个立体影像播放空间示意图;
图17是会聚法等效原理示意图;
图18是本发明的一个关注点的左右两个影像在一个左右格式的影像截图中的位置示意图;
图19是本发明的一个关注点的左右两个影像与一组空间坐标对应原理示意图;
图20是本发明的测量一个关注点到内窥镜的距离示意图;
图21是本发明的测量两个关注点之间的距离示意图;
图22是本发明的测量一个关注点到一条直线的距离示意图;
图23是本发明的测量一个关注点到一个平面的距离示意图;
图24是本发明的测量一个平面物体表面面积示意图;
图25是本发明的测量一个平板物体体积示意图;
[根据细则26改正31.07.2019] 
图26-1是本发明的测量一个表面裂纹横截面示意图;图26-2是本发明的测量一个表面裂纹横截面处开口部分的形状和深度示意图;
[根据细则26改正31.07.2019] 
图27-1是本发明的测量一个表面受损凹陷横截面示意图;图27-2是本发明的测量一个表面受损凹陷横截面的凹陷横截面的形状示意图。
具体实施方式:
本发明的具体实施方式表示本发明具体化的一个例子,与权利要求书和说明书中的内容和特定事项具有对应关系。本发明不限定实施方式,在不脱离本发明主旨的范围内,能够通过对各种不同的实施方式实现具体化。所有示意图中的说明案例都是所述的多个可实施技术方案中的一个例子。
图1所示的是第一种光扇立体摄像机的成像原理示意图。图1-1俯视图中,两个光学镜头模组中心线相距为t。镜头组1中设置了一个斜平板镜片2。斜平板镜片2将来自镜头组1中前方镜头的影像沿着水平方向上朝着光扇立体摄像机中心线方向上产生一个平移,并经过镜头组1中后面的镜片修正后进入光扇。光扇中的柱镜3和柱镜4将影像沿着水平方向上压缩一半后进入到后面的一个直角反射棱镜6。图1-2A方向视图所示的是一个直角棱镜6的斜面内表面将来自前方的影像全反射并向下折弯90°后投射到一个图像传感器9的成像表面8的左半部或右半部上成像。图1-1中,水平放置的左右两个光学镜头模组采集的影像分别在成像表面8的左半部和右半部上成像。左右两个直角反射棱镜6的一个直角三角形状表面7上分别镀有涂层并沿着镀有涂层的三角形表面7被放置或粘结在一起。一个垂直设置的隔光板5位于光扇立体摄像机中心线上。
图2所示的是第二种光扇立体摄像机的成像原理示意图。图2-1俯视图中,两个光学镜头模组中心线相距为t。镜头组10的后面设置的两个直角棱镜11和12将来自镜头组10的影像沿着水平方向上朝着光扇立体摄像机中心线方向上产生一个平移并经过镜头组13的修正后进入光扇。一个光扇中的柱镜3和柱镜4将影像沿着水平方向上压缩一半后进入到后面的一个直角反射棱镜6。图2-2A方向视图所示的是一个直角棱镜6的斜面内表面将来自前方的影像全反射并向下折弯90°后投射到一个图像传感器9的成像表面8的左半部或右半部上成像。图2-1中,水平放置的左右两个光学镜头模组采集的影像分别在成像表面8的左半部和右半部上成像。左右两个直角反射棱镜6的一个直角三角形状表面7上分别镀有涂层并沿着镀有涂层的三角形表面7被放置或粘结在一起。一个垂直设置的隔光板5位于光扇立体摄像机中心线上。
图3所示的是第三种光扇立体摄像机的成像原理示意图。图3-1俯视图中,两个光学镜头模组中心线相距为t。镜头组10的后面设置的两个直角棱镜11和12将来自镜头组10的影像沿着水平方向上朝着光扇立体摄像机中心线方向上产生一个平移并经过镜头组13的修正后进入光扇。光扇的柱镜3和柱镜4将影像沿着水平方向上压缩一半。压缩后的影像被投射到一个图像传感器9的成像表面8的左半部或右半部上成像。直角棱镜12的位置是固定不变的。镜头组10和直角棱镜11可以沿着一条与光学镜头模组中心线垂直的水平直线方向上同步移动并改变光扇立体摄像机的视间距t。水平放置的左右两个光学镜头模组采集的影像分别在成像表面8的左半部和右半部上成像。一个垂直设置的隔光板5位于光扇立体摄像机中心线上。图3-2A方向视图中,隔光板5的一个垂直直边与图像传感器9的成像表面8平行,并与成像表面非常接近但不相交。
图4所示的是一个光扇变形系统原理示意图。一个光扇是由两个柱镜3和柱镜4组成。柱镜3和柱镜4的轴线相互垂直。图4中的一束通过柱镜3的主子午面的光线A(图中阴影部分)进入光扇后,左边的柱镜3对于图中的光线A相当于一个平行平板,右侧的柱镜4对光线A就像一个球面透镜一样对光线A进行折射。但是,另外一个主子午面内光扇 的情况完全不同,一束光线B通过柱镜3的另一个主子午面时发生折射,柱镜4对光线B相当于一个平行平板。当柱镜3中的一个子午面与其主子午面成η角时,其光焦度为;
G_η=G_0×cos²η
其中,G 0是柱镜主子午面内的光焦度。当光扇中的柱镜3和柱镜4彼此互成90°时,柱镜3中的一个子午面为η角,则在柱镜4中为(90°-η)角,并且
sin²η+cos²η=1
上述公式表明,如果光扇的两个主子午面中的影像都处于聚焦状态,则所有的子午面中的影像都是处于聚焦状态。光扇在两个相互垂直的不同主子午面内对影像有不同的压缩率。一个影像圆14经过一个光扇后变成一个影像椭圆15。
图5所示的是成像圆成像和光扇成像椭圆成像原理示意图。图5-1中一个成像圆14外边缘的方程式;
x 2+y 2=r 2
成像圆14外边缘内接一个长度为w,宽度为v的图像传感器的成像表面8。最小外接成像圆14的直径为;
D=2r=2(w 2/4+v 2/4) 1/2=(w 2+v 2) 1/2
其中,r–成像圆半径
D–成像圆直径,D=2r
w–图像传感器成像表面水平长度
v–图像传感器成像表面垂直宽度
图5-2中椭圆15外边缘的参数方程式;
x=b Sinθ
y=a Cosθ
其中,a–椭圆15的长半轴
b–椭圆15的短半轴
椭圆15外边缘的内接长方形面积为;
∧椭圆=4xy=4abSinθCosθ=2abSin(2θ)
∵0≦Sin(2θ)≦1,当2θ=π/2时Sin(2θ)取最大值1
椭圆15外边缘的内接最大长方形面积为;
∴∧椭圆max=2ab
让∧椭圆max=2ab=wv/2
将b=wv/4a,x=w/4和y=v/2代入椭圆方程x 2/b 2+y 2/a 2=1中,得到;
a=v/√2,b=w/2√2
光扇的两个主子午面相对于成像圆的压缩比例分别是;
水平放大率为:Ф h=1-2b/D={1-[w/√2]/(w 2+v 2) 1/2}×100%
垂直放大率为:Ф v=1-2a/D={1-[2v/√2]/(w 2+v 2) 1/2}×100%
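以一个16:9的图像传感器为例,两个方向的压缩比例可以按正文公式计算如下(w、v的数值仅为示例):

```python
import math

def fan_compression_ratios(w, v):
    """光扇两个主子午面相对于最小外接成像圆的压缩比例。
    w、v 为图像传感器成像面的水平长度与垂直宽度。"""
    D = math.sqrt(w * w + v * v)        # 最小外接成像圆直径 D=(w²+v²)^(1/2)
    a = v / math.sqrt(2)                # 成像椭圆长半轴 a = v/√2
    b = w / (2 * math.sqrt(2))          # 成像椭圆短半轴 b = w/(2√2)
    phi_h = 1 - 2 * b / D               # 水平方向压缩比例
    phi_v = 1 - 2 * a / D               # 垂直方向压缩比例
    return phi_h, phi_v
```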
图5-1中,一个摄像机通过一个成像圆14将采集的影像投射到一个图像传感器的成像表面8上成像16。图5-2中,在过程“A”中,一个成像圆14和影像16沿着一条与图像传感器的成像表面平行的水平直线方向上压缩一半同时被压缩后,成像圆14变形成为椭圆成像圆15,影像16并成为影像17。
图6所示的是光扇左右格式的影像示意图。一个光扇立体摄像机中左右两个独立的光学镜头模组采集的左右两个影像分别通过左右两个成像椭圆15L和15R投射到同一个图像传感器的成像表面8的左半部和右半部上分别成像17L和17R。在过程“A”中,立体影像处理器对一个由影像17L和17R组成的影像进行修正、处理、优化和平移后,输出一个由左右两个影像18L和18R组成的光扇左右格式的影像。在过程“B”中,光扇左右格式中的两个影像18L和18R沿着水平方向上被分别放大一倍并成为两个独立的和拥有一半像素的标准播放格式的影像19L和19R。
图7所示的是左右格式的影像示意图。一个双镜头单图像传感器立体摄像机中的左右两个独立的镜头采集的左右两个影像分别通过左右两个成像圆20L和20R投射到同一个图像传感器的成像表面8的左半部和右半部上分别成像21L和21R。在过程“A”中,立体影像处理器对一个由影像21L和21R组成的影像进行修正、处理和优化后,输出一个由左右两个影像22L和22R组成的左右格式的影像。在过程“B”中,左右格式中的两个影像22L和22R分别被下采样后成为两个独立的和拥有一半像素的非标准播放格式的影像23L和23R。
图8所示的是传统左右格式(Side-by-Side)的影像示意图。两个独立的摄像机采集的左右两个影像分别通过左右两个传统成像圆24L和24R在左右两个独立的图像传感器上分别成像25L和25R。在过程“A”中,立体影像处理器对左右两个独立的影像25L和25R进行修正、处理和优化后,分别输出左右两个独立的影像26L和26R。在过程“B”中,两个影像26L和26R分别被进行下采样成为拥有一半像素的影像27L和27R。在过程“C”中,两个影像27L和27R被按照左右方式拼接在一起成为一个传统左右格式的影像28。在过程“D”中,一个传统左右格式的影像28中的左右两个影像28L和28R沿着水平方向上被展开并成为左右两个独立的和拥有一半像素的标准播放格式的影像27L和27R。
图9所示的是光扇左右格式与传统左右格式的影像对比示意图。上述[0072]中所述的两个独立的摄像机通过两个成像圆24L和24R分别在两个独立的图像传感器上成像25L和25R。在过程“A”中,将影像25L或25R沿着水平方向上压缩一半后,影像25L或25R分别成为与上述[0070]中所述的光扇左右格式的影像17L或17R,成像圆24L或24R分别成为成像椭圆29L或29R。根据椭圆最大内接长方形的唯一性原理,成像椭圆29L或29R与光扇左右格式的影像成像椭圆15L或15R相同。图中的阴影30和32分别是一个成像圆24L或24R和一个成像椭圆15L或15R未被图像传感器接收或成像的部分。阴影31是阴影30沿着水平方向上压缩后的结果。阴影31等于阴影32,表明两种不同的影像格式的影像效率相等。
图10所示的是一个单器械通道立体医疗内窥镜示意图。图10中显示的是一个立体医疗内窥镜的前端面33,包括一个立体摄像机中的两个光学镜头模组34,一个内窥镜器械通道35,一个气液通道36,三个不同波长的灯具37和三个LED灯具38。
图11所示的是一个双器械通道立体医疗内窥镜示意图。图11中显示的是一个立体医疗内窥镜的前端端面39,包括一个立体摄像机中的两个光学镜头模组34,两个内窥镜器械通道35,一个气液通道36,三个不同波长的灯具37和三个LED灯具38。
图12所示的是一个双器械通道医疗内窥镜操作手柄示意图。一个拥有双器械通道的医疗内窥镜操作手柄40上设置有两个不同的器械通道接入口41和42。两个器械通道接入口41和42的直径可以相同也可以不相同。
图13所示的是一个医疗内窥镜工作台示意图。图13中显示的一个医疗内窥镜工作台43上固定有一个立体触模屏幕44,一个拥有双器械通道接入口41和42的医疗内窥镜操作手柄40和一个医疗内窥镜稳定器46。操作手柄40通过一个固定器45被固定在工作台上。医生可以使用自己的脚控制一个脚踏板47上设置的多个脚踏式开关48的方式控制固定在工作台43上的装置的启动和停止。
图14所示的是一个医疗内窥镜稳定器示意图。一个医疗内窥镜稳定器46包括一个下卡环49,上卡环50,上电磁铁51,下电磁铁52,回复弹簧53,固定底座54,防震软垫片55,上下卡环垫片56,滑动导杆57和卡环压力调整旋钮58。图14-1中,一个医疗内窥镜稳定器46中的上下两个卡环49和50是处于开放状态。图14-2中,医疗内窥镜稳定器是处在工作状态,上电磁铁51被下电磁铁52吸引向下移动并将内窥镜软管59夹紧,使得内窥镜软管59在上下两个卡环49和50中间无法前后移动和转动。
图15所示的是一个立体影像采集空间示意图。图15中,左右两个摄像机60和61同时围绕着摄像机镜头中心朝向一个关注物体62的方向转动一直到两个摄像机60和61的中心线会聚到关注物体62为止才开始拍摄。这是一种传统的立体拍摄方法-会聚法。这种拍摄方法与人的双眼看世界的方式相同。左右两个摄像机60和61的镜头中心之间相距为t。关注物体62前方的景物称为前景物63,后方的景物称为后景物64。立体影像采集空间坐标系的原点0(0,0,0)位于左右两个摄像机镜头中心连线的中点处。
图16所示的是一个立体影像播放空间示意图。上述[0079]中的左右两个摄像机60和61采集的左右两个影像被分别投射到一个水平长度为W的平面屏幕67上。左右两个影像之间在屏幕上的水平距离是左右两个影像的视差P。当人的左眼65和右眼66分别只能看到屏幕67上的左影像和右影像时,人的大脑将左眼65和右眼66获得的两个具有不同视角的影像融合后感受的一个包括了上述[0079]中所述的关注物体62,63和64在内的立体影像采集空间的立体虚像。关注物体62对应的虚像68出现在屏幕上,这时观众的双眼65和66在平面屏幕67上看到的关注物体62是一个左右两个影像重叠在一起的一个虚像68。前景物63对应的虚像69出现在观众空间。后景物64对应的虚像70出现在屏幕空间中。立体影像播放空间坐标系的原点0(0,0,0)位于人的双眼之间连线的中点处。
根据图16所示的几何关系得到下面关系式,
Zc=Z D×T÷(T-P)        (1)
其中,Zc–屏幕上左右两个虚像的会聚点的Z坐标
Z D–坐标系原点到屏幕的距离
T–人的双眼之间的距离
P–屏幕上的左右两个影像之间的水平距离-视差
ΔP=Pmax-Pmin=Z D×T×(1/Zcnear-1/Zcfar)      (2)
其中:Pmax–屏幕上左右两个影像的最大视差
Pmin–屏幕上左右两个影像的最小视差
Zcnear–坐标系原点到最近会聚点的距离(P<0负视差,观众空间)
Zcfar–坐标系原点到最远会聚点的距离(P>0正视差,屏幕空间)
定义,Prel=ΔP/W
其中:Prel–平面屏幕单位长度的视差变化
W–屏幕水平长度
图17所示的是会聚法与等效会聚法等效原理示意图。图17-1中,左右两个摄像机60和61拍摄一个关注物体62时使用的一种传统的拍摄方法-会聚法。图17-2中,左右两个摄像机60和61在拍摄同一个关注物体62时使用的另一种拍摄方法-平行法或等效会聚法。在等效会聚法中,左右两个摄像机60和61的中心线彼此平行且相距为t。为了获得与会聚法同样的拍摄效果,拍摄前将两个摄像机60和61中的图像传感器71和72分别沿着水平方向上朝着彼此相反的方向上平移h的距离。这时,关注物体62在两种不同的拍摄方法中都分别成像在图像传感器71和72的中心上。等效会聚法不仅解决了会聚法中出现的梯形畸变的问题,而且通过几何关系和光学理论建立的一系列数学关系式中可以获得一些极具实用意义的立体影像效果。根据图17-2所示的几何关系我们得到下面关系式,
d=t×F×(1/2Zconv-1/Z)=2h-t×F÷Z     (3)
其中,d–空间中的一点在两个图像传感器上的视差
h–一个图像传感器沿着水平方向上的平移的距离
t–两个摄像机中心线之间的距离,立体摄像机的视间距
F–摄像机镜头的等效焦距
Z–空间中任意一点的Z坐标
Zconv–两个摄像机的会聚点的Z坐标
根据公式(3)推得下式;
Δd=dmax-dmin=t×F×(1/Znear-1/Zfar)      (4)
其中:dmax–左右两个图像传感器上的两个影像的最大视差
dmin–左右两个图像传感器上的两个影像的最小视差
Znear–空间中的前景物63的Z坐标
Zfar–空间中的后景物64的Z坐标
定义,drel=Δd/w
其中:drel–图像传感器单位长度的视差变化
w–图像传感器成像表面的水平长度
让,Prel=drel
推得:t=[(Z D÷(A×F))×(1/Zcnear-1/Zcfar)÷(1/Znear-1/Zfar)]×T    (5)
其中:A–屏幕放大率W/w
公式(5)表明,两个摄像机的视间距与人的双眼之间的距离是不相等的。
让:P=A×d并代入到公式(1)和(3)中得到下式:
Zc=(Z D×T)÷(T-P)=(Z D×T)÷(T-A×d)
=(Z D×T×Z)÷[A×t×F-(2A×h-T)×Z]       (6)
公式(6)表明,Zc与Z之间不是线性关系。理想成像是立体影像采集空间中任意一点,一条直线和一个面对应着立体影像播放空间中唯一的一个点,一条直线和一个面。理想成像能够使一个立体影像采集空间中获得的两个影像在立体影像播放空间中对应的一个融合后的立体影像没有扭曲和变形发生,其充分和必要条件是让两个空间中对应点之间的数学关系成为线性关系。公式(6)表明,Zc与Z之间的线性关系成立的充分必要条件是,
2A×h-T=0或h=T/2A
公式(6)被线性化后简化成为下式,
Zc=[(Z D×T)÷(A×t×F)]×Z        (7)
公式(7)表明,立体影像采集空间中任何一点上获得的两个具有不同视角的影像在立体影像播放空间中对应着唯一的一个点,并在该点上实现了会聚。
说明:使用等效会聚法拍摄前,先将摄像机的图像传感器71和72分别沿着水平方向和朝着彼此相反方向上移动h=T/2A的距离。实际上,更加实用的方法是拍摄完成后,对影像进行处理或后期制作时将获得的左右两个影像分别沿着水平方向上朝向对方的方向上平移一个h=T/2A的距离。等效会聚法拍摄获得的左右两个影像不仅可以获得比会聚法更理想的立体影像效果,符合人的双眼看世界的方式和习惯,而且获得的左右两个影像中没有梯形畸变。
对于光扇立体摄像机,因为两个光学镜头模组中的每一个模组中的一个光扇将成像前的影像沿着水平方向上压缩了一半,所以对于光扇左右格式的影像进行处理或后期制作时需要将左右两个影像分别沿着水平方向上朝向对方的方向上平移的距离为h=T/4A。如果使用像素表示时,其中一个影像的平移为h=T÷(4A×e)个像素,另一个影像平移的距离为h′=[T÷(4A×e)]+1或h′=[T÷(4A×e)]-1个像素。
对于传统左右格式的影像中的左右两个影像的平移是分别将左右两个影像分别沿着水平方向朝向对方的方向上进行平移,平移的距离分别为h=T/4A。如果使用像素表示平移距离时,左右两个影像平移的距离分别为h=T÷(4A×e)个像素。
对于左右格式的影像的平移是将左右两个影像分别沿着水平方向朝向对方的方向上进行平移,平移的距离分别为h=T/2A。如果使用像素表示平移距离时,左右两个影像的平移距离分别为h=T÷(2A×e)个像素。
对于两个独立的影像的平移是将左右两个影像分别沿着水平方向朝向对方的方向上进行平移,平移的距离分别为h=T/2A。如果使用像素表示平移距离时,左右两个影像的平移距离分别为h=T÷(2A×e)个像素。
图18所示的是一个确定关注点的左右两个影像在一个左右格式的影像截图中的位置示意图。一个包括了关注物体表面上一个关注点a的左右格式影像截图,左影像截图73和右影像截图74。关注点a的左影像75在左影像截图73中的位置距离左影像截图73中心的水平距离为X L,根据上述[0048]中所述的符号规则,X L<0。关注点a的右影像76在右影像截图74中的位置距离右影像截图74中心的水平距离为X R>0。关注点a的左影像75在左影像截图73中的位置和右影像76在右影像截图74中的位置都位于同一个横跨屏幕的水平线77上。水平线77距离左影像截图73和右影像截图74中心的垂直距离相等Y L=Y R
对于一个光扇左右格式和传统左右格式的影像,一个关注点a的左右两个影像在一个左右格式的影像截图73和74中的视差为P=2(X L-X R),代入到公式(1)中得到;
Zc=Z D×T÷(T-P)=(Z D×T)÷[T-2(X L-X R)](8a)
将公式(7)代入公式(8a)中,简化后得到,
Z=(A×t×F)÷[T-2(X L-X R)]      (9a)
对于一个左右格式的影像,一个关注点a的左右两个影像在一个左右格式的影像截图73和74中的视差为P=(X L-X R),代入到公式(1)中得到;
Zc=Z D×T÷(T-P)=(Z D×T)÷[T-(X L-X R)]     (8b)
将公式(7)代入公式(8b)中,简化后得到公式:
Z=(A×t×F)÷[T-(X L-X R)]       (9b)
对于两个独立的影像,左右两个影像截图是两个独立的影像截图。一个关注点a的左右两个影像在两个独立的影像截图中的视差为P=(X L-X R),代入到公式(1)中得到;
Zc=Z D×T÷(T-P)=(Z D×T)÷[T-(X L-X R)]      (8c)
将公式(7)代入公式(8c)中,简化后得到公式:
Z=(A×t×F)÷[T-(X L-X R)]        (9c)
在上述公式(8a)、(8b)和(8c)中;
当P=0时,(X L-X R)=0,Zc=Z D,立体虚像出现在屏幕上。
当P>0时,(X L-X R)>0,Zc>Z D,立体虚像出现在屏幕的后方。
当P<0时,(X L-X R)<0,Zc<Z D,立体虚像出现在屏幕和人的双眼之间。
图19所示的是一个关注点的左右两个影像与一个空间坐标相对应的原理示意图。根据图19所示的几何关系,得到下面的关系式,
f 1=F×(x+t/2)÷Z;f 2=F×(x-t/2)÷Z
f 1=d 1+h;f 2=d 2-h
得到坐标x和Z的公式:
x=[Z×(d1+h)÷F]-t/2        (10)
对于一个光扇左右格式和传统左右格式的影像,将d 1=2X L/A,h=T/4A和公式(9a)带入公式(10)中,简化后得到,
x={t×(2X L+T/4)÷[T-2(X L-X R)]}-t/2
(11a)
一个关注点a的空间座标a(x,y,z)是;
x={t×(2X L+T/4)÷[T-2(X L-X R)]}-t/2
y=Y L÷(m×A)=Y R÷(m×A)
z=(A×F×t)÷[T-2(X L-X R)]
对于一个左右格式的影像,将d1=X L/A,h=T/2A和公式(9b)带入公式(10)中得到;
x={t×(X L+T/2)÷[T-(X L-X R)]}-t/2
(11b)
一个关注点a的空间座标a(x,y,z)是;
x={t×(X L+T/2)÷[T-(X L-X R)]}-t/2
y=Y L÷(m×A)=Y R÷(m×A)
z=(A×F×t)÷[T-(X L-X R)]
对于两个独立的影像,将d1=X L/A,h=T/2A和公式(9c)带入公式(10)中得到;
x={t×(X L+T/2)÷[T-(X L-X R)]}-t/2
(11c)
一个关注点a的空间座标a(x,y,z)是;
x={t×(X L+T/2)÷[T-(X L-X R)]}-t/2
y=Y L÷(m×A)=Y R÷(m×A)
z=(A×F×t)÷[T-(X L-X R)]
图20所示的是测量关注物体表面上的一个关注点a到内窥镜的距离示意图。根据上述[0051]中所述的过程和方法,确定关注点a的左右两个影像分别在左右两个影像截图73和74中的位置X La和X Ra。内窥镜测量方法将计算出关注点a到内窥镜59前端面外表面中心的距离为;
Dc=[x_a²+y_a²+(z_a-c)²]^(1/2)
其中,c为坐标系原点到内窥镜前端面外表面之间的距离。
图21所示的是测量关注物体表面上的两个关注点a和b之间的距离示意图。根据上述[0052]中所述的过程和方法,分别确定关注点a和b的左右两个影像在左右两个影像截图73和74中的位置X La,X Ra,X Lb和X Rb。内窥镜测量方法将计算出关注物体表面上的两个关注点a和b之间距离为;
D_ab=[(x_b-x_a)²+(y_b-y_a)²+(z_b-z_a)²]^(1/2)
图22所示的是测量关注物体表面上的一个关注点a到一条通过了两个特征点b和c的一条直线的距离示意图。第一步,根据上述[0053]中所述的过程和方法,确定关注点a的左右两个影像分别在左右两个影像截图73和74中的位置X La和X Ra。第二步,分别确定位于一条直线上的两个特征点b和c的左右两个影像在左右两个影像截图73和74中的位置X Lb,X Rb,X Lc和X Rc。内窥镜测量方法将计算出关注物体表面上的一个关注点a到一条通过了两个特征点b和c的一条直线的距离为;
D_a-bc={[x_a-λ(x_c-x_b)-x_b]²+[y_a-λ(y_c-y_b)-y_b]²+[z_a-λ(z_c-z_b)-z_b]²}^(1/2)
其中,λ=[(x_a-x_b)×(x_c-x_b)+(y_a-y_b)×(y_c-y_b)+(z_a-z_b)×(z_c-z_b)]÷[(x_c-x_b)²+(y_c-y_b)²+(z_c-z_b)²]
图23所示的是测量关注物体表面上的一个关注点a到一个平面78的距离示意图。第一步,根据上述[0054]中所述的过程和方法,确定关注点a的左右两个影像分别在左右两个影像截图73和74中的位置X La和X Ra。第二步,在平面78上分别确定不都在同一条直线上的三个特征点b,c和d的左右两个影像在左右两个影像截图73和74中的位置X Lb,X Rb,X Lc,X Rc,X Ld和X Rd。内窥镜测量方法将计算出关注物体表面上的一个关注点a到一个包括了三个特征点b,c和d的一个平面78的距离为;
D_a-(bcd)=|A×x_a+B×y_a+C×z_a+D|÷(A²+B²+C²)^(1/2)
其中,A,B,C由下面的行列式中获得,D=-(Ax b+By b+Cz b)
A=(y_c-y_b)×(z_d-z_b)-(z_c-z_b)×(y_d-y_b);B=(z_c-z_b)×(x_d-x_b)-(x_c-x_b)×(z_d-z_b);C=(x_c-x_b)×(y_d-y_b)-(y_c-y_b)×(x_d-x_b)
图24所示的是测量一个平面物体表面面积示意图。一个被闭环拼接曲线79包围的关注平面80的表面面积的测量方法和步骤;第一步,根据上述[0056]和[0057]中所述的过程和方法,使用触屏笔,手指或鼠标在立体触摸屏幕上画出一条包括了关注平面80表面面积的闭环拼接曲线79。内窥镜测量方法将计算出被一条闭环拼接曲线79包围的面积。该面积只是关注平面80表面的实际面积在一个与立体摄像机中心线(Z轴)垂直的平面上正投影的面积。第二步,根据上述[0054]中所述的过程和方法,分别确定包括了关注平面80表面上不都在一条直线上的三个特征点b,c和d的左右两个影像在左右两个影像截图73和74中的位置X Lb,X Rb,X Lc,X Rc,X Ld和X Rd。内窥镜测量方法将计算出关注平面80表面的实际面积等于第一步中获得的投影面积除以由关注平面80表面上的三个特征点b,c和d确定的法向矢量 N与Z轴之间夹角的余弦。
图25所示的是测量一个平板物体体积示意图。一个被闭环拼接曲线81包围的关注平板82的体积的测量方法和步骤;第一步,根据上述[0088]中所述的过程和方法,获得被一条闭环拼接曲线81包围的关注平板82表面的实际面积。第二步,根据上述[0052]中所述的过程和方法,分别确定关注平板82上的两个具有典型厚度的特征点a和b的左右两个影像在左右两个影像截图73和74中的位置X La,X Ra,X Lb和X Rb。立体测量方法将计算出关注平板82的实际厚度等于两个特征点a和b之间的距离乘以两个特征点构成的矢量 ab与关注平板82表面的法向矢量 N之间夹角的余弦。一个被闭环曲线81环绕的关注平板82的实际体积等于平板82表面的实际面积乘以实际厚度。
图26所示的是测量一个平面物体表面裂纹横截面示意图。图26-1中,一个关注物体表面上出现了一个裂纹83。裂纹83的测量内容包括裂纹宽度,纵向长度,表面裂纹开裂面积,表面裂纹横截面84处的开口形状和深度。根据上述[0052],[0056]和[0057]中所述的过程和方法分别获得裂纹83的宽度,纵向长度和表面开裂面积。表面裂纹横截面84处的开口形状和深度的测量方法和步骤:根据上述[0061]中所述的过程和方法,第一步,调整内窥镜中心线与裂纹83开裂的纵向方向一致并与物体表面平行。当立体触模屏幕44中看到物体表面裂纹横截面85中一个具有代表性的位置时采集一个左右格式的影像截图73和74。图26-2所示的是裂纹横截面84处开口部分85的形状和深度。第二步,确定裂纹横截面84处的开口部分85的左右两个边缘与关注物体表面的两个交点a和b之间的距离V,V为裂纹83在横截面84处的表面裂纹宽度。第三步,只保留其中的一个影像截图73或74并将保留的一个影像截图放大到全屏幕。使用触屏笔,手指或鼠标分别确定裂纹横截面84处的开口部分85的左边缘上的特征点X L1,X L2,X L3,……和右边缘上的特征点X R1,X R2,X R3,…….。内窥镜测量方法将计算出裂纹横截面84处开口部分85的左右两个边缘上每一个特征点的位置。裂纹横截面84处的开口部分85的左右两个边缘分别以点a和点b为起点依次连接裂纹横截面84处开口部分85的左右边缘上相邻特征点X L#和X R#的直线组成。每一个特征点X L#和X R#与点a和点b之间的垂直坐标y L#和y R#分别代表了该特征点距离关注物体表面的深度。
图27所示的是测量一个表面受损凹陷横截面示意图。图27-1中,一个关注物体表面上出现了一个凹陷部分86。凹陷部分86的测量内容包括凹陷部分宽度,长度,面积,横截面87的形状和最大深度。根据上述[0052],[0056]和[0057]中所述的过程和方法获得关注物体表面凹陷部分86的宽度,长度和表面凹陷的面积。测量物体表面凹陷部分横截面87的方法和步骤:根据上述[0062]中所述的过程和方法,第一步,调整内窥镜中心线与物体凹陷处表面平行并在立体触模屏幕44中看到物体表面凹陷86中一个具有代表性的部分时采集一个左右格式的影像截图73和74。图27-2所示的是横截面87的凹陷横截面的形状。第二步,确定横截面87与物体表面的两个交点a和b之间的距离U。第三步,在立体触模屏幕44的菜单中选择“受损横截面”并输入物体表面在受损部分横截面处的曲率半径+R,(凸曲面)或-R(凹曲面)。主控屏幕上将出现一个通过点a和点b和曲率半径为R的曲线88。第四步,保留其中的一个影像截图73或74并将保留的一个影像截图放大到全屏幕。使用触屏笔,手指或鼠标在两个交点a和b之间沿着影像截图中凹陷部分边缘画出一条拼接曲线89。物体表面上的一个凹陷横截面87上的一条闭环拼接曲线是由一条曲率半径为R的曲线88和凹陷部分影像边缘的一条拼接曲线89组成。第五步,在一个影像截图中确定横截面87上的最低点c的位置。内窥镜测量技术将计算出点a和点b分别距离点c之间的深度ya和yb以及横截面87的面积。

Claims (11)

  1. 一种立体内窥镜,其特征在于,所述的一种立体内窥镜包括;
    一个光扇立体摄像机;
    一个双器械通道医疗内窥镜;
    一个双器械通道医疗内窥镜操作手柄;
    一个医疗内窥镜稳定器;
    一个医疗内窥镜工作台;
    一个立体影像处理器。
  2. 根据权利要求1所述的一种立体内窥镜,其特征在于,所述的一个光扇立体摄像机包括两个完全相同的光学镜头模组、一个图像传感器(CCD或CMOS)和一个立体影像处理器;一个光扇立体摄像机中设置有两个完全相同的光学镜头模组,两个光学镜头模组中心线与光扇立体摄像机中心线对称且彼此平行,每一个光学镜头模组中设置有一个光扇,光扇沿着一条位于两个光学镜头模组中心线所在的平面上并与两个光学镜头模组中心线垂直的直线方向上对光学镜头模组采集的影像进行压缩,沿着一条与两个光学镜头模组中心线所在的平面垂直的直线方向上保持影像不变,两个光学镜头模组采集的两个影像分别经过各自模组中的光扇后,在同一个图像传感器成像表面的左半部和右半部上分别成像。
  3. 根据权利要求1或2所述的一种立体内窥镜,其特征在于,所述的光扇立体摄像机中每一个光学镜头模组中设置的一个光扇是由两个轴线相互垂直的柱镜组成,所述的柱镜是一个正柱面透镜或正柱面镜片,柱镜带有曲率的柱面表面是圆柱面或非圆柱面,其中的一个柱镜的轴线位于两个光学镜头模组中心线所在的平面上并与两个光学镜头模组中心线垂直,另一个柱镜的轴线与两个光学镜头模组中心线所在的平面垂直。
  4. 根据权利要求1或3所述的一种立体内窥镜,其特征在于,所述的光扇立体摄像机输出一种光扇左右格式的影像,光扇左右格式的影像中的左右两个影像分别是两个相对应的光学镜头模组采集的两个被各自模组中的光扇沿着一条位于两个光学镜头模组中心线所在的平面上并与两个光学镜头模组中心线垂直的直线方向上压缩了一半,沿着一条与两个光学镜头模组中心线所在的平面垂直的直线方向上保持不变的影像。
  5. 根据权利要求1或3所述的一种立体内窥镜,其特征在于,所述的光扇立体摄像机有三种不同的模型,三种不同模型的光扇立体摄像机分别使用下面三种具有不同光学设计和结构设计的光学镜头模组;
    第一种光学镜头模组设计包括一个镜头组、一个光扇和一个直角棱镜;镜头组中设置有一个斜平板镜片;
    第二种光学镜头模组设计包括两个镜头组、两个直角棱镜或一个斜方棱镜、一个光扇和一个直角棱镜;两个直角棱镜或一个斜方棱镜位于两个镜头组之间;
    第三种光学镜头模组设计包括两个镜头组、一个光扇和两个直角棱镜;两个直角棱镜之间的距离是固定的或可以变化的;两个直角棱镜位于两个镜头组之间;
    上述所述的镜头组是由一组镜片组成,镜片可以是球面或非球面镜片,也可以全部都是非球面镜片。
  6. 根据权利要求1所述的一种立体内窥镜,其特征在于,所述的立体影像处理器是一个包括一个图像处理芯片(ISP)、一个无线通讯模块、一个感知模块和定位模块、一个立体触模屏幕、一种立体影像平移方法、一种内窥镜测量方法和操作系统的装置。
  7. 根据权利要求1、4或6所述的一种立体内窥镜的立体影像平移方法,其特征在于,所述的一种立体影像平移方法是一种将光扇立体摄像机输出的光扇左右格式的影像中的左右两个影像分别沿着一条位于两个光学镜头模组中心线所在的平面上并与两个光学镜头模组中心线垂直的直线方向上分别朝向对方进行一个平移量等于h和h’的平移的方法。
  8. 根据权利要求1或6所述的一种立体内窥镜的内窥镜测量方法,其特征在于,所述的内窥镜测量方法是一种根据两个独立和彼此平行设置的摄像机与一个关注物体之间构成的几何关系和数学原理,建立一个关注物体上的一个关注点的左右两个影像在一个左右格式的影像截图中的视差与该关注点在实际中的空间坐标的关系,建立一个关注物体表面面积在一个影像截图中的影像与该关注物体表面在实际中的表面面积的关系的方法;所述的内窥镜测量方法不仅应用于光扇立体摄像机,而且可以应用于所有其他拥有两个彼此平行设置的摄像机的立体摄像机。
  9. 根据权利要求1所述的一种立体内窥镜,其特征在于,所述的双器械通道医疗内窥镜是一种拥有两个独立的器械通道的医疗内窥镜;所述的双器械通道医疗内窥镜操作手柄是一种拥有两个独立的器械通道和器械通道接入口的医疗内窥镜操作手柄;一个双器械通道医疗内窥镜中的两个器械通道分别与一个双器械通道医疗内窥镜操作手柄上相对应的两个器械通道和器械通道接入口连接在一起。
  10. 根据权利要求1所述的一种立体内窥镜,其特征在于,所述的医疗内窥镜稳定器是一种通过固定仍在患者身体外的部分内窥镜软管的方式稳定已经位于患者身体内的内窥镜软管最前端的摄像机镜头和器械通道出口的位置、方向和角度的装置。
  11. 根据权利要求1所述的一种立体内窥镜,其特征在于,所述的医疗内窥镜工作台是一种将包括立体触摸屏幕、医疗内窥镜操作手柄和医疗内窥镜稳定器,以位置和角度可以随时被调整的方式固定在其上的装置。
PCT/CN2019/096292 2018-08-27 2019-07-17 一种立体内窥镜及内窥镜测量方法 WO2020042796A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810978026.1 2018-08-27
CN201810978026.1A CN109259717B (zh) 2018-08-27 2018-08-27 一种立体内窥镜及内窥镜测量方法

Publications (1)

Publication Number Publication Date
WO2020042796A1 true WO2020042796A1 (zh) 2020-03-05

Family

ID=65154236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096292 WO2020042796A1 (zh) 2018-08-27 2019-07-17 一种立体内窥镜及内窥镜测量方法

Country Status (2)

Country Link
CN (1) CN109259717B (zh)
WO (1) WO2020042796A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022101338A1 (en) * 2020-11-13 2022-05-19 Maxer Endoscopy Gmbh Imaging system and laparoscope for imaging an object

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN109259717B (zh) * 2018-08-27 2020-08-14 彭波 一种立体内窥镜及内窥镜测量方法
CN110510147B (zh) * 2019-08-02 2022-11-22 西安飞机工业(集团)有限责任公司 一种飞机结构裂纹检测方法
CN112995640A (zh) * 2021-02-23 2021-06-18 毛新 一种同屏立体摄像机
CN112969060A (zh) * 2021-02-23 2021-06-15 毛新 一种移轴立体摄像机
CN114383543B (zh) * 2021-12-14 2022-12-27 上海交通大学 Waam熔池三维重建方法
CN114581455B (zh) * 2022-03-22 2023-03-31 中国工程物理研究院流体物理研究所 金属球腔内表面大视场高分辨率形貌图像获取系统及方法
CN115486794A (zh) * 2022-09-21 2022-12-20 彭波 一种导丝内窥镜

Citations (8)

Publication number Priority date Publication date Assignee Title
US20080151041A1 (en) * 2006-12-21 2008-06-26 Intuitive Surgical, Inc. Stereoscopic endoscope
CN101500470A (zh) * 2006-06-13 2009-08-05 直观外科手术公司 微创手术系统
CN103399410A (zh) * 2013-08-08 2013-11-20 彭波 单镜头立体分光镜成像装置
CN103889353A (zh) * 2011-08-12 2014-06-25 直观外科手术操作公司 外科手术器械中的图像捕获单元
CN103961177A (zh) * 2008-07-22 2014-08-06 因西特医疗技术有限公司 组织改变装置及其使用方法
CN106464780A (zh) * 2013-10-03 2017-02-22 特拉华大学 Xslit相机
US20170235120A1 (en) * 2016-02-12 2017-08-17 Nikon Corporation Non-telecentric multispectral stereoscopic endoscope objective
CN109259717A (zh) * 2018-08-27 2019-01-25 彭波 一种立体内窥镜及内窥镜测量方法

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN2716858Y (zh) * 2004-02-27 2005-08-10 杨美丽 平面影像转换成立体影像的装置
JP2009064355A (ja) * 2007-09-07 2009-03-26 Cellius Inc プログラム、情報記憶媒体及び画像生成システム
US8228368B2 (en) * 2008-04-26 2012-07-24 Intuitive Surgical Operations, Inc. Augmented stereoscopic visualization for a surgical robot using a captured fluorescence image and captured stereoscopic visible images
CN101588512B (zh) * 2009-01-07 2011-06-08 深圳市掌网立体时代视讯技术有限公司 一种立体摄像装置及方法
CN102298216A (zh) * 2010-06-25 2011-12-28 韩松 一种用于普通照相机或摄像机的立体镜头
CN202096192U (zh) * 2011-06-01 2012-01-04 广州宝胆医疗器械科技有限公司 经人工通道的智能电子内窥镜系统
CN102973238A (zh) * 2012-12-16 2013-03-20 天津大学 一种用于内窥镜装置的立体镜头
KR101691517B1 (ko) * 2015-03-25 2017-01-09 (주)아솔 실린더리컬 렌즈를 이용한 무초점 3차원 광학 장치
CN205404973U (zh) * 2016-03-01 2016-07-27 刘向峰 一种具有左右格式图像合成镜片和左右眼遮光板的手机裸眼3d观影盒

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN101500470A (zh) * 2006-06-13 2009-08-05 直观外科手术公司 Minimally invasive surgical system
US20080151041A1 (en) * 2006-12-21 2008-06-26 Intuitive Surgical, Inc. Stereoscopic endoscope
CN103961177A (zh) * 2008-07-22 2014-08-06 因西特医疗技术有限公司 Tissue modification devices and methods of using the same
CN103889353A (zh) * 2011-08-12 2014-06-25 直观外科手术操作公司 Image capture unit in a surgical instrument
CN103399410A (zh) * 2013-08-08 2013-11-20 彭波 Single-lens stereoscopic beam-splitter imaging device
CN106464780A (zh) * 2013-10-03 2017-02-22 特拉华大学 XSlit camera
US20170235120A1 (en) * 2016-02-12 2017-08-17 Nikon Corporation Non-telecentric multispectral stereoscopic endoscope objective
CN109259717A (zh) * 2018-08-27 2019-01-25 彭波 Stereo endoscope and endoscopic measurement method

Cited By (2)

Publication number Priority date Publication date Assignee Title
WO2022101338A1 (en) * 2020-11-13 2022-05-19 Maxer Endoscopy Gmbh Imaging system and laparoscope for imaging an object
EP4000494A1 (en) * 2020-11-13 2022-05-25 Maxer Endoscopy GmbH Imaging system, laparoscope and method for imaging an object

Also Published As

Publication number Publication date
CN109259717B (zh) 2020-08-14
CN109259717A (zh) 2019-01-25

Similar Documents

Publication Publication Date Title
WO2020042796A1 (zh) Stereo endoscope and endoscopic measurement method
US8736672B2 (en) Algorithmic interaxial reduction
WO2019085392A1 (zh) Tooth three-dimensional data reconstruction method, device, and system
US20160295194A1 (en) Stereoscopic vision system generating stereoscopic images with a monoscopic endoscope and an external adapter lens and method using the same to generate stereoscopic images
CN110830784B (zh) Tilt-shift stereo camera
US20030083551A1 (en) Optical observation device and 3-D image input optical system therefor
WO2012147363A1 (ja) Image generation device
JP2013244104A (ja) Stereoscopic endoscope device
CN107462994A (zh) Immersive virtual reality head-mounted display device and immersive virtual reality display method
US9983384B2 (en) Stereoscopic lens for digital cameras
CN110780455A (zh) Stereo glasses
WO2017133160A1 (zh) See-through method and system for smart glasses
CN107595408A (zh) 2D and naked-eye 3D dual-screen endoscope system and display method
CN101836852A (zh) Medical endoscope incorporating a structured-light three-dimensional imaging system
CN112969060A (zh) Tilt-shift stereo camera
JP2020191624A (ja) Electronic device and control method therefor
CN108322730A (zh) Panoramic depth camera system capable of capturing 360-degree scene structure
CN109151273B (zh) Light-fan stereo camera and stereo measurement method
JP2006220603A (ja) Imaging device
CN110840385A (zh) Single-detector-based binocular 3D endoscope three-dimensional image processing method and imaging system
CN112995640A (zh) Same-screen stereo camera
CN107669234A (zh) Single-lens landscape-format stereo endoscope system
WO2020244273A1 (zh) Dual-camera three-dimensional stereo imaging system and processing method
CN209474563U (zh) Single-lens landscape-format stereo endoscope system
WO2021088539A1 (zh) Tilt-shift stereo camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19856009

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19856009

Country of ref document: EP

Kind code of ref document: A1