WO2022206406A1 - Augmented reality system and method based on spatial position of corrected object, and computer-readable storage medium - Google Patents

Augmented reality system and method based on spatial position of corrected object, and computer-readable storage medium

Info

Publication number
WO2022206406A1
WO2022206406A1 (PCT/CN2022/081469, CN2022081469W)
Authority
WO
WIPO (PCT)
Prior art keywords
position information
space
correcting
augmented reality
image
Application number
PCT/CN2022/081469
Other languages
French (fr)
Chinese (zh)
Inventor
孙非
朱奕
郭晓杰
崔芙粒
单莹
Original Assignee
上海复拓知达医疗科技有限公司
Application filed by 上海复拓知达医疗科技有限公司 filed Critical 上海复拓知达医疗科技有限公司
Publication of WO2022206406A1 publication Critical patent/WO2022206406A1/en

Classifications

    • A61B 34/20 Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/25 User interfaces for surgical systems
    • G06T 19/006 Mixed reality (manipulating 3D models or images for computer graphics)
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G16H 50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for simulation or modelling of medical disorders
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A61B 2034/108 Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2055 Optical tracking systems
    • A61B 2034/2065 Tracking using image or pattern recognition
    • A61B 2034/2068 Surgical navigation using pointers, e.g. pointers having reference marks for determining coordinates of body points

Definitions

  • The present invention relates to the technical field of image processing and, in particular, to an augmented reality system and method based on correcting the spatial position of an object.
  • Augmented reality technology usually captures images of a real scene through a camera, analyzes and processes the captured images, and adds additional information based on the real scene for display to the user; that is, it augments reality.
  • Analyzing and processing images of a real scene often includes locating objects in the scene. Certain applications place extremely high demands on the accuracy of object positioning, which object positioning in the prior art cannot meet.
  • For example, when augmented reality technology is applied to surgical navigation scenarios, the positional relationships between medical devices, the patient, and the scene must be determined very accurately to ensure that accurate navigation information is provided to the user.
  • Puncture navigation based on augmented reality technology can realize fast and accurate surgical navigation with simple, convenient, easy-to-learn and easy-to-use equipment.
  • One core of precise navigation is the accurate spatial positioning of surgical instruments based on visible-light patterns, together with registration of virtual organs to the real human body; both depend on accurate spatial positioning of identifiable patterns on the object to be positioned. Owing to device-design constraints, identifiable patterns of different sizes and shapes exhibit different spatial positioning accuracy, determined by the spatial distribution of their pattern feature points or by their production processes.
  • The purpose of the present invention is to provide an augmented reality system and method based on correcting the spatial position of an object.
  • An augmented reality system based on correcting the spatial position of an object comprises a first acquisition unit, a second acquisition unit, a correction unit, and a display unit, wherein:
  • the first acquisition unit is configured to capture a first object image of the first object in space and identify the first object identification characteristic in that image to obtain first object spatial position information;
  • the second acquisition unit is configured to capture a second object image of the second object in space when the second object is at a specific position, and identify the second object identification characteristic in that image to obtain second object spatial position information;
  • the correction unit includes a first correction unit and/or a second correction unit, wherein:
  • the first correction unit is configured to correct the second object spatial position information according to the first object spatial position information and the specific position;
  • the second correction unit is configured to correct the first object spatial position information according to the second object spatial position information;
  • the display unit is configured to display augmented reality information related to the position of the first object or the second object.
  • The first object identification characteristic includes at least a first object body morphological characteristic and/or a first object marker identification characteristic; the body morphological characteristic includes at least the structure, shape or color of the first object body; the marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the first object.
  • The second object identification characteristic includes at least a second object body morphological characteristic and/or a second object marker identification characteristic; the body morphological characteristic includes at least the structure, shape or color of the second object body; the marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the second object.
  • The first object spatial position information includes at least the first object spatial coordinates and/or the first object orientation; the second object spatial position information includes at least the second object spatial coordinates and/or the second object orientation.
  • The specific position is the position at which the second object has a specific positional relationship with the first object; the specific positional relationship includes coincidence or partial coincidence between the second object and a preset point, line or surface on the first object.
  • the first correction unit is specifically configured to correct the x and y coordinates of the second object;
  • the second correction unit is specifically configured to correct the z coordinate of the first object.
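  • To make the division of labor between the two correction units concrete, the following is a minimal sketch in Python. It assumes poses are plain (x, y, z) tuples and that the "specific position" amounts to a known offset of the second object relative to the first; all function and variable names are illustrative, not the patent's actual implementation.

```python
# Hedged sketch of the two correction units. Poses are (x, y, z)
# tuples; `offset` is the known relative position implied by the
# "specific position". Names are hypothetical.

def correct_second_object(first_pos, offset, second_measured):
    """First correction unit: derive the second object's theoretical
    position from the first object's pose plus the known offset, then
    overwrite the less reliable x and y coordinates of the measurement."""
    theoretical = tuple(f + o for f, o in zip(first_pos, offset))
    # keep the measured z, replace x and y with the theoretical values
    return (theoretical[0], theoretical[1], second_measured[2])

def correct_first_object(second_pos, offset, first_measured):
    """Second correction unit: work backwards from the second object's
    pose to correct the first object's z coordinate."""
    theoretical = tuple(s - o for s, o in zip(second_pos, offset))
    return (first_measured[0], first_measured[1], theoretical[2])
```

  • Each unit trusts the coordinate components the other measurement source estimates well, which is the mutual-correction idea the claims describe.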
  • the first object is a fixture in a surgical scene; the second object is an operating instrument in the surgical scene.
  • An augmented reality method based on correcting the spatial position of an object includes: capturing a first object image in space and identifying the first object identification characteristic in it to obtain first object spatial position information; capturing a second object image when the second object is at a specific position and identifying the second object identification characteristic in it to obtain second object spatial position information; correcting the second object spatial position information according to the first object spatial position information and the specific position, and/or correcting the first object spatial position information according to the second object spatial position information; and displaying augmented reality information related to the position of the first object or the second object.
  • The first object identification characteristic includes at least a first object body morphological characteristic and/or a first object marker identification characteristic; the body morphological characteristic includes at least the structure, shape or color of the first object body; the marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the first object.
  • The second object identification characteristic includes at least a second object body morphological characteristic and/or a second object marker identification characteristic; the body morphological characteristic includes at least the structure, shape or color of the second object body; the marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the second object.
  • The first object spatial position information includes at least the first object spatial coordinates and/or the first object orientation; the second object spatial position information includes at least the second object spatial coordinates and/or the second object orientation.
  • The specific position is a position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; the specific positional relationship includes coincidence or partial coincidence of points, lines or surfaces.
  • Correcting the second object spatial position information according to the first object spatial position information and the specific position includes: calculating second object theoretical position information according to the first object spatial position information and the specific positional relationship, and correcting the second object spatial position information according to that theoretical position information.
  • Correcting the second object spatial position information includes correcting the x and y coordinates of the second object.
  • Correcting the first object spatial position information according to the second object spatial position information includes: calculating first object theoretical position information according to the second object spatial position information and the specific positional relationship, and correcting the first object spatial position information according to that theoretical position information.
  • Correcting the first object spatial position information includes correcting the z coordinate of the first object.
  • the first object is a fixture in a surgical scene; the second object is an operating instrument in the surgical scene.
  • The present invention also provides a computer-readable storage medium storing a non-transitory computer-executable program for instructing a computer to execute the method described in the present invention.
  • The present invention provides an augmented reality system and method based on correcting the spatial position of an object.
  • Two different objects can mutually correct each other through image acquisition and position comparison, improving the optical positioning accuracy of one or both.
  • The method and system can be applied in various settings, such as positioning medical-device operations during surgery, teaching simulations, and games; accurate positioning and augmented reality display of the location can help users perform accurate and complete operations.
  • FIG. 1 is a structural block diagram of the augmented reality system of the present invention based on correcting the spatial position of an object;
  • FIG. 2 is an example diagram of a specific embodiment of the present invention;
  • FIG. 3 is a flow chart of the augmented reality method of the present invention based on correcting the spatial position of an object;
  • FIG. 4 is a schematic diagram of mutual calibration based on the identification plate of the present invention.
  • The present invention provides an augmented reality method based on correcting the spatial position of an object, which can be applied to a surgical scene, an operation scene in a simulated teaching process, or a game.
  • The embodiment of the present invention provides the user with positioning of an instrument relative to tissue and/or an instrument located within the subject's body.
  • The user is the observer of the whole in-vivo navigation process and is also the operator who advances the instrument into the subject's body.
  • The subject can be a person or another animal on which the user needs to operate.
  • The instrument can be any tool that can be advanced into the subject's body.
  • The instrument may be, for example, a puncture needle, a biopsy needle, a radiofrequency or microwave ablation needle, an ultrasound probe, a rigid endoscope, endoscopic oval forceps, an electric knife, a stapler, or another medical instrument.
  • The first object is a fixture in the surgical scene; the second object is an operating instrument in the surgical scene.
  • An augmented reality system based on correcting the spatial position of an object can be applied to surgical operations, simulated teaching operations, or games, and specifically includes a first acquisition unit 1, a second acquisition unit 2, a correction unit 3, and a display unit 4, wherein:
  • the first acquisition unit 1 is configured to capture an image of a first object in space and identify the first object identification characteristic in the first object image to obtain first object spatial position information;
  • the second acquisition unit 2 is configured to capture a second object image of the second object in space when the second object is at a specific position, and identify the second object identification characteristic in the second object image to obtain second object spatial position information;
  • the correction unit 3 includes a first correction unit 31 and/or a second correction unit 32, wherein:
  • the first correction unit 31 is configured to correct the second object spatial position information according to the first object spatial position information and a specific position;
  • the second correction unit 32 is configured to correct the first object spatial position information according to the second object spatial position information;
  • the display unit 4 is configured to display augmented reality information related to the position of the first object or the second object.
  • The first object spatial position information includes at least the first object spatial coordinates and/or the first object orientation, enabling specific positioning of the fixed first object in space.
  • The first object identification characteristic includes at least the first object body morphological characteristic and/or the first object marker identification characteristic.
  • The body morphological characteristic of the first object includes at least the structure, shape or color of the first object body, but in specific implementations it is not limited to these and may be any other identifiable characteristic of the object.
  • In the present invention, an object with a fixed shape can be fixed in the scene, and the shape of its structure is recognized.
  • Different display methods can be used to prompt the user as to whether the capture and recognition processes succeeded.
  • The object is thereby positioned and identified, and accurate spatial position information of the object is obtained.
  • The first object marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the first object.
  • The pattern, graphic or two-dimensional code can be applied to the first object through a printing process; identifiable patterns have different spatial accuracy depending on their own pattern rules and production characteristics. Combining recognizable patterns with different characteristics enables rapid spatial calibration of navigation instruments.
  • The device for capturing the image of the first object is a device capable of image capture, and its capture angle is kept consistent with the user's viewing direction.
  • The user may wear the image capture device on the body, for example on the head.
  • In this embodiment the image capture device is a head-mounted optical camera; whatever posture the user adopts, the camera's acquisition angle remains consistent with the viewing direction.
  • Capture the first object image through the image acquisition device, find the structure information corresponding to the first object in a database according to the first object image, identify the position and orientation of the first object, and set current spatial coordinates for the first object, denoted X1, Y1, Z1.
  • the second object is a moving instrument
  • the second object space position information at least includes the second object space coordinates and/or the second object orientation.
  • The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the specific positional relationship may be that the second object coincides, or partially coincides within a preset range, with a preset point, line or surface on the first object.
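  • The "coincidence within a preset range" test described above can be expressed as a simple tolerance check. This is a hedged sketch only: it assumes both points are 3-D coordinates in a common frame and a millimetre-scale tolerance, neither of which the patent specifies.

```python
import math

def is_coincident(instrument_pt, preset_pt, tol=1.0):
    """True when the moving instrument's point lies within `tol`
    (e.g. millimetres, illustrative) of the preset point on the
    fixed object, i.e. coincidence within a preset range."""
    return math.dist(instrument_pt, preset_pt) <= tol
```

  • Once this predicate fires, the system knows the two objects are in the claimed "specific positional relationship" and can trigger the correction step.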
  • The first correction unit 31 is specifically configured to: calculate the second object theoretical position information according to the first object spatial position information and the specific positional relationship, and correct the second object spatial position information according to that theoretical position information; exemplarily, the first correction unit 31 corrects the x and y coordinates of the second object.
  • the display unit 4 is configured to display the image of the second object, the information content associated with the position of the second object, or the position prompt information associated with the position of the second object.
  • The second object identification characteristic includes at least the second object body morphological characteristic and/or the second object marker identification characteristic; the body morphological characteristic includes at least the structure, shape or color of the second object body; the marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the second object.
  • The second object spatial position information includes at least the second object spatial coordinates and/or the second object orientation.
  • A two-dimensional code is a black-and-white figure distributed on a plane; the points on it are very easy to identify, and the code can be positioned by identifying at least three of those points. Because the two-dimensional code is fixed to an object or instrument, positioning the code positions the object or instrument to which it is fixed.
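  • As an illustration of how a marker's pose can be recovered once at least three of its points are identified, the sketch below fits the rigid rotation and translation mapping the code's known point layout onto the observed 3-D points using the Kabsch algorithm. This is one standard approach, not necessarily the method used by the patent; names are illustrative.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Recover rotation R and translation t mapping the marker's model
    points (known layout, marker frame) onto their observed 3-D
    positions (camera frame) via the Kabsch algorithm. Requires at
    least three non-collinear point correspondences."""
    P = np.asarray(model_pts, dtype=float)
    Q = np.asarray(observed_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps R a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

  • With R and t in hand, any known point of the marked object (for example a needle tip at a fixed offset from the code) can be located in the camera frame.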
  • The second object marker identification characteristic may also be another planar graphic, such as a checkerboard.
  • Using a QR code or checkerboard as an identifier makes positioning objects or instruments more accurate and faster, so that fast-moving instruments can be navigated more precisely.
  • The marker fixed on the surface of the instrument can also be a three-dimensional figure.
  • The marker can be the handle of the instrument, or a structure fixed to the side of the handle.
  • In this embodiment, the second object is a puncture needle used in an operation; the end of the puncture needle is provided with an identification structure on which a two-dimensional code is printed.
  • The second acquisition unit 2 is specifically configured such that, with the first object fixed in space and the second object moving, when the second object moves to the specific position, the second object is identified according to its marker identification characteristics, its orientation is obtained, and/or current second object spatial coordinates are set for it.
  • The second correction unit 32 is specifically configured to: calculate the first object theoretical position information according to the second object spatial position information and the specific position, and correct the first object spatial position information according to that theoretical position information; exemplarily, the second correction unit 32 corrects the z coordinate of the first object.
  • the display unit 4 is configured to display the image of the first object, the information content associated with the position of the first object, or the position prompt information associated with the position of the first object.
  • The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the second object may coincide, or partially coincide within a preset range, with a preset point, line or surface on the first object.
  • When in use, the user can see three-dimensional renderings of in-vivo organs, lesions, and the parts of instruments inside the subject's body that are not actually visible, displayed at the corresponding positions in the actual surgical scene.
  • Invisible internal organs, lesions, and the in-body parts of the instrument are aligned with the human body and the actual instrument to guide the user through the surgical procedure.
  • Both the first object and the second object can be identified, and optical identification objects with different error characteristics can be used in the same scene.
  • The optical positioning accuracy of one or both can thereby be improved.
  • The correlation of the coordinates of different identification patterns in the same space is determined by matching the geometric structure of spatially correlated instruments; by using known trusted values, the spatial recognition positions of different recognition patterns are calibrated against one another.
  • the present invention also provides an augmented reality method based on correcting the position of an object in space, including:
  • S1: capture the first object image in space, identify the first object identification characteristic in the first object image, and obtain the first object spatial position information;
  • In order to position and calibrate the second object, first obtain the specific spatial position information of the fixed object, where the spatial position information includes at least the first object spatial coordinates and/or the first object orientation.
  • The first object identification characteristic includes at least the first object body morphological characteristic and/or the first object marker identification characteristic.
  • The body morphological characteristic of the first object includes at least the structure, shape or color of the first object body, but in specific implementations it is not limited to these and may be any other identifiable characteristic of the object.
  • In the present invention, an object with a fixed shape can be fixed in the scene, and the shape of its structure is recognized.
  • Different display methods can be used to prompt the user as to whether the capture and recognition processes succeeded; the object is positioned and identified, and accurate spatial position information of the object is obtained.
  • The first object marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the first object.
  • The pattern, graphic or two-dimensional code can be applied to the first object through a printing process; identifiable patterns have different spatial accuracy depending on their own pattern rules and production characteristics. Combining recognizable patterns with different characteristics enables rapid spatial calibration of navigation instruments.
  • The device for capturing the image of the first object is a device capable of image capture, and its capture angle is consistent with the user's viewing direction.
  • The user may wear the image capture device on the body, for example on the head.
  • In this embodiment the image capture device is a head-mounted optical camera whose acquisition angle remains consistent with the user's viewing direction. This not only ensures that the displayed angle is the angle the user is viewing, ensuring accurate display of the instrument, but also avoids interfering with the user's operations during use.
  • The first object image is acquired by the image acquisition device, the first object identification characteristic is identified, the first object body morphological characteristics and orientation are obtained from it, and current spatial coordinates are set for the first object, denoted X1, Y1, Z1.
  • the second object is a moving instrument
  • the second object space position information at least includes the second object space coordinates and/or the second object orientation.
  • The second object identification characteristic includes at least the second object body morphological characteristic and/or the second object marker identification characteristic; the body morphological characteristic includes at least the structure, shape or color of the second object body; the marker identification characteristic includes at least a pattern, graphic or two-dimensional code provided on the second object.
  • A two-dimensional code is a black-and-white figure distributed on a plane; the points on it are very easy to identify, and the code can be positioned by identifying at least three of those points. Because the two-dimensional code is fixed to an object or instrument, positioning the code positions the object or instrument to which it is fixed.
  • The second object marker identification characteristic may also be another planar graphic, such as a checkerboard.
  • Using a QR code or checkerboard as an identifier makes positioning objects or instruments more accurate and faster, so that fast-moving instruments can be navigated more precisely.
  • The marker fixed on the surface of the instrument can also be a three-dimensional figure.
  • The marker can be the handle of the instrument, or a structure fixed to the side of the handle.
  • In this embodiment, the second object is a puncture needle used in an operation; the end of the puncture needle is provided with an identification structure on which a two-dimensional code is printed.
  • capturing the second object image of the second object in space specifically includes:
  • the first object is fixed in the space
  • the second object is a moving object
  • an image of the second object in the space is captured.
  • The specific position can be set as the point at which the second object moves into the preset coincidence with the first object; alternatively, according to the needs of the actual operation, both objects can be located when a certain part of the second object reaches a fixed position or completes a prescribed action.
  • the first object is fixed in the space
  • the second object is a moving object
  • the second object is identified according to the identification characteristics of the second object.
  • The specific position is a position at which the second object has a specific positional relationship with a preset associated point, line or surface on the first object; the specific positional relationship includes full or partial coincidence of points, lines or surfaces.
  • the information board is used as the first object
  • the puncture needle is used as the second object.
  • The correction can comprise two processes in which the two objects are corrected relative to each other according to the actual situation: for example, the theoretical position information of the second object is calculated from the spatial position information of the first object and the specific position;
  • the spatial position information of the first object is corrected.
  • the position information of the object in space is calculated.
  • the coordinates of point A are calculated from the captured features of the first object (mainly the pattern features on the panel);
  • the coordinates of point B, the tip of the puncture needle, can be calculated.
  • the two points A and B are coincident at this time, but the coordinates of the two points A and B obtained through step 1 and step 2 respectively are not necessarily the same.
  • the accuracy of the x and y coordinates of point A on the first object is high but the accuracy of the z coordinate is relatively low, while the accuracy of the z coordinate of point B on the second object is relatively high.
  • The X2 and Y2 coordinates of the second object are corrected according to the X1 and Y1 coordinates of the first object, and the Z1 coordinate of the first object is corrected with the Z2 coordinate of the second object; the corresponding positions of the two structures in the database are then adjusted accordingly.
  • the specific mutual calibration method consists of the following two parts.
  • the schematic diagram of the mutual calibration is shown in Figure 4.
  • the first object is the identification plate
  • the second object is the puncture needle:
  • the calibration point has the following two expressions in the needle-marker coordinate system:
  • The above two coordinates are representations of the calibration point in the needle-marker coordinate system. Assuming that expression (a) is more accurate in its z component and expression (b) is more accurate in its x and y components, the mutually calibrated result is
  • C: camera coordinate system
  • T_B←A: the coordinate transformation matrix from coordinate system A to coordinate system B
  • The camera can identify the positioning plate and the puncture needle, so T_C←Q and T_C←N can be obtained. The puncture needle tip is placed on a fixed point p on the identification plate. From the machining model of the identification plate, the coordinates of this fixed point in the identification-plate coordinate system, i.e. p_Q, can be determined. Since the coordinates of this point remain unchanged in the camera coordinate system, the following coordinate relationship is obtained:
  • In the present invention, calibration can also be performed using direction calibration, which specifically includes:
  • the direction vector v_N of the puncture needle in the needle-marker coordinate system is manually determined in advance.
  • the calibration direction has the following two expressions in the needle-marker coordinate system:
  • The above two vectors are representations of the calibration direction in the needle-marker coordinate system. Assuming that expression (a) is more accurate in its w component and expression (b) is more accurate in its u and v components, the mutually calibrated result is
  • the method for calibrating the orientation of the identification plate is shown in Figure 4.
  • The camera identifies the identification plate and the puncture needle, so T_C←Q and R_C←N can be obtained. The tip of the puncture needle is inserted into a fixed hole on the identification plate. From the machining model of the identification plate, the direction vector of the hole in the identification-plate coordinate system, i.e. v_Q, can be determined. Since this direction vector does not change in the camera coordinate system, the following conversion relationship is obtained:
  • the needle tip direction can be calculated in real time according to the following formula:
  • T_C←N is given by the camera after it identifies the needle marker;
  • v_N is the calibration result computed by the mutual calibration or by the plate orientation calibration.
  • The corrected spatial position information of the first object and/or the second object is displayed together with position-related augmented reality information; either the content of the information is related to the position of the object, or the display position of the information is related to the position of the object.
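The mutual calibration described above can be sketched numerically. The following is a minimal pure-Python sketch, not the patent's implementation: all poses, offsets and coordinates are hypothetical stand-ins for what a real system would obtain from the camera's marker detection (T_C←Q for the plate, T_C←N for the needle marker).

```python
def mat_vec(R, v):
    # Apply a 3x3 rotation matrix (list of rows) to a 3-vector.
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def to_camera(R, t, p):
    # Point in a marker frame -> camera frame, for a rigid transform (R, t).
    q = mat_vec(R, p)
    return [q[i] + t[i] for i in range(3)]

def from_camera(R, t, p):
    # Camera frame -> marker frame (inverse rigid transform).
    d = [p[i] - t[i] for i in range(3)]
    return mat_vec(transpose(R), d)

# Hypothetical poses reported by the camera: Q = identification plate,
# N = needle marker (rotation matrix and translation, metres).
R_CQ = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]; t_CQ = [0.10, 0.20, 0.50]
R_CN = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]; t_CN = [0.12, 0.18, 0.45]

# Expression (a): the fixed point p_Q (known from the plate's machining
# model) mapped into the needle-marker frame via the camera frame.
p_Q = [0.03, 0.04, 0.0]
p_N_a = from_camera(R_CN, t_CN, to_camera(R_CQ, t_CQ, p_Q))

# Expression (b): the same point as given directly by the needle's own model.
p_N_b = [0.0, 0.0, 0.20]

# Mutual calibration: keep x, y from (b) and z from (a).
p_N = [p_N_b[0], p_N_b[1], p_N_a[2]]
```

The same component-wise combination applies to the direction calibration, with the rotation parts R_C←Q and R_C←N alone mapping the hole's direction vector v_Q into the needle-marker frame.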

Abstract

The present invention discloses an augmented reality system and method based on a spatial position of a corrected object. The method comprises: capturing a first object image in a space, and recognizing a first object recognition characteristic in the first object image, so as to obtain first object spatial position information; when a second object is at a specific position, capturing a second object image of the second object in the space, and recognizing a second object recognition characteristic in the second object image, so as to obtain second object spatial position information; correcting the second object spatial position information according to the first object spatial position information, and correcting the first object spatial position information according to the second object spatial position information; and according to the first object spatial position information and/or the second object spatial position information, displaying augmented reality information related to the position of the first object and/or the second object. By means of the method, a user can be helped to perform a precise and complete operation.

Description

An augmented reality system, method and computer-readable storage medium based on correcting the position of an object in space

Technical Field
The present invention relates to the technical field of image processing, and in particular to an augmented reality system and method based on correcting the position of an object in space.
Background Art
Augmented reality technology usually captures images of a real scene through a camera. The captured images must be analyzed and processed, and additional information is then added on top of the real scene and displayed to the user, i.e., reality is augmented. Analyzing and processing images of a real scene often includes locating the objects in the scene. Certain applications place extremely high demands on the accuracy of this localization, and the accuracy achievable with the prior art cannot meet them.
For example, when augmented reality technology is applied to surgical navigation, the positional relationships between medical instruments, the patient and the scene must be determined very accurately to ensure that accurate navigation information is provided to the user. Puncture navigation based on augmented reality, for instance, can achieve fast and precise surgical navigation with simple, convenient, easy-to-learn and easy-to-use equipment. Throughout this workflow, two cores of precise navigation, namely accurate spatial localization of surgical instruments based on visible-light patterns and the registration of virtual organs to the real human body, both depend on accurately locating the recognizable patterns on the object to be positioned. Owing to constraints of instrument design, recognizable patterns of different sizes and shapes differ in their attainable localization accuracy, because of the inherent spatial distribution of their own feature points or the characteristics of their production process. A reusable marker can have its recognition accuracy improved in advance by factory calibration before first clinical use, but for single-use markers whose error distributions vary from product to product, there is hardly any such opportunity for prior calibration. How to quickly improve pattern recognition accuracy at the site of use is therefore a major difficulty in the practical application of this technology.
Summary of the Invention
In view of the above defects or deficiencies, the purpose of the present invention is to provide an augmented reality system and method based on correcting the position of an object in space.
To achieve the above purpose, the technical solution of the present invention is as follows:
An augmented reality system based on correcting the position of an object in space comprises a first acquisition unit, a second acquisition unit, a correction unit and a display unit, wherein:
the first acquisition unit is configured to capture an image of a first object in space and identify the first object identification characteristic in the image, obtaining first object spatial position information;
the second acquisition unit is configured to capture, when a second object is at a specific position, an image of the second object in space and identify the second object identification characteristic in the image, obtaining second object spatial position information;
the correction unit comprises a first correction unit and/or a second correction unit, wherein:
the first correction unit is configured to correct the second object spatial position information according to the first object spatial position information and the specific position;
the second correction unit is configured to correct the first object spatial position information according to the second object spatial position information;
the display unit is configured to display augmented reality information related to the position of the first object or the second object.
The first object identification characteristic at least includes a first object body morphological characteristic and/or a first object marker identification characteristic; the body morphological characteristic at least includes the structure, shape or color of the first object's body; the marker identification characteristic at least includes a pattern, graphic or two-dimensional code provided on the first object.
The second object identification characteristic at least includes a second object body morphological characteristic and/or a second object marker identification characteristic; the body morphological characteristic at least includes the structure, shape or color of the second object's body; the marker identification characteristic at least includes a pattern, graphic or two-dimensional code provided on the second object.
The first object spatial position information at least includes the first object's spatial coordinates and/or orientation; the second object spatial position information at least includes the second object's spatial coordinates and/or orientation.
The specific position is a position at which the second object has a specific positional relationship with the first object; the specific positional relationship includes full or partial coincidence between the second object and a preset point, line or surface on the first object.
The first correction unit is specifically configured to: calculate second object theoretical position information according to the first object spatial position information and the specific positional relationship, and correct the second object's spatial position information according to that theoretical position information.
The first correction unit is used to correct the x and y coordinates of the second object.
The second correction unit is specifically configured to: calculate first object theoretical position information according to the second object spatial position information and the specific positional relationship, and correct the first object's spatial position information according to that theoretical position information.
The second correction unit is used to correct the z coordinate of the first object.
The first object is a fixture in the surgical scene; the second object is an operating instrument in the surgical scene.
An augmented reality method based on correcting the position of an object in space comprises:
capturing an image of a first object in space and identifying the first object identification characteristic in the image, obtaining first object spatial position information;
when a second object is at a specific position, capturing an image of the second object in space and identifying the second object identification characteristic in the image, obtaining second object spatial position information;
correcting the second object spatial position information according to the first object spatial position information and the specific position, and/or correcting the first object spatial position information according to the second object spatial position information;
displaying augmented reality information related to the position of the first object or the second object.
The first object identification characteristic at least includes a first object body morphological characteristic and/or a first object marker identification characteristic; the body morphological characteristic at least includes the structure, shape or color of the first object's body; the marker identification characteristic at least includes a pattern, graphic or two-dimensional code provided on the first object.
The second object identification characteristic at least includes a second object body morphological characteristic and/or a second object marker identification characteristic; the body morphological characteristic at least includes the structure, shape or color of the second object's body; the marker identification characteristic at least includes a pattern, graphic or two-dimensional code provided on the second object.
The first object spatial position information at least includes the first object's spatial coordinates and/or orientation; the second object spatial position information at least includes the second object's spatial coordinates and/or orientation.
The specific position is a position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; the specific positional relationship includes full or partial coincidence of points, lines or surfaces.
Correcting the second object spatial position information according to the first object spatial position information and the specific position includes: calculating second object theoretical position information according to the first object spatial position information and the specific positional relationship, and correcting the second object's spatial position information according to that theoretical position information.
Preferably, correcting the spatial position information of the second object includes correcting the x and y coordinates of the second object.
Correcting the first object spatial position information according to the second object spatial position information includes: calculating first object theoretical position information according to the second object spatial position information and the specific positional relationship, and correcting the first object's spatial position information according to that theoretical position information.
Preferably, correcting the spatial position information of the first object includes correcting the z coordinate of the first object.
The first object is a fixture in the surgical scene; the second object is an operating instrument in the surgical scene.
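The method steps just described (capture, recognize, mutually correct, display) can be strung together as a sketch. This is a hypothetical Python illustration, not the patented implementation: `recognize` is a stand-in for real marker detection, and all positions are invented.

```python
def recognize(detections, name):
    # Stand-in for capturing an image and identifying an object's marker:
    # here we simply look up a precomputed (x, y, z) detection.
    return detections[name]

def mutual_correction(p1, p2):
    # At the agreed coincidence point, trust the plate (p1) for x and y
    # and the instrument tip (p2) for z; both corrected positions then
    # describe the same physical point.
    corrected = (p1[0], p1[1], p2[2])
    return corrected, corrected

# Hypothetical camera detections of the two markers (metres).
detections = {"plate": (0.10, 0.20, 0.52), "needle": (0.11, 0.19, 0.50)}

p1 = recognize(detections, "plate")    # first object: fixed identification plate
p2 = recognize(detections, "needle")   # second object: needle tip at the preset point

p1_corrected, p2_corrected = mutual_correction(p1, p2)
# Augmented reality information would now be rendered using the
# corrected positions rather than the raw detections.
```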
The present invention also provides a computer-readable storage medium storing a non-transitory computer-executable program for instructing the computer to execute the method described herein.
Compared with the prior art, the beneficial effects of the present invention are as follows:
The present invention provides an augmented reality system and method based on correcting the position of an object in space. By exploiting, in the same scene, the identification characteristics of objects with different error characteristics, and by relating the corresponding objects spatially, the images and positions of the two objects are mutually corrected, improving the optical positioning accuracy of one or both of them. The method and system can be applied in many settings, such as positioning medical instruments during surgery, teaching and simulation, and games; accurate positioning and position-related augmented reality help the user perform precise and complete operations.
Brief Description of the Drawings
Fig. 1 is a structural block diagram of the augmented reality system of the present invention based on correcting the position of an object in space;
Fig. 2 is an example diagram of an implementation in a specific embodiment of the present invention;
Fig. 3 is a flowchart of the augmented reality method of the present invention based on correcting the position of an object in space;
Fig. 4 is a schematic diagram of mutual calibration based on the identification plate according to the present invention.
Detailed Description of Embodiments
The present invention will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In scenarios requiring precise operation, it is often necessary to know accurately both the actual position of an object and its position in the image. Certain applications demand extremely accurate localization of objects in the scene; in a medical procedure, for example, the positional relationships between the medical instrument, the patient and the scene must be determined very accurately to provide the user with accurate navigation information and to help medical personnel find the correspondence between the operating position and the body. Based on this requirement, the present invention provides an augmented reality method based on correcting the position of an object in space, which can be applied to surgical scenarios, to operating scenarios in simulated teaching, or to positioning in games.
Taking a surgical scenario as an example, embodiments of the present invention provide the user with localization of tissue inside the subject and/or of an instrument located inside the subject. The user is the observer of the whole in-vivo navigation process and also the operator who advances the instrument into the subject's body. The subject can be a person or another animal on which the user needs to operate. The instrument can be any tool that can be advanced into the subject's body, for example a puncture needle, biopsy needle, radiofrequency or microwave ablation needle, ultrasound probe, rigid endoscope, endoscopic oval forceps, electric knife, stapler or other medical instrument. Preferably, the first object is a fixture in the surgical scene and the second object is an operating instrument in the surgical scene.
As shown in Fig. 1, an augmented reality system based on correcting the position of an object in space, applicable to surgical operations, simulated teaching operations or games, specifically comprises a first acquisition unit 1, a second acquisition unit 2, a correction unit 3 and a display unit 4, wherein:
the first acquisition unit 1 is configured to capture an image of a first object in space and identify the first object identification characteristic in the image, obtaining first object spatial position information;
the second acquisition unit 2 is configured to capture, when a second object is at a specific position, an image of the second object in space and identify the second object identification characteristic in the image, obtaining second object spatial position information;
the correction unit 3 comprises a first correction unit 31 and/or a second correction unit 32, wherein:
the first correction unit 31 is configured to correct the second object spatial position information according to the first object spatial position information and the specific position;
the second correction unit 32 is configured to correct the first object spatial position information according to the second object spatial position information;
the display unit 4 is configured to display augmented reality information related to the position of the first object or the second object.
To calibrate the positioning of the second object, the spatial position information of a fixed object is obtained first. This first object spatial position information at least includes the first object's spatial coordinates and/or orientation, allowing the spatial position of the fixed first object to be determined precisely.
In the present invention, the first object identification characteristic at least includes a first object body morphological characteristic and/or a first object marker identification characteristic. The body morphological characteristic at least includes the structure, shape or color of the first object's body, but in practice it is not limited to these and can be any other recognizable characteristic of the object. As an example, an object of fixed shape can be installed; before calibration, the shape of its structure is recognized, and during recognition different display modes can prompt the user as to whether capture and recognition have succeeded. The object is located and recognized, and its accurate spatial position information is obtained.
In addition, in the present invention, the first object marker identification characteristic at least includes a pattern, graphic or two-dimensional code provided on the first object. The pattern, graphic or two-dimensional code can be applied to the first object by printing; recognizable patterns attain different spatial accuracies depending on the regularities of the pattern itself and on its production characteristics. By fully exploiting combinations of recognizable patterns with different characteristics, rapid spatial calibration of navigation instruments can be achieved.
As an example, in the present invention, as shown in Fig. 2, a rectangular information board printed with a two-dimensional code can be used. The device for capturing the image of the first object is an image acquisition device whose acquisition angle is kept consistent with the user's viewing direction. In use, the user can wear the image acquisition device on the body, for example on the head. Optionally, the image acquisition device is a head-mounted optical camera: whatever posture the user adopts, the camera's acquisition angle stays well aligned with the viewing direction. This not only ensures that the augmented reality information is displayed at the angle from which the user is looking, guaranteeing precision, but also avoids interfering with the user's operations during use, significantly improving the user experience. Objects in space are located from the images collected by the camera, yielding each object's position in an xyz coordinate system, where the z coordinate is the coordinate along the depth direction of the camera and the x and y coordinates lie in the directions perpendicular to the z axis. The image of the first object is obtained through the image acquisition device; the first structure information corresponding to the first object is found in the database from this image; the position and orientation of the first object are identified; and the current spatial coordinates of the first object are set, denoted X1, Y1, Z1.
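The plate's position and orientation can be recovered from identified pattern points; as a minimal illustration, three non-collinear points suffice to fix a rigid frame. The sketch below is hypothetical (the three detected points in camera coordinates are invented, and a real system would use many pattern points with a pose-estimation solver).

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def frame_from_points(p0, px, py):
    # Build the plate's pose from three identified marker points:
    # p0 is the origin, px lies on the x axis, py lies in the xy plane.
    x = normalize(sub(px, p0))
    z = normalize(cross(x, sub(py, p0)))
    y = cross(z, x)                 # completes a right-handed frame
    return p0, (x, y, z)            # position (X1, Y1, Z1) and orientation

# Hypothetical detections of three plate points in camera coordinates.
origin, axes = frame_from_points([0.1, 0.2, 0.5],
                                 [0.3, 0.2, 0.5],
                                 [0.1, 0.4, 0.5])
```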
In a concrete surgical scenario, instruments are needed for the operation. In the present invention the second object is a moving instrument, and the second object spatial position information at least includes the second object's spatial coordinates and/or orientation.
The specific position is a position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the specific positional relationship can be that the second object coincides with a preset point, line or surface on the first object, or partially coincides with it within a preset range.
The first correction unit 31 is specifically configured to: calculate second object theoretical position information according to the first object spatial position information and the specific positional relationship, and correct the second object's spatial position information according to that theoretical position information; as an example, the first correction unit 31 corrects the x and y coordinates of the second object.
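The first correction unit's two steps (theoretical position, then x-y correction) can be sketched as follows. This is an illustrative assumption, not the claimed implementation: the plate origin and the offset of the preset contact point are hypothetical values.

```python
def theoretical_tip_position(plate_origin, preset_offset):
    # Where the needle tip should be, given the plate's position and the
    # known offset of the preset contact point on the plate.
    return tuple(plate_origin[i] + preset_offset[i] for i in range(3))

def correct_xy(measured_tip, theoretical_tip):
    # Replace the tip's x, y with the plate-derived values; keep its own z,
    # which the needle marker measures more accurately.
    return (theoretical_tip[0], theoretical_tip[1], measured_tip[2])

theory = theoretical_tip_position((0.10, 0.20, 0.50), (0.03, 0.04, 0.0))
tip = correct_xy((0.14, 0.23, 0.48), theory)
```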
显示单元4用于显示所述第二物体图像、第二物体位置相关联的信息内容、或者与第二物体位置相关联的位置提示信息。The display unit 4 is configured to display the image of the second object, the information content associated with the position of the second object, or the position prompt information associated with the position of the second object.
所述第二物体识别特性至少包括第二物体本体形态特性和/或第二物体标记识别特性;所述第二物体本体形态特性至少包括第二物体本体的结构、 形态或颜色;所述第二物体标记识别特性至少包括第二物体上设置的图案、图形或二维码。所述第二物体空间位置信息至少包括第二物体空间坐标和/或第二物体朝向。The second object identification characteristic at least includes the second object body shape characteristic and/or the second object mark identification characteristic; the second object body shape characteristic at least includes the structure, shape or color of the second object body; the second object body The object marking identification features at least include patterns, graphics or two-dimensional codes provided on the second object. The second object space position information at least includes the second object space coordinates and/or the second object orientation.
A two-dimensional code is a planar figure of alternating black and white regions whose feature points are very easy to identify; by identifying at least three of these points, the two-dimensional code can be located. Because the two-dimensional code is fixed to an object or instrument, the object or instrument to which it is fixed can in turn be located.
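To illustrate the idea that three identified, non-collinear marker points suffice to recover the marker's position and orientation, here is a minimal sketch. The point values and millimetre units are hypothetical, and this simplified frame construction stands in for whatever marker-detection pipeline the system actually uses:

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def unit(v):
    n = math.sqrt(sum(c*c for c in v))
    return [c / n for c in v]

def marker_frame(p0, p1, p2):
    """Recover a marker's pose from three identified, non-collinear points:
    origin at p0, x axis toward p1, z axis normal to the marker plane."""
    x = unit(sub(p1, p0))
    z = unit(cross(x, sub(p2, p0)))
    y = cross(z, x)                 # completes a right-handed frame
    return p0, [x, y, z]            # origin and rotation (rows = axes)

# Three hypothetical corner points of a marker located in camera space (mm).
origin, axes = marker_frame([10.0, 20.0, 300.0],
                            [10.0, 60.0, 300.0],
                            [-30.0, 20.0, 300.0])
print(origin)  # [10.0, 20.0, 300.0]
print(axes)    # [[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```

The recovered axes show the marker rotated 90 degrees about the camera's depth axis, which is exactly the localization that fixing the code to an instrument makes possible.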
Optionally, the second object marker identification characteristic may also be another planar figure such as a checkerboard. Using a two-dimensional code or a checkerboard as the marker makes locating the object or instrument faster and more accurate, so that a fast-moving instrument can be navigated more precisely.
Optionally, the marker fixed on the instrument surface may also be a three-dimensional shape; for example, in the design and production of the instrument, the marker may be the instrument's handle, or a structure fixed to the side of the handle. Although spatial localization with a three-dimensional marker requires more computation time than with a planar figure, it localizes stationary or slowly moving targets with higher accuracy.
Exemplarily, as shown in FIG. 2, the second object in the present invention is a puncture needle used during surgery; the end of the puncture needle is provided with a marker structure on which a two-dimensional code is printed.
Based on the above, the second acquisition unit 2 is specifically configured as follows:
The first object is fixed in space and the second object is a moving object. When the second object moves to the specific position, the second object is identified according to the second object marker identification characteristic, the second object orientation is obtained, and/or current second object spatial coordinates are set for the second object.
The second correction unit 32 is specifically configured to: calculate first object theoretical position information according to the second object spatial position information and the specific position; and correct the spatial position information of the first object according to the first object theoretical position information. Exemplarily, the second correction unit 32 is configured to correct the z coordinate of the first object.
The display unit 4 is configured to display the first object image, information content associated with the position of the first object, or position prompt information associated with the position of the first object.
In the present invention, the specific position is the position at which the second object has a specific positional relationship with a preset point, line, or surface on the first object; for example, the specific positional relationship may be that the second object coincides with a preset point, line, or surface on the first object, or partially coincides with it within a preset range.
In use, the patient's internal organs, lesions, and the portion of an instrument inside the body, none of which are actually visible, can be displayed three-dimensionally at the corresponding positions in the real surgical scene. That is, the invisible internal organs, lesions, and in-body portion of the instrument are aligned with the patient's body and the actual instrument, guiding the user through the surgical procedure.
In this embodiment, both the first object and the second object can be identified, and optical markers with different error characteristics are used in the same scene; through the spatial association between the two corresponding objects, the optical localization accuracy of one or both of them is improved. For markers with different error characteristics, the instruments spatially associated with them are matched by geometric structure to determine how the coordinates of the different identification patterns relate within the same space. By exploiting known trusted values, the spatially identified positions of the different identification patterns are calibrated.
In addition, as shown in FIG. 3, the present invention further provides an augmented reality method based on correcting the position of an object in space, including:
S1. Capture a first object image in space, identify the first object identification characteristic in the first object image, and obtain first object spatial position information.
In order to locate and calibrate the second object, specific spatial position information of a fixed object is first acquired. This spatial position information includes at least first object spatial coordinates and/or a first object orientation, allowing the fixed first object to be specifically located in space.
In the present invention, the first object identification characteristic includes at least a first object body morphological characteristic and/or a first object marker identification characteristic. The first object body morphological characteristic includes at least the structure, form, or color of the first object body, although in practice it is not limited to these and may be any other identifiable characteristic of the object. Exemplarily, an object of fixed shape may be fixed in place; before calibration, the shape of the object's structure is recognized, and during recognition different display modes can prompt the user as to whether the capture and recognition succeeded. The object is located and identified to obtain its accurate spatial position information.
In addition, the first object marker identification characteristic includes at least a pattern, graphic, or two-dimensional code provided on the first object. The pattern, graphic, or two-dimensional code may be applied to the first object by printing. Depending on its own pattern regularity and production characteristics, each identifiable pattern offers a different spatial accuracy. By fully exploiting combinations of identifiable patterns with different characteristics, rapid spatial calibration of the navigation instrument is achieved.
Exemplarily, as shown in FIG. 2, a rectangular information board printed with a two-dimensional code may be used. The device for capturing the first object image is an image acquisition apparatus whose capture angle is kept consistent with the user's viewing direction. In use, the user may wear the image acquisition apparatus on the body, for example on the head. Optionally, the image acquisition apparatus is a head-mounted optical camera. Whatever posture the user adopts, the capture angle of the head-mounted optical camera remains well aligned with the user's viewing direction. This not only ensures that the display angle is the angle at which the user is looking, guaranteeing the accuracy of the instrument display, but also avoids interfering with the user's operations, thereby significantly improving the user experience. The first object image is acquired by the image acquisition apparatus, the first object marker identification characteristic is identified, the first object body morphological characteristic is obtained from it, the first object orientation is determined, and current first object spatial coordinates, denoted X1, Y1, Z1, are set for the first object.
S2. When the second object is at a specific position, capture a second object image of the second object in space, identify the second object identification characteristic in the second object image, and obtain second object spatial position information.
In a specific surgical scenario, instruments are needed to perform the operation. In the present invention, the second object is a moving instrument, and the second object spatial position information includes at least second object spatial coordinates and/or a second object orientation.
The second object identification characteristic includes at least a second object body morphological characteristic and/or a second object marker identification characteristic; the second object body morphological characteristic includes at least the structure, form, or color of the second object body; the second object marker identification characteristic includes at least a pattern, graphic, or two-dimensional code provided on the second object.
A two-dimensional code is a planar figure of alternating black and white regions whose feature points are very easy to identify; by identifying at least three of these points, the two-dimensional code can be located. Because the two-dimensional code is fixed to an object or instrument, the object or instrument to which it is fixed can in turn be located.
Optionally, the second object marker identification characteristic may also be another planar figure such as a checkerboard. Using a two-dimensional code or a checkerboard as the marker makes locating the object or instrument faster and more accurate, so that a fast-moving instrument can be navigated more precisely.
Optionally, the marker fixed on the instrument surface may also be a three-dimensional shape; for example, in the design and production of the instrument, the marker may be the instrument's handle, or a structure fixed to the side of the handle. Although spatial localization with a three-dimensional marker requires more computation time than with a planar figure, it localizes stationary or slowly moving targets with higher accuracy.
Exemplarily, as shown in FIG. 2, the second object in the present invention is a puncture needle used during surgery; the end of the puncture needle is provided with a marker structure on which a two-dimensional code is printed.
When the second object is at the specific position, capturing the second object image of the second object in space specifically includes the following.
The first object is fixed in space and the second object is a moving object; when the second object moves to the specific position, the second object image of the second object in space is captured. In this process, the specific position may be set as the second object moving into a preset coincidence with the first object, or, as the actual operation requires, localization may be performed whenever a given part of the second object reaches a fixed position or a prescribed action is completed.
Specifically: the first object is fixed in space and the second object is a moving object. When the second object moves to the specific position, the second object is identified according to the second object marker identification characteristic, the second object orientation is obtained from the second object body morphological characteristic, and current second object spatial coordinates, denoted X2, Y2, Z2, are set for the second object. The specific position is the position at which the second object has a specific positional relationship with a preset associated point, line, or surface on the first object; the specific positional relationship includes coincidence or partial coincidence of points, lines, or surfaces.
Exemplarily, with the information board as the first object and the puncture needle as the second object, when the user holds the puncture needle so that needle-tip point B coincides with point A on the information board, the positions of the two objects are determined and calibrated against each other.
S3. Correct the second object spatial position information according to the first object spatial position information and the specific position; and/or correct the first object spatial position information according to the second object spatial position information.
S4. Display augmented reality information related to the position of the first object or the second object.
This can comprise two processes in which, according to the actual situation, the two objects are corrected relative to each other: for example, calculating second object theoretical position information according to the first object spatial position information and the specific position;
correcting the spatial position information of the second object according to the second object theoretical position information; and/or calculating first object theoretical position information according to the second object spatial position information and the specific position;
and correcting the spatial position information of the first object according to the first object theoretical position information.
For example, as shown in FIG. 2, the position of the first object in space is calculated from the captured first object image; at this point the coordinates of point A are computed from the captured features of the first object (mainly the pattern features on the board).
When the doctor, holding the second object (the puncture needle), places needle-tip point B at point A of the first object (the identification board), the coordinates of needle-tip point B can be calculated by recognizing the easily identifiable features provided at the end of the puncture needle.
It is known that points A and B coincide at this moment, but the coordinates of A and B obtained in the two preceding steps are not necessarily the same. From the spatial geometric characteristics of the two objects, the x and y coordinates of point A on the first object are highly accurate but its z coordinate is relatively inaccurate, whereas the z coordinate of point B on the second object is relatively accurate. Therefore the X2, Y2 coordinates of the second object are corrected using the X1, Y1 coordinates of the first object, and the Z1 coordinate of the first object is corrected using the Z2 coordinate of the second object. The corresponding positions of the two structures in the database are then adjusted as follows:
X2 = X1; Y2 = Y1; Z1 = Z2
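The axis-wise adjustment above can be sketched as a small function. The numeric values are hypothetical measurements of the coincident point A/B, chosen only to show the board's x, y and the needle's z being kept:

```python
def mutually_correct(p_board, p_needle):
    """Axis-wise mutual correction of two measurements of the same point.

    p_board:  (X1, Y1, Z1) of point A from the board  (x, y accurate, z noisy)
    p_needle: (X2, Y2, Z2) of tip B from the needle   (z accurate, x, y noisy)
    Returns the corrected coordinates, shared by both objects afterwards."""
    x, y = p_board[0], p_board[1]   # trust the board for x and y
    z = p_needle[2]                 # trust the needle for z
    return (x, y, z)

corrected = mutually_correct((10.0, 20.0, 305.0), (10.4, 19.7, 300.0))
print(corrected)  # (10.0, 20.0, 300.0)
```

After this update both records in the database describe the same point, which is exactly the effect of X2 = X1; Y2 = Y1; Z1 = Z2.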
The specific mutual calibration method consists of the following two parts; a schematic of mutual calibration is shown in FIG. 4. In this implementation, the first object is the identification board and the second object is the puncture needle:
(1) The coordinates of the needle-tip point of the puncture needle in the needle-marker coordinate system are determined manually in advance.
(2) A hole is machined in the identification board, parallel to the z axis and perpendicular to the Oxy plane; a point at the bottom of the hole is the calibration point. From the design of the identification board phantom, the coordinates p_Q of the calibration point in the identification board coordinate system are determined. During calibration, the puncture needle is inserted into the hole so that the needle tip lies at the calibration point. Because the coordinates of the calibration point in the camera coordinate system remain unchanged, the coordinate transformation yields the relation T_{C←Q} p_Q = T_{C←N} p_N.
At this point the calibration point has the following two expressions in the needle-tip coordinate system:
(a) The coordinates determined directly by needle-marker recognition and manual point calibration:
p_N^(a) = p_N
(b) The coordinates identified via the identification board and obtained by coordinate transformation:
p_N^(b) = (T_{C←N})^(-1) T_{C←Q} p_Q
Both of the above coordinates are representations of the calibration point in the needle-marker coordinate system. Assuming that expression (a) is more accurate for the z coordinate component and expression (b) is more accurate for the x and y coordinate components, the mutually calibrated result is
p_N = ( x_N^(b), y_N^(b), z_N^(a) )^T
where:
C: camera coordinate system
Q: identification board coordinate system
N: puncture needle coordinate system
T_{B←A}: coordinate transformation matrix from coordinate system A to coordinate system B
p_A: point p expressed in coordinate system A
v_A: vector v expressed in coordinate system A
Identification board point calibration method: once the camera recognizes the positioning board and the puncture needle, T_{C←Q} and T_{C←N} are obtained. The needle tip is placed at a fixed point p on the identification board. The coordinates of this fixed point in the identification board coordinate system, p_Q, can be determined from the machining model of the identification board. Because the coordinates of this point in the camera coordinate system are unchanged, the following coordinate relation holds:
T_{C←Q} p_Q = T_{C←N} p_N
The coordinates of the point in the puncture needle coordinate system are therefore
p_N = (T_{C←N})^(-1) T_{C←Q} p_Q
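The point calibration above can be sketched numerically with homogeneous 4x4 transforms. The specific poses, the board point p_Q, and the manually calibrated value are all hypothetical stand-ins for values the camera and the phantom design would supply:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_needle_frame(T_C_Q, T_C_N, p_Q):
    """p_N = inv(T_{C<-N}) @ T_{C<-Q} @ p_Q, in homogeneous coordinates."""
    p = np.append(p_Q, 1.0)
    return (np.linalg.inv(T_C_N) @ T_C_Q @ p)[:3]

# Hypothetical poses reported by the camera for board (Q) and needle (N).
T_C_Q = transform(np.eye(3), [0.0, 0.0, 300.0])
T_C_N = transform(np.eye(3), [5.0, -2.0, 250.0])
p_Q = np.array([10.0, 20.0, 0.0])            # calibration point on the board

p_N_b = to_needle_frame(T_C_Q, T_C_N, p_Q)   # expression (b)
p_N_a = np.array([4.8, 22.3, 50.0])          # expression (a): manual calibration
# Mutual calibration: x, y from (b); z from (a).
p_N = np.array([p_N_b[0], p_N_b[1], p_N_a[2]])
print(p_N)  # [ 5. 22. 50.]
```

With identity rotations, the board-derived coordinates are simply the translation difference, and the mutual calibration keeps the board's x, y components and the needle's z component, matching the formula above.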
In addition, the present invention may also be calibrated by direction calibration, which specifically includes:
(1) The direction vector v_N of the puncture needle in the needle-marker coordinate system is determined manually in advance.
(2) A hole is machined in the identification board, parallel to the z axis and perpendicular to the Oxy plane; a point at the bottom of the hole is the calibration point, and the direction of the hole is called the calibration direction. From the design of the identification board phantom, the direction vector v_Q of the hole in the identification board coordinate system is determined. During calibration, the identification needle is inserted into the hole so that the needle tip lies at the calibration point. Because the calibration direction remains unchanged in the camera coordinate system, the coordinate transformation yields the following relation:
T_{C←Q} v_Q = T_{C←N} v_N
At this point the calibration direction has two expressions in the needle-tip coordinate system:
(a) The direction vector of the puncture needle identified from the needle marker and determined directly by manual direction calibration:
v_N^(a) = v_N
(b) The direction vector of the puncture needle identified via the identification board and obtained by coordinate transformation:
v_N^(b) = (T_{C←N})^(-1) T_{C←Q} v_Q
Both of the above vectors are representations of the calibration direction in the needle-marker coordinate system. Assuming that expression (a) is more accurate for the w component and expression (b) is more accurate for the u and v components, the mutually calibrated result is
v_N = ( u_N^(b), v_N^(b), w_N^(a) )^T
The identification board direction calibration method is shown in FIG. 4. Once the camera recognizes the identification board and the puncture needle, T_{C←Q} and R_{C←N} are obtained. The needle tip is inserted into a fixed hole on the identification board. The direction vector of the hole in the identification board coordinate system, v_Q, can be determined from the machining model of the identification board. Because this direction vector is unchanged in the camera coordinate system, the following conversion relation holds:
T_{C←Q} v_Q = T_{C←N} v_N
The representation of the direction vector in the puncture needle coordinate system is therefore
v_N = (T_{C←N})^(-1) T_{C←Q} v_Q
After direction calibration, when the camera recognizes the needle marker in real time, the needle-tip direction can be computed in real time as:
v_C = T_{C←N} v_N
where T_{C←N} is given by the camera after it recognizes the needle marker, and v_N is the calibration result computed by mutual calibration or by the positioning board direction calibration.
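The direction calibration and the real-time formula v_C = T_{C←N} v_N can be sketched with rotation matrices (directions transform by the rotation part only, matching the R_{C←N} notation above). The specific rotations, hole axis, and manual value are hypothetical:

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z axis by `deg` degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical rotations reported by the camera for board (Q) and needle (N).
R_C_Q = rot_z(0.0)
R_C_N = rot_z(90.0)
v_Q = np.array([0.0, 0.0, 1.0])          # hole axis in board coordinates

# Expression (b): board-derived direction expressed in needle coordinates.
v_N_b = R_C_N.T @ R_C_Q @ v_Q
v_N_a = np.array([0.02, -0.01, 0.999])   # expression (a): manual calibration
# Mutual calibration: u, v from (b); w from (a).
v_N = np.array([v_N_b[0], v_N_b[1], v_N_a[2]])

# Real-time tip direction in camera coordinates: v_C = R_{C<-N} v_N.
v_C = R_C_N @ v_N
print(np.round(v_C, 3))
```

Once v_N is stored, only R_{C←N} from the live marker detection is needed per frame, which is what makes the real-time direction update cheap.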
After calibration is completed, the calibrated spatial position information of the first object and/or the second object is displayed, together with position-related augmented reality information; either the content of the information is related to the position of the object, or the display position of the information is related to the position of the object.
It is obvious to those skilled in the art that the above specific embodiments are merely preferred solutions of the present invention. Improvements and variations that those skilled in the art may make to certain parts of the present invention still embody the principles of the present invention and still achieve its purpose, and all fall within the scope of protection of the present invention.

Claims (10)

1. An augmented reality system based on correcting the position of an object in space, characterized by comprising: a first acquisition unit, a second acquisition unit, a correction unit, and a display unit, wherein:
    the first acquisition unit is configured to capture a first object image in space and identify a first object identification characteristic in the first object image to obtain first object spatial position information;
    the second acquisition unit is configured to, when a second object is at a specific position, capture a second object image of the second object in space and identify a second object identification characteristic in the second object image to obtain second object spatial position information;
    the correction unit comprises a first correction unit and/or a second correction unit, wherein:
    the first correction unit is configured to correct the second object spatial position information according to the first object spatial position information and the specific position;
    the second correction unit is configured to correct the first object spatial position information according to the second object spatial position information; and
    the display unit is configured to display augmented reality information related to the position of the first object or the second object.
2. The augmented reality system based on correcting the position of an object in space according to claim 1, characterized in that the first object identification characteristic comprises at least a first object body morphological characteristic and/or a first object marker identification characteristic; the first object body morphological characteristic comprises at least the structure, form, or color of the first object body; and the first object marker identification characteristic comprises at least a pattern, graphic, or two-dimensional code provided on the first object;
    the second object identification characteristic comprises at least a second object body morphological characteristic and/or a second object marker identification characteristic; the second object body morphological characteristic comprises at least the structure, form, or color of the second object body; and the second object marker identification characteristic comprises at least a pattern, graphic, or two-dimensional code provided on the second object.
  3. 根据权利要求1所述的基于校正物体在空间中位置的增强现实系统,其特征在于,所述第一物体空间位置信息至少包括第一物体空间坐标和/或第一物体朝向;所述第二物体空间位置信息至少包括第二物体空间坐标和/或第二物体朝向。The augmented reality system based on correcting the position of an object in space according to claim 1, wherein the first object space position information at least includes the first object space coordinates and/or the first object orientation; the second The object space position information includes at least the second object space coordinate and/or the second object orientation.
  4. 根据权利要求1所述的基于校正物体在空间中位置的增强现实系统,其特征在于,所述特定位置为所述第二物体与所述第一物体具有特定位置关系时的位置。The augmented reality system based on correcting the position of an object in space according to claim 1, wherein the specific position is a position when the second object has a specific positional relationship with the first object.
  5. The augmented reality system based on correcting the position of an object in space according to claim 4, wherein the first correction unit is specifically configured to: calculate theoretical position information of the second object according to the spatial position information of the first object and the specific positional relationship; and correct the spatial position information of the second object according to the theoretical position information of the second object;
    The second correction unit is specifically configured to: calculate theoretical position information of the first object according to the spatial position information of the second object and the specific positional relationship; and correct the spatial position information of the first object according to the theoretical position information of the first object.
  6. The augmented reality system based on correcting the position of an object in space according to claim 4, wherein the first correction unit is configured to correct the x and y coordinates of the second object, and the second correction unit is configured to correct the z coordinate of the first object.
  7. The augmented reality system based on correcting the position of an object in space according to any one of claims 1 to 6, wherein the first object is a fixture in a surgical scene, and the second object is an operating instrument in the surgical scene.
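The correction units of claims 5 and 6 exchange information in both directions: each object's camera-derived position is partly overwritten by a theoretical position computed from the other object and their known relationship. A minimal sketch of that exchange, assuming for illustration that the "specific positional relationship" is a fixed translation offset; the offset value, poses and function names below are hypothetical, not taken from the claims:

```python
import numpy as np

# Assumed known positional relationship: translation from the first object
# (fixture) to the second object (instrument), in metres, world axes.
OFFSET = np.array([0.10, 0.00, 0.05])

def correct_second_xy(first_pos, second_pos, offset=OFFSET):
    """Claims 5-6 sketch: derive the second object's theoretical position
    from the first object's measured position, then overwrite only the
    second object's x and y coordinates with the theoretical values."""
    theoretical = first_pos + offset      # theoretical second-object position
    corrected = second_pos.copy()
    corrected[:2] = theoretical[:2]       # correct x, y; keep measured z
    return corrected

def correct_first_z(first_pos, second_pos, offset=OFFSET):
    """Symmetric step: derive the first object's theoretical position from
    the second object's measured position and correct its z coordinate."""
    theoretical = second_pos - offset
    corrected = first_pos.copy()
    corrected[2] = theoretical[2]         # correct z; keep measured x, y
    return corrected

first = np.array([0.00, 0.00, 0.48])      # fixture pose from camera (noisy z)
second = np.array([0.12, 0.01, 0.55])     # instrument pose (noisy x, y)

print(correct_second_xy(first, second))
print(correct_first_z(first, second))
```

This reflects the division of labour in claim 6: the fixture, seen head-on, gives reliable lateral (x, y) information, while the instrument's detection constrains depth (z).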
  8. An augmented reality method based on correcting the position of an object in space, comprising:
    capturing a first object image of a first object in space, and recognizing a first object recognition characteristic in the first object image to obtain spatial position information of the first object;
    when a second object is at a specific position, capturing a second object image of the second object in space, and recognizing a second object recognition characteristic in the second object image to obtain spatial position information of the second object;
    correcting the spatial position information of the second object according to the spatial position information of the first object and the specific position; and/or correcting the spatial position information of the first object according to the spatial position information of the second object;
    displaying augmented reality information related to the position of the first object or the second object.
  9. The augmented reality method based on correcting the position of an object in space according to claim 8, wherein the first object is a fixture in a surgical scene, and the second object is an operating instrument in the surgical scene.
  10. A computer-readable storage medium storing a non-transitory computer-executable program, wherein the computer-executable program is used to instruct a computer to perform the method according to any one of claims 8 to 9.
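The recognition step of claim 8, obtaining spatial position information from a captured object image, can be illustrated with a pinhole-camera back-projection, assuming the recognition characteristic is a marker of known physical size. The focal length, principal point and marker dimensions below are made-up calibration values for the sketch, not parameters from the disclosure:

```python
# Hypothetical calibration values (not from the patent).
F_PX = 800.0            # focal length in pixels
CX, CY = 320.0, 240.0   # principal point (image centre)
MARKER_SIZE_M = 0.04    # physical marker edge length, metres

def pose_from_marker(u: float, v: float, edge_px: float):
    """Back-project a detected marker (centre pixel (u, v) and apparent
    edge length in pixels) to a 3D position in the camera frame, using
    the pinhole model: depth from similar triangles, then x/y by
    scaling the pixel offset from the principal point."""
    z = F_PX * MARKER_SIZE_M / edge_px
    x = (u - CX) * z / F_PX
    y = (v - CY) * z / F_PX
    return (x, y, z)

# A marker seen at pixel (400, 240) with an 80 px edge sits 0.4 m from
# the camera, offset 0.04 m to the right of the optical axis.
print(pose_from_marker(400.0, 240.0, 80.0))
```

Depth estimated this way degrades with distance and viewing angle, which is why the method's correction step cross-checks each object's position against the other once the specific positional relationship holds.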
PCT/CN2022/081469 2021-04-01 2022-03-17 Augmented reality system and method based on spatial position of corrected object, and computer-readable storage medium WO2022206406A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110357372.X 2021-04-01
CN202110357372.XA CN113509264A (en) 2021-04-01 2021-04-01 Augmented reality system, method and computer-readable storage medium based on position correction of object in space

Publications (1)

Publication Number Publication Date
WO2022206406A1 true WO2022206406A1 (en) 2022-10-06

Family

ID=78061350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081469 WO2022206406A1 (en) 2021-04-01 2022-03-17 Augmented reality system and method based on spatial position of corrected object, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN113509264A (en)
WO (1) WO2022206406A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113509264A (en) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Augmented reality system, method and computer-readable storage medium based on position correction of object in space

Citations (6)

Publication number Priority date Publication date Assignee Title
US20090312629A1 (en) * 2008-06-13 2009-12-17 Inneroptic Technology Inc. Correction of relative tracking errors based on a fiducial
US20110082467A1 (en) * 2009-10-02 2011-04-07 Accumis Inc. Surgical tool calibrating device
US20200078133A1 (en) * 2017-05-09 2020-03-12 Brainlab Ag Generation of augmented reality image of a medical device
CN113509264A (en) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Augmented reality system, method and computer-readable storage medium based on position correction of object in space
CN113509263A (en) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Object space calibration positioning method
CN216535498U (en) * 2021-04-01 2022-05-17 上海复拓知达医疗科技有限公司 Positioning device based on object in space

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
DE102007013407B4 (en) * 2007-03-20 2014-12-04 Siemens Aktiengesellschaft Method and device for providing correction information
CN101904770B (en) * 2009-06-05 2012-11-14 复旦大学 Operation guiding system and method based on optical enhancement reality technology
KR101367366B1 (en) * 2012-12-13 2014-02-27 주식회사 사이버메드 Method and apparatus of calibrating a medical instrument used for image guided surgery
ES2912332T3 (en) * 2016-11-23 2022-05-25 Clear Guide Medical Inc Intervention instrument navigation system
EP3593226B1 (en) * 2017-03-10 2022-08-03 Brainlab AG Medical augmented reality navigation
CN110621253A (en) * 2017-03-17 2019-12-27 智能联合外科公司 System and method for navigating an augmented reality display in surgery
TWI678181B (en) * 2018-04-30 2019-12-01 長庚大學 Surgical guidance system
CN110353806B (en) * 2019-06-18 2021-03-12 北京航空航天大学 Augmented reality navigation method and system for minimally invasive total knee replacement surgery
EP3760157A1 (en) * 2019-07-04 2021-01-06 Scopis GmbH Technique for calibrating a registration of an augmented reality device


Also Published As

Publication number Publication date
CN113509264A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
JP6889703B2 (en) Methods and devices for observing 3D surface images of patients during surgery
EP2637593B1 (en) Visualization of anatomical data by augmented reality
EP3254621A1 (en) 3d image special calibrator, surgical localizing system and method
KR102105974B1 (en) Medical imaging system
CN109998678A (en) Augmented reality assisting navigation is used during medicine regulation
WO2022206417A1 (en) Object space calibration positioning method
CN105078573B Neuronavigation spatial registration method based on hand-held scanner
Lathrop et al. Minimally invasive holographic surface scanning for soft-tissue image registration
CN103948432A (en) Algorithm for augmented reality of three-dimensional endoscopic video and ultrasound image during operation
Zeng et al. A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation
CN113940755B (en) Surgical planning and navigation method integrating surgical operation and image
Agustinos et al. Visual servoing of a robotic endoscope holder based on surgical instrument tracking
CN109907801B (en) Locatable ultrasonic guided puncture method
WO2022206406A1 (en) Augmented reality system and method based on spatial position of corrected object, and computer-readable storage medium
Liu et al. On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization
CN116327079A (en) Endoscopic measurement system and tool
CN109833092A (en) Internal navigation system and method
Shao et al. Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation
CN113100941B (en) Image registration method and system based on SS-OCT (scanning and optical coherence tomography) surgical navigation system
Feuerstein et al. Automatic Patient Registration for Port Placement in Minimally Invasive Endoscopic Surgery
CN216535498U (en) Positioning device based on object in space
CN115120350A (en) Computer-readable storage medium, electronic device, position calibration and robot system
Wang et al. Real-time marker-free patient registration and image-based navigation using stereovision for dental surgery
CN112971996A (en) Computer-readable storage medium, electronic device, and surgical robot system
CN201085689Y (en) Calibration mould

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778589

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22778589

Country of ref document: EP

Kind code of ref document: A1