CN115100257A - Sleeve alignment method and device, computer equipment and storage medium - Google Patents

Sleeve alignment method and device, computer equipment and storage medium

Info

Publication number
CN115100257A
CN115100257A
Authority
CN
China
Prior art keywords
target
point
aligned
virtual
alignment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210860308.8A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202210860308.8A
Publication of CN115100257A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a sleeve alignment method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: determining a target motionless point of at least one sleeve to be aligned within a target action region in a target working scene; determining a virtual motionless point of each alignment component according to a target on at least one alignment component in the target working scene; and virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through an augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component. In this way, the accuracy of alignment of the sleeve to be aligned is greatly improved.

Description

Sleeve alignment method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a sleeve alignment method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of instrument technology, in order to perform a repair operation on a tiny abnormal area, a tiny incision is made on the surface of a target object and a sleeve is placed in the incision, so that a mechanical arm can deliver an instrument for the repair operation into the abnormal area inside the target object through the channel formed by the sleeve.
Before the instrument reaches the channel formed by the sleeve, the mechanical arm must be aligned with the sleeve, i.e. the motionless point on the mechanical arm must be aligned with the motionless point on the sleeve. In the conventional art, this alignment is often performed by an operator manually controlling the movement of the mechanical arm.
However, while moving the mechanical arm, the operator does not know the real-time position of the motionless point on the mechanical arm, making it difficult to align the mechanical arm accurately with the sleeve; that is, the accuracy of sleeve alignment is low.
Disclosure of Invention
In view of the above, there is a need to provide a sleeve alignment method, apparatus, computer device, computer-readable storage medium and computer program product capable of improving the accuracy of sleeve alignment.
In a first aspect, the present application provides a sleeve alignment method. The method comprises the following steps:
determining a target motionless point of at least one sleeve to be aligned in a target action region in a target working scene;
determining a virtual motionless point of each alignment component according to a target on at least one alignment component in the target working scene;
and virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through an augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
In a second aspect, the present application further provides a sleeve alignment device. The device comprises:
a first determining module for determining a target motionless point of at least one sleeve to be aligned in a target action region in a target working scene;
a second determining module for determining a virtual motionless point of each alignment component according to a target on at least one alignment component in the target working scene;
and a display module for virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through an augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
determining a target motionless point of at least one sleeve to be aligned in a target action region in a target working scene;
determining a virtual motionless point of each alignment component according to a target on at least one alignment component in the target working scene;
and virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through an augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the following steps:
determining a target motionless point of at least one sleeve to be aligned in a target action region in a target working scene;
determining a virtual motionless point of each alignment component according to a target on at least one alignment component in the target working scene;
and virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through an augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, performs the following steps:
determining a target motionless point of at least one sleeve to be aligned in a target action region in a target working scene;
determining a virtual motionless point of each alignment component according to a target on at least one alignment component in the target working scene;
and virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through an augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
According to the sleeve alignment method, device, computer equipment, storage medium and computer program product, determining the target motionless point of at least one sleeve to be aligned in the target action region in the target working scene accurately positions the target motionless point in the target working scene and faithfully reflects the position information of the sleeve to be aligned. According to the target on at least one alignment component in the target working scene, the virtual motionless point of each alignment component can be accurately positioned in the target working scene, accurately and effectively reflecting the position information of the alignment component. The target motionless point, the virtual motionless point and the target working scene are virtually fused, the target motionless point is displayed on the sleeve to be aligned through an augmented reality device, and the virtual motionless point is displayed on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component. In this way, through the positions of the target motionless point and the virtual motionless point displayed in real time in the augmented reality device, the alignment state of the sleeve to be aligned and the alignment component in the target working scene can be reproduced timely and accurately, greatly improving the alignment accuracy of the sleeve to be aligned.
Drawings
FIG. 1 is a diagram of an application environment of a sleeve alignment method in one embodiment;
FIG. 2 is a schematic flow chart of a sleeve alignment method according to one embodiment;
FIG. 3 is a schematic illustration of a target work scenario in one embodiment;
FIG. 4A is a schematic view of the shape of a target in one embodiment;
FIG. 4B is a schematic view of the shape of a target in another embodiment;
FIG. 5 is a schematic representation of a target coordinate system in one embodiment;
FIG. 6 is a diagram illustrating a virtual motionless point and a target motionless point displayed in an augmented reality device, according to an embodiment;
FIG. 7 is a flowchart illustrating the steps of determining a target motionless point in one embodiment;
FIG. 8 is a flowchart illustrating the steps for determining a target motionless point in another embodiment;
FIG. 9 is a schematic illustration of the alignment sleeve tail intersecting the target action zone in one embodiment;
FIG. 10 is a schematic flow chart diagram illustrating the steps for determining target contour points in one embodiment;
FIG. 11 is a flowchart illustrating the steps for determining a target motionless point in another embodiment;
FIG. 12 is a flowchart illustrating the steps of determining a virtual motionless point in one embodiment;
FIG. 13 is a flowchart illustrating the steps of determining a virtual motionless point in another embodiment;
FIG. 14 is a diagram illustrating a transformation between three coordinate systems, according to one embodiment;
FIG. 15 is a flowchart illustrating the steps of determining a virtual motionless point in another embodiment;
FIG. 16 is a schematic illustration of a target work scenario in another embodiment;
FIG. 17 is a diagram illustrating a pairing process in one embodiment;
FIG. 18 is a schematic flow chart diagram illustrating monitoring pairing operations in one embodiment;
FIG. 19 is a schematic flow chart of a sleeve alignment method according to another embodiment;
FIG. 20 is a schematic illustration of a unit used in implementing a sleeve alignment method in one embodiment;
FIG. 21 is a block diagram of the structure of a sleeve alignment device in one embodiment;
FIG. 22 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The sleeve alignment method provided by the embodiments of the present application can be applied in the application environment shown in fig. 1, wherein the alignment component 102 communicates with the computer device 106 via a network and the augmented reality apparatus 104 communicates with the computer device 106 via a network. The data storage system may store data that the computer device 106 needs to process; it may be integrated on the computer device 106, or placed on the cloud or another networked computer device. The computer device 106 determines a target motionless point of at least one sleeve to be aligned in the target action region in the target working scene, and determines a virtual motionless point for each alignment component 102 based on the target on at least one alignment component 102 in the target working scene. The target motionless point, the virtual motionless point and the target working scene are virtually fused, the target motionless point is displayed on the sleeve to be aligned through the augmented reality apparatus 104, and the virtual motionless point is displayed on the alignment component 102, so as to guide alignment of the sleeve to be aligned with the alignment component 102. The computer device may be a terminal or a server. The terminal can be, but is not limited to, various personal computers, console devices, notebook computers, smart phones, tablet computers and portable wearable devices; the portable wearable device may be a head-mounted device. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In one embodiment, as shown in FIG. 2, a sleeve alignment method is provided, illustrated by way of example as applied to the computer device 106 of FIG. 1, comprising the steps of:
step S202, determining a target motionless point of at least one casing to be aligned in a target action region in a target working scene.
The target working scene is a scene including an alignment component, a sleeve to be aligned, and a target object, as shown in fig. 3. The alignment component may be an end of the mechanical arm aligned with the sleeve to be aligned, or the entire mechanical arm, which is not limited specifically. Wherein the cannula to be aligned may be a trocar for forming a tubular working channel at a target action area of a target object. The augmented reality device is used for observing the motion condition of the alignment component in a virtual scene in real time, and the virtual scene is a scene capable of really restoring a target working scene. Wherein the target action region is the region of the abdominal cavity of the target object.
Specifically, in the target work scene, the punching process is performed on the target acting region of the target object in advance. The computer equipment acquires an environment image obtained by shooting a target working scene by the camera device, or the computer equipment acquires a target image obtained by shooting a sleeve to be aligned in the target working scene by the camera device. The environment image comprises the detailed information of the sleeve to be aligned and the alignment part, and the target image only comprises the detailed information of the sleeve to be aligned. The computer device determines a location in the target work scene at which to implement a target motionless point based on one of the environmental image or the target image. The camera device may be directly mounted on the augmented reality device, or may be independently disposed in a target working scene, which is not limited specifically.
The environmental image comprises the detailed information of the sleeve to be aligned and the alignment part, namely the structural shape and size information of the alignment part, the structural shape and size information of the sleeve to be aligned, the relative position relation between the sleeve to be aligned and the alignment part, the spatial position of the alignment part and the position of the sleeve to be aligned in a target action region. Besides, the environment image also contains the position of other devices, except the alignment component and the sleeve to be aligned, and the structural shape and size information of other devices, etc. in the target working scene.
It should be noted that the target image is obtained by only shooting the region where the casing to be aligned is located in the target working scene, and more accurate detailed information of the casing to be aligned can be ensured, so that the position of the target fixed point in the target working scene can be effectively determined. The environmental image is shot of the whole target working scene, contains the detailed information of the sleeve to be aligned and the alignment part, can simplify the shooting work flow, and simplifies the earlier-stage image acquisition flow.
For example, the operator wears the augmented reality device in which at least one camera device is disposed. At least one environmental image shot by the at least one camera device is shot on the target working scene, or at least one camera device is used for shooting the immobile point of at least one sleeve to be aligned in the target working scene to obtain at least one target image. The augmented reality device transmits the at least one target image or the at least one environment image to the computer device. The computer device determines a position of the target motionless point in the target image based on the position information of the target motionless point in the at least one target image. Or the computer equipment determines the position of the target motionless point in the environment image based on the position information of the target motionless point in the at least one environment image.
Step S204, determining the virtual motionless point of each alignment component according to the target on at least one alignment component in the target working scene.
The target is a picture carrying encoded information, and the encoded information is used for determining the alignment component corresponding to the target. One target shape is shown in fig. 4A: the target may be formed of black and white squares, with a black border serving as the background, the four vertices of the target serving as four corner points, and the encoded information determined by the positions and number of the white squares in the target; the encoded information may be represented in binary. Another target shape is shown in fig. 4B: a preset number of circular reflective stickers, generally between 4 and 9, are attached to a black border. When the camera device shoots this target, the circular reflective stickers appear bright white, and the encoded information is determined from the number and positions of the stickers. The virtual motionless point is a point located at a predetermined height on the alignment component.
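For illustration only, the following Python sketch shows how such a coded target could be detected and its encoded information read out. It treats the patent's target as analogous to an off-the-shelf ArUco-style fiducial; the dictionary choice, function names and the aruco API version noted in the comments are assumptions, not part of the disclosure.
```python
# Illustrative only: the patent's coded target resembles an ArUco-style
# fiducial, so an off-the-shelf detector stands in for the disclosed target.
import cv2

def detect_targets(image_bgr):
    """Detect coded targets; the decoded IDs play the role of the encoded
    information that identifies which alignment component a target marks."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    # OpenCV <= 4.6 API; newer releases wrap this in cv2.aruco.ArucoDetector.
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    return corners, ids
```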
Specifically, the camera device shoots the alignment component to which a target is attached in the target working scene, obtaining an image to be processed. The computer device acquires the image to be processed sent by the camera device and determines the position of at least one virtual motionless point in the target working scene based on the position information of the virtual motionless point in the image to be processed. Alternatively, the computer device acquires the environment image sent by the camera device and determines the position of at least one virtual motionless point in the target working scene based on the position information of the virtual motionless point in the environment image.
For example, the computer device obtains preset position information corresponding to each alignment component, where the preset position information represents the position information of the virtual motionless point corresponding to the alignment component in the target coordinate system. The preset position information is determined before the alignment component is shipped. At least one alignment component in the target working scene is shot by at least one camera device to obtain at least one image to be processed. The computer device acquires the images to be processed sent by the camera device, and determines the virtual motionless point of each alignment component in the target working scene based on the position information of each virtual motionless point in the images to be processed and the preset position information corresponding to each virtual motionless point. Alternatively, the computer device acquires at least one environment image and determines the virtual motionless point of each alignment component in the target working scene based on the position information of each virtual motionless point in the environment image and the corresponding preset position information. The target coordinate system is shown in fig. 5: the plane of its x-axis and y-axis lies on the surface of the alignment component, and its z-axis is perpendicular to that plane.
Step S206, virtually fusing the target motionless point, the virtual motionless point and the target working scene, displaying the target motionless point on the sleeve to be aligned through the augmented reality device, and displaying the virtual motionless point on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
Specifically, the computer device sends the target motionless point, the virtual motionless point and the scene information corresponding to the target working scene to the augmented reality device for virtual fusion, constructing a virtual space in which the position of each point is consistent with its position in the target working scene. The target motionless point corresponding to the sleeve to be aligned is displayed on the sleeve to be aligned through the augmented reality device, and the virtual motionless point corresponding to the alignment component is displayed on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component.
In the virtual space, the virtual motionless point and the target motionless point are displayed as shown in fig. 6. The virtual sleeve in the virtual space is represented by a cylinder, and the virtual motionless point is represented by a sphere connected to the cylinder; the virtual sleeve is a representation of the sleeve in a simulated alignment state, in which the virtual motionless point and the real motionless point coincide. The target motionless point in the virtual space may be displayed as a sphere of a different color. In the virtual space, the virtual motionless point and the target motionless point in the alignment state are displayed as luminous balls, so that luminous balls of different colors can be presented to the operator's eyes through the augmented reality device, achieving fusion with the target working scene in real space.
According to the above sleeve alignment method, determining the target motionless point of at least one sleeve to be aligned in the target action region in the target working scene accurately positions the target motionless point in the target working scene and faithfully reflects the position information of the sleeve to be aligned. According to the target on at least one alignment component in the target working scene, the virtual motionless point of each alignment component can be accurately positioned in the target working scene, accurately and effectively reflecting the position information of the alignment component. The target motionless point, the virtual motionless point and the target working scene are virtually fused, the target motionless point is displayed on the sleeve to be aligned through an augmented reality device, and the virtual motionless point is displayed on the alignment component, so as to guide alignment of the sleeve to be aligned with the alignment component. In this way, through the positions of the target motionless point and the virtual motionless point displayed in real time in the augmented reality device, the alignment state of the sleeve to be aligned and the alignment component in the target working scene can be reproduced timely and accurately, greatly improving the alignment accuracy of the sleeve to be aligned.
In one embodiment, as shown in fig. 7, the determining of the target motionless point of the at least one sleeve to be aligned in the target action region in the target working scene includes:
step S702, acquiring a target image obtained by image acquisition of at least one sleeve to be aligned in the target action region.
The target image displays the sleeve to be aligned and the target action region, and the center of the intersection region of the sleeve to be aligned and the target action region is regarded as a real motionless point.
Specifically, when one camera device is deployed in the augmented reality device, that camera device shoots the target motionless point of at least one sleeve to be aligned in the target action region in the target working scene to obtain at least one target image. When a plurality of camera devices are deployed in the augmented reality device, each camera device shoots one sleeve to be aligned, obtaining the target image corresponding to that sleeve to be aligned.
It should be noted that the positions of the pixels in the target image are all positions in the image coordinate system.
Step S704, determining the position of at least one target motionless point in the world coordinate system through a first coordinate system conversion process, based on the target image.
The first coordinate system conversion process is used for determining the position of the target motionless point in the world coordinate system. It is a process of converting at least one coordinate system, for example converting the image coordinate system into a camera coordinate system and then converting the camera coordinate system into the world coordinate system, or converting the image coordinate system directly into the world coordinate system, which is not specifically limited. The world coordinate system is a coordinate system defined in real three-dimensional space, and may be the coordinate system of the target working scene in real three-dimensional space.
Specifically, the computer device performs segmentation processing on the target image to obtain a two-dimensional mask corresponding to the sleeve to be aligned, where the mask is a grayscale result; for example, pixels on the sleeve to be aligned have a value of 255 and all other pixels have a value of 0. The computer device determines the two-dimensional position information of the target motionless point based on the coordinate information corresponding to the mask, and determines the position of at least one target motionless point in the world coordinate system through the first coordinate system conversion process based on that two-dimensional position information.
For example, the computer device segments the sleeve to be aligned in the target image with a segmentation model, which may be constructed based on a neural network, to obtain a two-dimensional mask. For each sleeve to be aligned, the computer device determines, through the mask, the initial contour points characterizing the contour of that sleeve. Based on the two-dimensional coordinate information of the initial contour points corresponding to each sleeve to be aligned, the position of at least one target motionless point in the camera coordinate system is determined. The computer device then determines the position of the at least one target motionless point in the world coordinate system based on its position in the camera coordinate system.
In this embodiment, the position of the target motionless point can be faithfully reflected by the target image corresponding to the sleeve to be aligned, and the first coordinate system conversion process can then timely and accurately locate the target motionless point in the world coordinate system. Meanwhile, only an image needs to be acquired, with no additional modeling processing, which greatly simplifies the data processing steps.
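As a non-limiting sketch of the mask-to-contour step described above, assuming the segmentation model has already produced the binary mask from the example (sleeve pixels at 255), the initial contour points could be extracted as follows; the function name is hypothetical.
```python
import cv2
import numpy as np

def initial_contour_points(mask):
    """Extract the initial contour points of one sleeve to be aligned from its
    binary segmentation mask (sleeve pixels = 255, all other pixels = 0)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    largest = max(contours, key=cv2.contourArea)  # one mask holds one sleeve
    return largest.reshape(-1, 2)                 # (N, 2) image coordinates
```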
In one embodiment, as shown in fig. 8, the determining at least one target motionless point in the world coordinate system through the first coordinate system conversion process based on the target image includes:
step S802, determining a plurality of initial contour points corresponding to each cannula to be aligned, based on the target image.
Specifically, the computer device segments each cannula to be aligned in the target image through a segmentation model, which may be constructed based on a neural network, to obtain a two-dimensional mask corresponding to each cannula to be aligned. For each casing to be aligned, the computer device determines, through the mask, the respective initial contour points characterizing the contour of the respective casing to be aligned. In which case the tail of the cannula to be aligned intersects the target action zone, as shown in figure 9.
It should be noted that one mask corresponds to one ferrule to be aligned, that is, one mask corresponds to one target fixed point.
Step S804, for each casing to be aligned, screening is performed based on each initial contour point corresponding to the corresponding casing to be aligned, and a target contour point corresponding to the corresponding casing to be aligned is obtained.
In particular, for each ferrule to be aligned, the computer device determines adjacent contour points adjacent to the respective ferrule to be aligned. Based on the positions of neighboring contour points neighboring the corresponding initial contour point, a position change result of the corresponding contour point is determined. And for each sleeve to be aligned, the computer equipment screens a target contour point corresponding to the corresponding sleeve to be aligned from the plurality of initial contour points through a screening condition on the basis of a plurality of position change results corresponding to the corresponding sleeve to be aligned.
The screening condition is a condition for screening an inflection point with a rapid position change, and the inflection point is a target contour point.
It should be noted that the position change result of each initial contour point can visually indicate whether the contour region formed by all the initial contour points matches the contour of the sleeve to be aligned. In addition, target contour points with rapid position change can be accurately screened out through the screening condition, and the shape characteristics of the intersection region can be obtained from the target contour points so screened, so that the position of the target motionless point can be determined.
Step S806, for each sleeve to be aligned, determining the position of the target motionless point corresponding to that sleeve in the camera coordinate system through ellipse fitting, based on the target contour points corresponding to that sleeve.
Specifically, for each sleeve to be aligned, the computer device determines a contour line containing the target contour points, and converts each point of this two-dimensional contour line into three-dimensional space to obtain the contour points to be processed in the camera coordinate system. The computer device then determines the position of the target motionless point corresponding to that sleeve through ellipse fitting, based on the positions of the contour points to be processed.
It should be noted that the sleeve to be aligned and the target action region may each be regarded as an ellipse or a circle, and a contour line passing through the major or minor axis of the ellipse is determined from the target contour points; on this basis, the center of the ellipse in three-dimensional space can be obtained by fitting the contour line, and the position of the target motionless point is thus determined.
For example, for each sleeve to be aligned, the computer device connects the target contour points corresponding to that sleeve to form a contour line, and maps the two-dimensional contour line into three-dimensional space through a binocular stereo matching algorithm, obtaining three-dimensional contour points to be processed in the camera coordinate system. For each sleeve to be aligned, the position of the center of the ellipse is determined through three-dimensional ellipse fitting based on the contour points to be processed, and the computer device takes the position of the center of the ellipse as the position of the target motionless point corresponding to that sleeve.
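For illustration, a minimal numpy sketch of the center-fitting step follows. The disclosure does not specify a solver, so a plane fit by SVD followed by a Kasa least-squares circle fit stands in for the three-dimensional ellipse fitting; all names are hypothetical.
```python
import numpy as np

def fit_center_3d(points):
    """Estimate the center of a ring of 3D contour points: fit a plane by SVD,
    fit a circle in that plane by least squares, map the center back to 3D."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # The two dominant right-singular vectors span the best-fit plane.
    _, _, vt = np.linalg.svd(points - centroid)
    u, v = vt[0], vt[1]
    # 2D coordinates of every point within the fitted plane.
    p2d = np.column_stack(((points - centroid) @ u, (points - centroid) @ v))
    # Kasa fit: solve a*x + b*y + c = x^2 + y^2; the center is (a/2, b/2).
    A = np.column_stack((p2d[:, 0], p2d[:, 1], np.ones(len(p2d))))
    rhs = (p2d ** 2).sum(axis=1)
    (a, b, _c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return centroid + (a / 2.0) * u + (b / 2.0) * v  # center back in 3D
```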
The binocular stereo matching algorithm is used for sensing 3D (3 Dimensions) position information of the feature points.
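The back-projection underlying binocular stereo matching can be sketched as follows for a rectified camera pair; the intrinsics (fx, fy, cx, cy) and the baseline are assumed inputs not fixed by the disclosure.
```python
import numpy as np

def triangulate(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project one rectified-stereo feature point into the camera
    coordinate system from its pixel position (u, v) and disparity."""
    z = fx * baseline / disparity  # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```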
Step S808, for each sleeve to be aligned, determining the position of the corresponding target motionless point in the world coordinate system based on its position in the camera coordinate system.
Specifically, the computer device determines the positional relationship between the camera coordinate system and the world coordinate system and, for each sleeve to be aligned, determines the position of the corresponding target motionless point in the world coordinate system from its position in the camera coordinate system through that positional relationship.
For example, the computer device determines the positional relationship T between the camera coordinate system and the world coordinate system through a SLAM (Simultaneous Localization And Mapping) algorithm, based on the environment image corresponding to the target working scene. For each sleeve to be aligned, the product of the position q_c of the corresponding target motionless point in the camera coordinate system and the positional relationship T is taken as the position q_w of that target motionless point in the world coordinate system, i.e. q_w = T · q_c.
The SLAM algorithm (simultaneous localization and mapping) is a spatial mapping and positioning technique that obtains the positional relationship between the camera and the world coordinate system in real time.
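As an illustrative sketch, assuming the SLAM pose is available as a 4x4 homogeneous matrix (a representation the disclosure does not mandate), the mapping q_w = T · q_c can be applied as:
```python
import numpy as np

def camera_to_world(T_world_cam, q_c):
    """q_w = T * q_c: map a point from the camera coordinate system into the
    world coordinate system with the 4x4 pose estimated by SLAM."""
    q = np.append(np.asarray(q_c, dtype=float), 1.0)  # homogeneous form
    return (T_world_cam @ q)[:3]
```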
In this embodiment, for each sleeve to be aligned, the target contour points of the sleeve on the target action region can be quickly obtained by determining and screening the initial contour points corresponding to the contour of that sleeve, so that the contour of the sleeve on the target action region can be accurately fitted, the position of the target motionless point can be accurately determined, and the positional accuracy of the target motionless point is ensured.
In an embodiment, as shown in fig. 10, for each sleeve to be aligned, the screening based on the initial contour points corresponding to that sleeve to obtain its target contour points includes:
Step S1002, for each initial contour point corresponding to each sleeve to be aligned, determining the two adjacent contour points neighboring that initial contour point.
Wherein, for each initial contour point, the two adjacent contour points corresponding to the respective initial contour point are points located before and after the respective initial contour point, respectively.
Step S1004, for each initial contour point corresponding to each sleeve to be aligned, taking that initial contour point as the vertex of an angle, and determining the angle corresponding to that initial contour point based on it and its two adjacent contour points.
Specifically, for each initial contour point corresponding to each sleeve to be aligned, the computer device takes that initial contour point as the vertex of an angle and forms the angle from its two adjacent contour points. The computer device calculates the angle corresponding to that initial contour point from its position and the positions of its two adjacent contour points.
Step S1006, based on the angle corresponding to each initial contour point, performing angle screening on each initial contour point to obtain a middle contour point corresponding to the corresponding sleeve to be aligned.
Angle screening retains, as middle contour points, those initial contour points whose angles fall within a preset angle range.
Specifically, the computer device obtains the angle corresponding to each initial contour point, and compares each angle with a preset angle range. And the computer equipment takes the initial contour point within the preset angle range as a middle contour point corresponding to the corresponding sleeve to be aligned.
It should be noted that, as shown in fig. 9, the shape of each sleeve to be aligned in the two-dimensional image can be understood as a side view of the sleeve, i.e. it can be regarded as a rectangle. The preset angle range is (a − b, a + b), where a is a reference angle, which may be 90°, and b is an angle error, which may be 0.6%, and the like, without specific limitation. That is, the angles screened through the preset angle range are angles of approximately 90 degrees, i.e. the four vertices of the sleeve to be aligned in the image are obtained.
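A minimal sketch of this vertex-angle screening follows; the tolerance value mirrors the 0.6 figure in the example above but is otherwise illustrative, and the wrap-around indexing assumes a closed contour, which the text does not state explicitly.
```python
import numpy as np

def angle_at(prev_pt, pt, next_pt):
    """Angle in degrees at `pt`, taking `pt` as the vertex and its two
    neighboring contour points as the two sides of the angle."""
    a = np.asarray(prev_pt, dtype=float) - pt
    b = np.asarray(next_pt, dtype=float) - pt
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_screen(contour, ref=90.0, tol=0.6):
    """Keep contour points whose vertex angle lies in (ref - tol, ref + tol)."""
    kept = []
    n = len(contour)
    for i in range(n):  # wrap-around indexing assumes a closed contour
        ang = angle_at(contour[i - 1], contour[i], contour[(i + 1) % n])
        if ref - tol < ang < ref + tol:
            kept.append(contour[i])
    return np.asarray(kept)
```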
Step S1008, for each sleeve to be aligned, determining the distance between any two middle contour points based on the positions of the middle contour points corresponding to that sleeve, performing distance screening on the middle contour points based on these distances, and determining the two middle contour points that are farthest apart and located on the target action region.
Specifically, for each sleeve to be aligned, the distance between any two middle contour points is determined based on the positions of the middle contour points corresponding to that sleeve. The computer device screens out, from the plurality of distances, the two farthest-apart middle contour points located on the target action region. It should be noted that distance screening first yields the four farthest-apart middle contour points; these four points form a rectangle, which can be regarded as the projection of the sleeve to be aligned onto a plane, and this rectangle intersects the target action region at two middle contour points. Both of these middle contour points can be regarded as vertices, as shown by points M and N in fig. 9.
Step S1010, for each sleeve to be aligned, taking the two middle contour points that are farthest apart and located on the target action region as the target contour points corresponding to that sleeve.
It should be noted that the target contour points can be regarded as the two ends of the region where the sleeve to be aligned intersects the target action region in the target image.
It should also be noted that, in determining the target motionless point corresponding to each sleeve to be aligned, only the vertices at which that sleeve intersects the target action region are used. As shown in fig. 9, after the vertices 3 and 4 at which the sleeve intersects the target action region are determined, only the position information of vertex 3 and vertex 4 needs to undergo ellipse fitting to obtain the position, in world coordinates, of the target motionless point corresponding to that sleeve.
In this embodiment, angle screening of the initial contour points' angles quickly reflects the contour information of the sleeve to be aligned in the target image, and distance screening of the resulting middle contour points quickly locates those on the target action region, so that the target contour points can be screened out efficiently.
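For illustration, the distance screening might be sketched as below. The sketch folds the intermediate four-vertex rectangle step into a membership test and keeps only the farthest-apart pair on the target action region; the on_target_region predicate is a hypothetical stand-in for however the implementation decides that a point lies on the target action region.
```python
import numpy as np
from itertools import combinations

def target_contour_points(middle_points, on_target_region):
    """Pick the two farthest-apart middle contour points lying on the target
    action region (points M and N in fig. 9)."""
    candidates = [np.asarray(p, dtype=float) for p in middle_points
                  if on_target_region(p)]
    # Distance screening: keep the pair with the greatest mutual distance.
    return max(combinations(candidates, 2),
               key=lambda pq: np.linalg.norm(pq[0] - pq[1]))
```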
In one embodiment, as shown in fig. 11, the computer device obtains a target image (e.g., an abdominal image) of a target object and segments each sleeve to be aligned (the Trocar in the figure) in the target image through a neural network model, obtaining the two-dimensional mask corresponding to each sleeve. For each sleeve to be aligned, the computer device determines, through the mask, the initial contour points characterizing the contour of that sleeve (the "acquire Trocar edge profile" step in the figure). For each initial contour point corresponding to each sleeve, the computer device determines the two adjacent contour points neighboring that initial contour point, takes that initial contour point as the vertex of an angle, and determines the angle corresponding to it from its two adjacent contour points. Based on the angle corresponding to each initial contour point, the computer device performs angle screening on the initial contour points to obtain the middle contour points corresponding to that sleeve.
For each sleeve to be aligned, the computer device determines the distance between any two middle contour points based on their positions, performs distance screening based on these distances, and determines the two middle contour points that are farthest apart and located on the target action region, taking them as the target contour points corresponding to that sleeve. Based on the target contour points corresponding to each sleeve, the computer device intercepts the contour line where the tail of the sleeve intersects the target action region (e.g., the abdomen).
For each sleeve to be aligned, the computer device maps the corresponding two-dimensional contour line into three-dimensional space through a binocular stereo matching algorithm, obtaining the three-dimensional contour points to be processed in the camera coordinate system, determines the position of the center of the ellipse through three-dimensional ellipse fitting (the 3D ellipse fitting in the figure) based on those contour points, and takes that center as the position of the target motionless point corresponding to that sleeve. The computer device determines the positional relationship between the camera coordinate system and the world coordinate system through the SLAM algorithm based on the environment image corresponding to the target working scene, and, for each sleeve to be aligned, takes the product of the position of the corresponding target motionless point in the camera coordinate system and this positional relationship as the position of that target motionless point in the world coordinate system.
In this embodiment, for each sleeve to be aligned, the target contour points of the sleeve on the target action region can be quickly obtained by determining and screening the initial contour points corresponding to the contour of that sleeve, so that the contour of the sleeve on the target action region can be accurately fitted, the position of the target motionless point accurately determined, and the positional accuracy of the target motionless point ensured.
In one embodiment, as shown in fig. 12, the determining of the virtual motionless point of each alignment component according to the target on at least one alignment component in the target working scene includes:
step S1202, an image to be processed obtained by image-capturing a target in at least one alignment member is acquired.
Specifically, in a case where one image capturing device is disposed in the augmented reality device, the image capturing device captures a target in at least one alignment component in a target working scene to obtain at least one to-be-processed image. In the case that a plurality of image capturing devices are disposed in the augmented reality device, each image capturing device captures a target in one alignment component, and a to-be-processed image corresponding to a to-be-aligned sleeve is obtained.
It should be noted that one target is disposed in one alignment member. Also, there are generally at least two alignment features in the target work scenario. Therefore, the alignment member corresponding to the target can be determined from the plurality of alignment members by the target in the image to be processed.
Step S1204, determining position information of the target in the image to be processed, and determining a position of at least one virtual immobile point in the world coordinate system through second coordinate system conversion processing, where the virtual immobile point is a point located at a preset height of the alignment member.
And the second coordinate system conversion processing is used for determining the position of the virtual motionless point in the world coordinate system. The second coordinate system conversion process is a process of converting at least one coordinate system, for example, converting the image coordinate system into the camera coordinate system, and then converting the camera coordinate system into the world coordinate system, or converting the image coordinate system into the world coordinate system, which is not limited in particular. The world coordinate system represents a coordinate system in a real three-dimensional space, and can be a coordinate system of a target working scene in a real three-dimensional space.
Specifically, the computer device determines the position information of the target based on the image to be processed, and identifies a plurality of corner points of the target based on that position information, obtaining two-dimensional corner position information in the image coordinate system. The computer device then determines the position of at least one virtual motionless point in the world coordinate system through the second coordinate system conversion process, based on the two-dimensional corner position information in the image coordinate system.
For example, the computer device acquires the preset position information corresponding to each alignment component and determines the position information of the target in the image to be processed. As shown in fig. 4A, the first target type may be regarded as a matrix of m rows and n columns formed by rectangular units, and the corner points are the rectangular units in the first row and first column, the m-th row and first column, the first row and n-th column, and the m-th row and n-th column. The computer device identifies the two-dimensional corner position information of the first target type in the image coordinate system. When one camera device is present, the position of the virtual motionless point corresponding to the alignment component in the world coordinate system is determined based on the two-dimensional corner position information and the preset position information corresponding to the target. When at least two camera devices are present, the target conversion relationship between the target coordinate system and the world coordinate system is acquired, three-dimensional coordinate information in the camera coordinate system is determined based on the two-dimensional corner position information, and the position of the virtual motionless point corresponding to the alignment component in the world coordinate system is determined based on that three-dimensional coordinate information, the target conversion relationship and the preset position information corresponding to the target.
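For the single-camera case, the pose of the target in the camera coordinate system can be recovered from its four corner pixels by a PnP solve. The sketch below assumes a square target of known side length and known camera intrinsics, neither of which the disclosure fixes:
```python
import cv2
import numpy as np

def target_pose_from_corners(corner_px, side_len, K, dist):
    """Single-camera case: recover the target's pose in the camera coordinate
    system from its four detected corner pixels via a PnP solve.

    corner_px: (4, 2) corner pixels, ordered consistently with `obj` below
    side_len:  assumed physical side length of the square target
    K, dist:   camera intrinsic matrix and distortion coefficients
    """
    h = side_len / 2.0
    # Corner positions in the target coordinate system (z = 0 plane, fig. 5).
    obj = np.array([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]], dtype=float)
    _ok, rvec, tvec = cv2.solvePnP(obj, np.asarray(corner_px, dtype=float), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)  # camera_T_target, the "first correspondence" of step S1304
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```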
In this embodiment, the position of the virtual motionless point can be faithfully reflected by the image to be processed corresponding to the target on the alignment component, and the second coordinate system conversion process can then timely and accurately locate the virtual motionless point in the world coordinate system. Meanwhile, only an image needs to be acquired, with no additional modeling processing, which greatly simplifies the data processing steps.
In one embodiment, as shown in fig. 13, the determining of the position information of the target in the image to be processed and the determining of the position of the at least one virtual motionless point in the world coordinate system through the second coordinate system conversion process include:
Step S1302, for each alignment component, obtaining the preset position information of the corresponding virtual motionless point in the target coordinate system.
The preset position information represents the position information of the virtual motionless point corresponding to the alignment component in the target coordinate system, and is determined before the alignment component is shipped.
Specifically, for each alignment component, a sleeve is mounted on it to obtain a first alignment component, and a virtual motionless point is pre-marked at a preset height on the sleeve. Marker images are obtained by shooting the plurality of first alignment components with a camera device. For each first alignment component, the computer device intercepts it at the preset height, obtaining the intercepted region corresponding to that first alignment component, and determines the position of the corresponding virtual motionless point in the camera coordinate system based on the positions of the points in the intercepted region. For each first alignment component, the computer device identifies the corner points of the corresponding target in the marker image, obtaining the corner coordinates in the image coordinate system, and determines the positional relationship between the camera coordinate system and the target coordinate system based on those corner coordinates. For each first alignment component, the preset position information of the corresponding virtual motionless point in the target coordinate system is then determined based on the positional relationship between the camera coordinate system and the target coordinate system and the position of that virtual motionless point in the camera coordinate system.
Since the target coordinate system is fixed relative to the alignment component, the preset position information of the virtual motionless point of the alignment component in the target coordinate system can be used as reference position information.
The camera device used here may be the same as or different from the image capturing device. The image capturing device is provided with a depth camera for sensing depth information and spatial xyz information of the whole scene (e.g., a TOF (Time of Flight) camera, a structured light camera, etc.).
For example, for each first alignment component, the computer device obtains the position of the two-dimensional contour point corresponding to the corresponding first alignment component by image algorithm segmentation based on the marker image corresponding to the corresponding first alignment component, and maps the position of the two-dimensional contour point to a three-dimensional space to obtain the position of each contour point in the camera coordinate system. For each first alignment part, the computer device processes the position of each contour point in the camera coordinate system through ellipse fitting to obtain the position of the virtual immobile point corresponding to the corresponding first alignment part in the camera coordinate system. The computer device identifies a plurality of corner points of the target on the respective first alignment feature and determines two-dimensional corner point coordinates corresponding to the respective first alignment feature. For each first alignment part, the computer device maps the two-dimensional corner coordinates corresponding to the corresponding first alignment part to the three-dimensional space to obtain the corner coordinates of the camera coordinate system. For each first alignment part, the computer device determines the position relationship between the camera coordinate system and the target coordinate system through an ICP (Iterative Closest Point) algorithm based on the angular Point coordinates of the camera coordinate system, and determines preset position information of the virtual stationary Point corresponding to the corresponding alignment part in the target coordinate system based on the position relationship between the camera coordinate system and the target coordinate system, the angular Point coordinates of the camera coordinate system corresponding to the corresponding first alignment part, and the position of the virtual stationary Point corresponding to the corresponding first alignment part in the camera coordinate system.
The ICP algorithm repeatedly selects pairs of corresponding points and computes the optimal rigid-body transformation between them, yielding the transformation matrix between the two coordinate systems.
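The transform-update step that ICP repeats can be sketched in closed form (the Kabsch/SVD solution), here under the simplifying assumption that the corresponding point pairs are already selected; full ICP would re-pair the points and repeat.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform taking the (N, 3) points `src`
    onto their correspondences `dst`; returns a 4x4 homogeneous
    matrix. This is the optimal-rigid-body step inside each ICP
    iteration."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst_c - R @ src_c
    return T
```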
Step S1304: for each alignment component, a first correspondence is determined based on the position information of the target corresponding to that component in the image to be processed; the first correspondence characterizes the relationship between the camera coordinate system and the target coordinate system.

Specifically, when there are at least two camera devices, for each alignment component the first correspondence is obtained from the position information of the corresponding target in the image to be processed through a binocular stereo matching algorithm or the ICP algorithm. When there is a single camera device, for each alignment component the three-dimensional position information of the corresponding target is acquired, and the first correspondence is determined from this three-dimensional position information and the target's position information in the image to be processed through a PnP (Perspective-n-Point) algorithm.
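A sketch of the single-camera branch using OpenCV's solvePnP; obj_pts (the target's corner layout in the target coordinate system), img_pts (the corners detected in the image to be processed), and K (the camera intrinsic matrix) are assumed inputs.

```python
import cv2
import numpy as np

def first_correspondence(obj_pts, img_pts, K):
    """Estimate the first correspondence (target coordinate system ->
    camera coordinate system) as a 4x4 homogeneous matrix via PnP."""
    ok, rvec, tvec = cv2.solvePnP(np.float32(obj_pts),
                                  np.float32(img_pts), K, None)
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```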
Step S1306: an environment image of the target working scene is acquired, and a second correspondence between the camera coordinate system and the world coordinate system is determined based on the environment image.

Specifically, the camera device captures an environment image of the target working scene. The computer device processes the environment image with a SLAM (Simultaneous Localization and Mapping) algorithm to determine the second correspondence between the camera coordinate system and the world coordinate system.
Step S1308: for each alignment component, the position of the corresponding virtual motionless point in the world coordinate system is determined based on the first correspondence, the second correspondence, and the preset position information of the virtual motionless point corresponding to that component.

Specifically, for each alignment component, the computer device multiplies the preset position information of the corresponding virtual motionless point by the first and second correspondences to obtain the position of the virtual motionless point in the world coordinate system.
It should be noted that this embodiment mainly involves conversion among three coordinate systems, with the conversion relationship shown in fig. 14: to convert a point from the target coordinate system into the world coordinate system, the point is first converted into the camera coordinate system, and the result is then converted into the world coordinate system. The transformation matrix $T_{target}^{world}$ involved in converting the target coordinate system into the world coordinate system can be obtained according to the following formula:

$$T_{target}^{world} = T_{camera}^{world} \, T_{target}^{camera}$$

where $T_{target}^{camera}$ is the transformation matrix that converts points in the target coordinate system into the camera coordinate system, and $T_{camera}^{world}$ is the transformation matrix that converts points in the camera coordinate system into the world coordinate system.
In this embodiment, the first correspondence between the camera coordinate system and the target coordinate system can be determined from the position information of the target in the image to be processed, and the second correspondence between the camera coordinate system and the world coordinate system from the environment image of the target working scene. The first and second correspondences together realize the conversion from the target coordinate system to the world coordinate system, so the position of each alignment component's virtual motionless point in the world coordinate system can be determined quickly and accurately from the preset position information of the virtual motionless point in the target coordinate system.
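This chain of conversions can be sketched as a product of homogeneous transforms; the matrices T_cam_target and T_world_cam stand for the first and second correspondences and are assumed to be 4x4 matrices, with all names illustrative.

```python
import numpy as np

def virtual_point_to_world(p_target, T_cam_target, T_world_cam):
    """Place a virtual motionless point, given by its preset position
    in the target coordinate system, into the world coordinate system
    by chaining target -> camera -> world."""
    T_world_target = T_world_cam @ T_cam_target
    p_h = np.append(np.asarray(p_target, float), 1.0)  # homogeneous
    return (T_world_target @ p_h)[:3]
```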
In one embodiment, as shown in fig. 15, before the alignment operation, for each first alignment component the computer device segments the marker image with an image algorithm to obtain the component's two-dimensional contour points, and maps their positions into three-dimensional space to obtain the position of each contour point in the camera coordinate system. For each first alignment component, it processes these positions through ellipse fitting to obtain the position of the corresponding virtual motionless point in the camera coordinate system. The computer device identifies the several corner points of the target on each first alignment component, determines their two-dimensional coordinates, and maps these into three-dimensional space to obtain corner coordinates in the camera coordinate system. For each first alignment component, it determines the positional relationship between the camera coordinate system and the target coordinate system through the ICP algorithm based on those corner coordinates, and determines the preset position information of the corresponding virtual motionless point in the target coordinate system from this positional relationship, the corner coordinates in the camera coordinate system, and the position of the virtual motionless point in the camera coordinate system.
During the alignment operation, when there are at least two camera devices, for each alignment component the first correspondence is obtained from the position information of the corresponding target in the image to be processed through a binocular stereo matching algorithm or the ICP algorithm. When there is a single camera device, for each alignment component the three-dimensional position information of the corresponding target is acquired, and the first correspondence is determined from this information and the target's position information in the image to be processed. The camera device captures an environment image of the target working scene, and the computer device processes it with the SLAM algorithm to determine the second correspondence between the camera coordinate system and the world coordinate system. For each alignment component, the computer device multiplies the preset position information of the corresponding virtual motionless point by the first and second correspondences to obtain the position of the virtual motionless point in the world coordinate system.
As in the preceding embodiment, the first and second correspondences together realize the conversion from the target coordinate system to the world coordinate system, so the position of each alignment component's virtual motionless point in the world coordinate system can be determined quickly and accurately from the preset position information of the virtual motionless point in the target coordinate system.
In one embodiment, when there are at least two alignment components and at least two sleeves to be aligned, the method further includes, after displaying the target motionless point on the sleeve to be aligned and the virtual motionless point on the alignment component through the augmented reality device to guide their alignment: determining the number of each target motionless point based on a preset sorting direction; determining the number of each alignment component based on the encoded information carried by each target in the image to be processed; determining the number of each virtual motionless point based on the number of its alignment component; and determining the target motionless point corresponding to each virtual motionless point based on a preset pairing rule, the numbers of the virtual motionless points, and the numbers of the target motionless points.
In the target working scene shown in fig. 16, the sleeve to be aligned is inserted at the target position (e.g., an incision) of the target action region. The preset sorting direction may run from the front end of the target object toward the rear end, or from the rear end toward the front end, without specific limitation. The preset pairing rule specifies, for each target motionless point number, the number of the corresponding virtual motionless point; for example, the rule may pair the virtual motionless point numbered R with the target motionless point numbered R.
Specifically, the computer device numbers the target motionless points in sequence along the preset sorting direction to determine the number of each target motionless point. It identifies the encoded information carried by each target to determine the number of the alignment component on which that target is located, and takes that number as the number of the alignment component's virtual motionless point. It then determines the target motionless point corresponding to each virtual motionless point based on the preset pairing rule, the numbers of the virtual motionless points, and the numbers of the target motionless points.
For example, as shown in fig. 17, for each target the computer device determines the target's type and, from the type, the encoded information corresponding to that target. From this encoded information it determines the number of the alignment component on which the target is located and takes that number as the number of the corresponding virtual motionless point, i.e., it identifies which alignment component each virtual motionless point belongs to. The computer device numbers the target motionless points in sequence along the preset sorting direction, determines the target motionless point corresponding to each virtual motionless point according to the preset pairing rule, and controls the virtual motionless point of each alignment component to move to the position of its corresponding target motionless point, so that each sleeve to be aligned is aligned with its alignment component.
In this embodiment, the number of each alignment component is determined from the encoded information carried by its target, which distinguishes the alignment components from one another and gives the virtual motionless point of each alignment component its own number. Each virtual motionless point is then mapped to its corresponding target motionless point through the preset pairing rule, which prevents a moving virtual motionless point from interfering with the other virtual motionless points and preserves alignment efficiency.
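A minimal sketch of the same-number pairing rule given as the example above; representing the numbered points as dicts from number to position is an illustrative assumption.

```python
def pair_points(virtual_by_number, target_by_number):
    """Pair each virtual motionless point with the target motionless
    point carrying the same number."""
    return {n: (virtual_by_number[n], target_by_number[n])
            for n in virtual_by_number if n in target_by_number}
```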
In one embodiment, the method further includes: determining the motion path of the virtual motionless point based on the position of the target motionless point and the position of the virtual motionless point; and sending the motion path to the augmented reality device so that the virtual motionless point moves along the path until it reaches the position of the target motionless point, completing the alignment operation of the sleeve to be aligned.

During operation, the number of virtual motionless points is less than or equal to the number of target motionless points.
Specifically, when there are at least two target motionless points, for each virtual motionless point the computer device takes the straight line between the virtual motionless point and its corresponding target motionless point as the motion path of that virtual motionless point. When there is one target motionless point and the number of virtual motionless points equals the number of target motionless points, the straight line between the virtual motionless point and the target motionless point is taken directly as the motion path. The computer device sends the motion path to the augmented reality device for display, so that the virtual motionless point is moved along the path until it reaches the position of the target motionless point, completing the alignment operation of the sleeve to be aligned.
In this embodiment, once the target motionless point and the virtual motionless point are determined, the motion path of the virtual motionless point is displayed in the augmented reality device, so the operator can move the virtual motionless point in real time along the displayed path, greatly improving the convenience of the movement.
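The straight-line motion path can be represented by a unit direction plus a remaining distance, which is what the augmented reality device would display and update; a sketch, with positions assumed to be world-coordinate vectors.

```python
import numpy as np

def motion_path(virtual_pos, target_pos):
    """Direction and remaining length of the straight-line path from
    the virtual motionless point to its target motionless point."""
    delta = np.asarray(target_pos, float) - np.asarray(virtual_pos, float)
    dist = float(np.linalg.norm(delta))
    direction = delta / dist if dist > 0 else np.zeros_like(delta)
    return direction, dist
```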
In one embodiment, the method further includes: determining the movement angle of the virtual motionless point when the position of the target motionless point has not shifted; and, when the movement angle is not within the threshold angle, controlling the virtual motionless point to stop moving and sending warning information to the augmented reality device.
The movement angle represents the degree to which the virtual motionless point's motion deviates from the motion path.
Specifically, when the position of the target motionless point has not shifted, the computer device determines the position of the virtual motionless point at each moment in the current time period and, from those positions, calculates the movement angle of the virtual motionless point at the current moment. When the movement angle is not within the threshold angle, it controls the virtual motionless point to stop moving and sends warning information to the augmented reality device.

The current time period ends at the current moment and starts at any moment before it; for example, if the current moment is 12:00, the current time period may be 11:50 to 12:00.
In this embodiment, when the position of the target motionless point has not shifted, the motion of the virtual motionless point is monitored in real time through its movement angle, which effectively prevents the virtual motionless point from moving off the motion path and ensures the accuracy of its movement.
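One plausible reading of the movement-angle check, sketched under the assumption that the angle is measured between the displacement over the current time period and the planned path direction; the 10-degree threshold is hypothetical.

```python
import numpy as np

ANGLE_THRESHOLD_DEG = 10.0   # hypothetical threshold angle

def movement_angle(positions, path_direction):
    """Angle (degrees) between the virtual motionless point's
    displacement over the current time period (first to last sampled
    position) and the planned path direction."""
    motion = np.asarray(positions[-1], float) - np.asarray(positions[0], float)
    denom = np.linalg.norm(motion) * np.linalg.norm(path_direction)
    cos = motion @ np.asarray(path_direction, float) / denom
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# if movement_angle(sampled_positions, direction) > ANGLE_THRESHOLD_DEG:
#     stop the virtual motionless point and warn the AR device
```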
In one embodiment, the method further includes: when the position of the target motionless point shifts, sending an update instruction to the augmented reality device and determining the offset position of the shifted target motionless point; and updating the position of the target motionless point to the offset position.
Specifically, for each target motionless point, once its position is determined, that position is taken as an anchor point. For each alignment component, when the operator controls it to move and the computer device finds that the target motionless point corresponding to its virtual motionless point is no longer at the anchor point, the computer device determines that the position of that target motionless point has shifted. In that case it takes the virtual motionless point corresponding to the shifted target motionless point as the virtual motionless point to be processed, stops its motion, sends an update instruction to the augmented reality device, and determines the offset position of the shifted target motionless point. The computer device then updates the position of the target motionless point to the offset position and updates the motion path of the virtual motionless point to be processed.
In this embodiment, whether the position of the target motionless point has shifted is monitored in real time, so the motion of the virtual motionless point can be anticipated and erroneous movement avoided in time, greatly improving the accuracy of the movement of the virtual motionless point.
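The anchor-based shift test can be sketched as a simple distance comparison; the tolerance eps is an assumed parameter in world units.

```python
import numpy as np

def anchor_offset(target_pos, anchor_pos, eps=1e-3):
    """Return the offset position if the target motionless point has
    drifted from its anchor by more than `eps`, else None; a shift
    triggers the update instruction and a recomputed motion path."""
    drift = np.linalg.norm(np.asarray(target_pos) - np.asarray(anchor_pos))
    return target_pos if drift > eps else None
```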
In one embodiment, as shown in fig. 18, for each target motionless point, after its position is determined, that position is taken as an anchor point. For each virtual motionless point, while the operator controls the corresponding alignment component to move, the position of the corresponding target motionless point should remain unchanged. When that target motionless point is no longer at the anchor point, it is determined that its position has shifted. In that case, the virtual motionless point corresponding to the shifted target motionless point is taken as the virtual motionless point to be processed, its motion is stopped, an update instruction is sent to the augmented reality device, and the offset position of the shifted target motionless point is determined. The computer device updates the position of the target motionless point to the offset position and updates the motion path of the virtual motionless point to be processed. For each virtual motionless point whose target motionless point has not shifted, the computer device determines the position of the virtual motionless point at each moment in the current time period and calculates its movement angle at the current moment from those positions. When the movement angle is not within the threshold angle, the virtual motionless point is controlled to stop moving and warning information (an error warning) is sent to the augmented reality device. When the movement angle is within the threshold angle, nothing is modified, and the operator continues to move the virtual motionless point by operating the corresponding alignment component until it coincides with the target motionless point, whereupon a success indication is issued.
In this embodiment, once the target motionless point and the virtual motionless point are determined, the motion path of the virtual motionless point is displayed in the augmented reality device, so the operator can move the virtual motionless point in real time along the displayed path, greatly improving the convenience of the movement. In addition, whether the position of the target motionless point has shifted and whether the movement angle of the virtual motionless point has deviated are both monitored in real time, so erroneous movement of the virtual motionless point is avoided in time and the accuracy of its movement is greatly improved.
To facilitate a clearer understanding of the technical solution of the present application, a more detailed embodiment is described below; its flow is shown in fig. 19. This embodiment involves interaction between an augmented reality device, in which an image acquisition unit and an AR augmented reality unit are disposed, and a computer device, in which a 3D positioning unit, an image processing unit, a robotic arm adjustment control unit (i.e., an alignment component adjustment control unit), and a pairing monitoring unit are disposed; the units involved are shown in fig. 20.
The 3D positioning unit measures spatial information in front of the line of sight. Specifically, it first maps the center point of the target position, obtained by recognition and segmentation on the image, into the 3D scene to obtain its spatial XYZ coordinates; it then performs spatial mapping and localization through a SLAM algorithm to obtain the conversion relationship between the camera coordinate system and the world coordinate system. Its functions draw on a PnP algorithm (for estimating the relative pose of camera and target), a binocular stereo matching algorithm, a SLAM algorithm, an ICP algorithm, and a depth camera (e.g., a TOF camera or a structured-light camera). The functions of the AR augmented reality unit include: tracking the eyeballs by infrared light; generating a virtual 3D model; projecting the 3D model onto the lens as a virtual image and reflecting the image from the lenses directly onto the user's retina; and setting spatial anchor points that are stored and persisted in the cloud for any other AR device to query and share. The specific steps are as follows:
Step one: the image processing unit in the computer device acquires a target image obtained by image acquisition of at least one sleeve to be aligned in the target action region and, based on the target image, determines the several initial contour points corresponding to each sleeve to be aligned. For each initial contour point of each sleeve to be aligned, the image processing unit determines the two adjacent contour points, takes the initial contour point as the vertex of an angle, and determines the angle formed with the two adjacent contour points; it then screens the initial contour points by angle to obtain the intermediate contour points of the corresponding sleeve to be aligned. For each sleeve to be aligned, the image processing unit determines the distance between any two intermediate contour points from their positions, screens the intermediate contour points by distance, and selects the two intermediate contour points that are farthest apart and located on the target action region as the target contour points of the corresponding sleeve to be aligned. Based on the target contour points, the image processing unit determines the position of each sleeve's target motionless point in the camera coordinate system through ellipse fitting, and the 3D positioning unit then determines the position of each target motionless point in the world coordinate system from its position in the camera coordinate system.
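The angle screening and ellipse fitting in step one might be sketched as follows with OpenCV; the 150-degree angle threshold is hypothetical, and the distance screening that selects the two farthest points on the action region is indicated only by a comment.

```python
import numpy as np
import cv2

def vertex_angle(p, a, b):
    """Angle (degrees) at contour point p, taking p as the vertex and
    its two neighbours a and b as the ray endpoints."""
    v1, v2 = a - p, b - p
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def screen_and_fit(contour, angle_min=150.0):
    """Keep initial contour points whose vertex angle exceeds
    `angle_min` (the intermediate contour points), then fit an ellipse
    and take its centre as the 2D motionless point. Distance screening
    (keeping the two farthest points on the action region) would be
    applied between these two steps."""
    pts = contour.astype(np.float32)
    n = len(pts)
    keep = np.float32([pts[i] for i in range(n)
                       if vertex_angle(pts[i], pts[i - 1],
                                       pts[(i + 1) % n]) > angle_min])
    (cx, cy), _, _ = cv2.fitEllipse(keep)  # needs at least 5 points
    return cx, cy
```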
Step two: the image acquisition unit of the augmented reality device captures an image to be processed of the target on at least one alignment component and sends it to the image processing unit in the computer device. For each alignment component, the image processing unit acquires the preset position information of the corresponding virtual motionless point in the target coordinate system. For each alignment component, the 3D positioning unit determines the first correspondence from the position information of the corresponding target in the image to be processed; the first correspondence characterizes the relationship between the camera coordinate system and the target coordinate system. From the environment image of the target working scene, the 3D positioning unit determines the second correspondence between the camera coordinate system and the world coordinate system. For each alignment component, the image processing unit then determines the position of the corresponding virtual motionless point in the world coordinate system from the first correspondence, the second correspondence, and the preset position information of the virtual motionless point.
Step three: and an image processing unit in the computer equipment sends the target motionless point, the virtual motionless point and the scene information corresponding to the target working scene to an augmented reality device for virtual fusion to construct a virtual space, wherein the position of each point in the virtual space is consistent with the position of each point in the target working scene. And displaying a target motionless point corresponding to the sleeve to be aligned on the sleeve to be aligned through an augmented reality device, and displaying a virtual motionless point corresponding to the alignment component in the alignment component so as to guide the alignment of the sleeve to be aligned and the alignment component.
Step four: and under the condition that at least two alignment parts exist and at least two sleeves to be aligned exist, determining the number of each target fixed point by a pairing monitoring unit in the computer equipment based on the preset sequencing direction. Based on the coded information carried by each target in the image to be processed, the pairing monitoring unit in the computer device determines the number of each alignment part. Based on the number of each alignment feature, a pairing monitoring unit in the computer device determines the number of each virtual stationary point. And determining the target motionless points corresponding to the virtual motionless points respectively based on a preset pairing principle, the serial numbers of the virtual motionless points and the serial numbers of the target motionless points.
Step five: for each target motionless point, after determining the position of the corresponding target motionless point, the pairing monitoring unit in the computer device takes the position of the corresponding target motionless point as an anchor point. For each virtual immobile point, when an operator controls the alignment component corresponding to the corresponding virtual immobile point to move, the position of the target immobile point corresponding to the corresponding virtual immobile point is not shifted, and the position is not modified. When the position of the target fixed point corresponding to the corresponding virtual fixed point is not at the position of the anchor point, the pairing monitoring unit in the computer equipment determines that the position of the target fixed point corresponding to the corresponding virtual fixed point deviates. Under the condition that the position of the target motionless point is deviated, the pairing monitoring unit in the computer equipment takes the virtual motionless point corresponding to the deviated target motionless point as a to-be-processed virtual motionless point, stops the motion of the to-be-processed virtual motionless point, sends an updating instruction to the augmented reality device, and determines the deviated position of the deviated target motionless point. And the pairing monitoring unit in the computer equipment updates the position of the target motionless point to the offset position and updates the motion path corresponding to the virtual motionless point to be processed. For each virtual immobile point, under the condition that the position of the target immobile point corresponding to the corresponding virtual immobile point does not deviate, the pairing monitoring unit in the computer equipment determines the position of the corresponding virtual immobile point at each moment in the current time period, and calculates the moving angle of the corresponding virtual immobile point at the current moment based on the position of the corresponding virtual immobile point at each moment in the current time period. And under the condition that the movement angle is not the threshold angle, the pairing monitoring unit in the computer equipment controls the corresponding virtual motionless point to stop moving and sends warning information (namely error warning) to the augmented reality device. And under the condition that the moving angle is the threshold angle, the corresponding virtual motionless point is not modified, the corresponding virtual motionless point is continuously controlled to move by operating the aligning component corresponding to the corresponding virtual motionless point by an operator until the corresponding virtual motionless point is superposed with the real motionless point, the posture of the mechanical arm (namely, the aligning component) is adjusted by a mechanical arm adjusting and controlling unit in the computer equipment so as to be matched with the casing to be aligned in which the target object is placed, and the target motionless point of the casing to be aligned is kept at the target position. At this point, the pairing operation is complete and a success indication is issued.
In this embodiment, determining the target motionless point of at least one sleeve to be aligned in the target action region achieves accurate positioning of the target motionless point in the target working scene, so the position information of the sleeve to be aligned is faithfully reflected. Determining the virtual motionless point of each alignment component from its target likewise achieves accurate positioning in the target working scene and accurately reflects the position information of the alignment component. The target motionless points, the virtual motionless points, and the target working scene are virtually fused, and the augmented reality device displays the target motionless points on the sleeves to be aligned and the virtual motionless points on the alignment components to guide their alignment. The positions of the target and virtual motionless points displayed in real time in the augmented reality device thus reproduce the alignment state of the sleeves and alignment components timely and accurately, greatly improving the alignment accuracy of the sleeves to be aligned. In addition, deep learning, image segmentation, and positioning algorithms improve the accuracy of identifying the target position and the position of the virtual motionless point. Because the 3D tracking and positioning algorithm computes spatial positions from images captured only by the camera on the augmented reality device, no other external spatial positioning equipment is needed, which greatly reduces cost. Finally, the augmented reality technology makes the virtual motionless point visible, so the operator can complete the sleeve alignment operation quickly and accurately.
It should be understood that although the steps in the flowcharts of the above embodiments are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a cannula alignment device for implementing the cannula alignment method mentioned above. The solution it provides is similar to that described for the method above, so for the specific definitions in the cannula alignment device embodiments below, reference may be made to the definitions of the cannula alignment method above, which are not repeated here.
In one embodiment, as shown in fig. 21, there is provided a cannula alignment device comprising: a first determining module 2102, a second determining module 2104, and a display module 2106, wherein:
a first determining module 2102 configured to determine a target motionless point of at least one cannula to be aligned in a target action region in a target work scenario.
A second determining module 2104 for determining a virtual motionless point of each of the alignment components based on a target of at least one of the alignment components in the target work scenario.
A display module 2106, configured to virtually fuse the target motionless point, the virtual motionless point, and the target work scene, display the target motionless point on the sleeve to be aligned through an augmented reality device, and display the virtual motionless point on the alignment component, so as to guide the sleeve to be aligned into alignment with the alignment component.
In one embodiment, the first determining module 2102 is configured to acquire a target image obtained by image acquisition of at least one sleeve to be aligned in the target action region, and to determine, based on the target image, the position of at least one target motionless point in the world coordinate system through a first coordinate system conversion process.
In one embodiment, the first determining module 2102 is configured to determine a plurality of initial contour points corresponding to each sleeve to be aligned based on the target image; for each sleeve to be aligned, to screen the initial contour points corresponding to it to obtain its target contour points; for each sleeve to be aligned, to determine the position of its target motionless point in the camera coordinate system through ellipse fitting based on its target contour points; and, for each sleeve to be aligned, to determine the position of its target motionless point in the world coordinate system based on that position in the camera coordinate system.
In one embodiment, the first determining module 2102 is configured to determine, for each initial contour point of each sleeve to be aligned, the two adjacent contour points; to take the initial contour point as the vertex of an angle and determine the angle it forms with its two adjacent contour points; to screen the initial contour points by angle to obtain the intermediate contour points of the corresponding sleeve to be aligned; for each sleeve to be aligned, to determine the distance between any two intermediate contour points from their positions, screen the intermediate contour points by distance, and determine the two intermediate contour points that are farthest apart and located on the target action region; and to take those two points as the target contour points of the corresponding sleeve to be aligned.
In one embodiment, the second determining module 2104 is configured to acquire an image to be processed from image acquisition of the target in the at least one alignment feature. And determining the position information of the target in the image to be processed, and determining the position of at least one virtual immobile point in a world coordinate system through second coordinate system conversion processing, wherein the virtual immobile point is a point at the preset height of the alignment part.
In one embodiment, the second determining module 2104 is configured to, for each alignment component, obtain preset position information of the virtual stationary point corresponding to the corresponding alignment component in the target coordinate system. For each alignment part, determining a first correspondence corresponding to the respective alignment part, the first correspondence characterizing a relationship between the camera coordinate system and the target coordinate system, based on positional information of a target in the image to be processed corresponding to the respective alignment part. And acquiring an environment image corresponding to the target working scene environment, and determining a second corresponding relation between the camera coordinate system and the world coordinate system based on the environment image. For each alignment component, determining a position of the virtual motionless point corresponding to the respective alignment component in the world coordinate system based on the first correspondence, the second correspondence corresponding to the respective alignment component, and the preset position information of the virtual motionless point corresponding to the respective alignment component.
In one embodiment, the display module 2106 is further configured to determine the number of each target fixed point based on a preset sorting direction. And determining the number of each alignment part based on the coding information carried by each target in the image to be processed. Based on the number of each alignment feature, the number of each virtual motionless point is determined. And determining the target motionless points corresponding to the virtual motionless points respectively based on a preset pairing principle, the serial numbers of the virtual motionless points and the serial numbers of the target motionless points.
In one embodiment, the display module 2106 is further configured to determine a movement path of the virtual motionless point based on the position of the target motionless point and the position of the virtual motionless point. And sending the motion path to the augmented reality device, so that the virtual immobile point moves according to the motion path until the virtual immobile point moves to the position of the target immobile point, and finishing the alignment operation of the sleeve to be aligned.
In one embodiment, the display module 2106 is further configured to determine the movement angle of the virtual motionless point when the position of the target motionless point has not shifted, and, when the movement angle is not within the threshold angle, to control the virtual motionless point to stop moving and send warning information to the augmented reality device.
In an embodiment, the display module 2106 is further configured to send an update instruction to the augmented reality device when the position of the target motionless point deviates, and determine the deviated position of the deviated target motionless point. And updating the position of the target fixed point to the offset position.
The various modules in the ferrule alignment device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 22. The computer device includes a processor, a memory, an Input/Output interface (I/O for short), and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store casing alignment data. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a cannula alignment method.
Those skilled in the art will appreciate that the architecture shown in fig. 22 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the application. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (13)

1. A method of ferrule alignment, the method comprising:
determining a target motionless point of at least one casing to be aligned in a target action area in a target working scene;
determining a virtual motionless point of each alignment component according to a target in at least one alignment component in the target working scene;
and virtually fusing the target fixed point, the virtual fixed point and the target working scene, displaying the target fixed point on the sleeve to be aligned through an augmented reality device, and displaying the virtual fixed point on the alignment component so as to guide the sleeve to be aligned to the alignment component.
2. The method of claim 1, wherein determining a target motionless point of at least one casing to be aligned in a target action zone in a target work scenario comprises:
acquiring a target image obtained by carrying out image acquisition on at least one sleeve to be aligned in a target action area;
based on the target image, the position of at least one target motionless point in the world coordinate system is determined through a first coordinate system conversion process.
3. The method of claim 2, wherein determining the position of the at least one target stationary point in the world coordinate system based on the target image by a first coordinate system transformation process comprises:
determining a plurality of initial contour points corresponding to the sleeves to be aligned respectively based on the target image;
for each sleeve to be aligned, screening based on each initial contour point corresponding to the corresponding sleeve to be aligned to obtain a target contour point corresponding to the corresponding sleeve to be aligned;
for each sleeve to be aligned, determining the position of a target fixed point corresponding to the corresponding sleeve to be aligned in a camera coordinate system through ellipse fitting processing based on the target contour point corresponding to the corresponding sleeve to be aligned;
for each sleeve to be aligned, determining the position of the target motionless point corresponding to the corresponding sleeve to be aligned in the world coordinate system based on the position of the target motionless point corresponding to the corresponding sleeve to be aligned in the camera coordinate system.
4. The method according to claim 3, wherein, for each casing to be aligned, the screening based on each initial contour point corresponding to the corresponding casing to be aligned to obtain a target contour point corresponding to the corresponding casing to be aligned comprises:
for each initial contour point corresponding to each sleeve to be aligned, determining two adjacent contour points adjacent to the corresponding initial contour point;
for each initial contour point corresponding to each casing to be aligned, taking the corresponding initial contour point as the vertex of an angle, and determining the angle corresponding to the corresponding initial contour point based on the corresponding initial contour point and two adjacent contour points corresponding to the corresponding initial contour point;
based on the angle corresponding to each initial contour point, carrying out angle screening on each initial contour point to obtain a middle contour point corresponding to the corresponding sleeve to be aligned;
for each sleeve to be aligned, determining the distance between any two middle contour points based on the position of each middle contour point corresponding to the corresponding sleeve to be aligned, performing distance screening on each middle contour point based on each distance, and determining the two middle contour points of the corresponding sleeve to be aligned which are farthest apart and located on the target action region;
and for each cannula to be aligned, taking the two middle contour points which are farthest apart and located on the target action region as the target contour points of the corresponding cannula to be aligned.
5. The method of claim 1, wherein determining a virtual motionless point for each of the alignment members based on the target in the at least one alignment member in the target work scenario comprises:
acquiring an image to be processed obtained by image acquisition of a target in at least one alignment part;
and determining the position information of the target in the image to be processed, and determining the position of at least one virtual immobile point in a world coordinate system through second coordinate system conversion processing, wherein the virtual immobile point is a point at the preset height of the alignment part.
6. The method according to claim 5, wherein the determining the position information of the target in the image to be processed, and the determining the position of the at least one virtual stationary point in the world coordinate system through a second coordinate system conversion process, comprises:
for each alignment component, acquiring preset position information of a virtual fixed point corresponding to the corresponding alignment component in a target coordinate system;
for each alignment part, determining a first corresponding relation corresponding to the corresponding alignment part based on the position information of the target corresponding to the corresponding alignment part in the image to be processed, wherein the first corresponding relation represents the relation between a camera coordinate system and a target coordinate system;
acquiring an environment image corresponding to a target working scene environment, and determining a second corresponding relation between a camera coordinate system and a world coordinate system based on the environment image;
for each alignment component, determining the position of the virtual immobile point corresponding to the corresponding alignment component in the world coordinate system based on the first corresponding relationship and the second corresponding relationship corresponding to the corresponding alignment component and the preset position information of the virtual immobile point corresponding to the corresponding alignment component.
7. The method according to claim 1, wherein, in a case where there are at least two alignment parts and at least two sleeves to be aligned, after the target motionless points are displayed on the sleeves to be aligned by the augmented reality device and the virtual motionless points are displayed on the alignment parts to guide alignment of the sleeves to be aligned with the alignment parts, the method further comprises:
determining a number for each target motionless point based on a preset sorting direction;
determining a number for each alignment part based on coding information carried by each target in the image to be processed;
determining a number for each virtual motionless point based on the number of its alignment part;
and determining the target motionless point corresponding to each virtual motionless point based on a preset pairing principle, the numbers of the virtual motionless points, and the numbers of the target motionless points.
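One possible realization of the numbering and pairing in claim 7 is sketched below, assuming the preset sorting direction is the x axis and the preset pairing principle matches equal numbers; both choices, and all names here, are illustrative rather than disclosed by the patent.

```python
def pair_points(target_points, alignment_codes, virtual_points):
    """Illustrative pairing per claim 7.

    target_points   : list of (x, y, z) target motionless points.
    alignment_codes : code read from the target carried by each alignment
                      part, used to number the parts.
    virtual_points  : virtual motionless point of each alignment part,
                      in the same order as alignment_codes.
    """
    # Number target motionless points along a preset sorting direction
    # (here assumed to be ascending x).
    targets_numbered = sorted(target_points, key=lambda p: p[0])
    # Number virtual motionless points by the code of their alignment part.
    order = sorted(range(len(alignment_codes)), key=lambda i: alignment_codes[i])
    virtuals_numbered = [virtual_points[i] for i in order]
    # Preset pairing principle assumed here: equal numbers are paired.
    return list(zip(virtuals_numbered, targets_numbered))
```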
8. The method of claim 1, further comprising:
determining a motion path for the virtual motionless point based on the position of the target motionless point and the position of the virtual motionless point;
and sending the motion path to the augmented reality device, so that the virtual motionless point moves along the motion path until it reaches the position of the target motionless point, completing the alignment operation of the sleeve to be aligned.
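The patent leaves the shape of the motion path open; straight-line interpolation between the virtual motionless point and the target motionless point is one plausible choice, sketched below with an assumed step count.

```python
import numpy as np

def motion_path(virtual_pos, target_pos, steps=50):
    """Linear interpolation as one possible motion path for claim 8."""
    virtual_pos = np.asarray(virtual_pos, dtype=float)
    target_pos = np.asarray(target_pos, dtype=float)
    ts = np.linspace(0.0, 1.0, steps)
    # Waypoints from the virtual motionless point to the target one.
    return [(1.0 - t) * virtual_pos + t * target_pos for t in ts]
```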
9. The method of claim 8, further comprising:
determining a movement angle of the virtual motionless point in a case where the position of the target motionless point has not deviated;
and in a case where the movement angle is outside a threshold angle, controlling the virtual motionless point to stop moving and sending warning information to the augmented reality device.
10. The method of claim 9, further comprising:
in a case where the position of the target motionless point has deviated, sending an update instruction to the augmented reality device and determining the offset position of the deviated target motionless point;
and updating the position of the target motionless point to the offset position.
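Claims 9 and 10 together describe a monitoring step: guard the movement angle while the target motionless point is steady, and push an updated position when it drifts. A hypothetical single step is sketched below, where `ar_device` and its methods stand in for the augmented reality device interface, which the patent does not specify.

```python
def monitoring_step(target_moved, offset_pos, movement_angle,
                    angle_threshold_deg, ar_device):
    """One monitoring step combining claims 9 and 10 (illustrative only)."""
    if target_moved:
        # Claim 10: send an update instruction and adopt the offset
        # position as the new position of the target motionless point.
        ar_device.send_update(offset_pos)   # hypothetical method
        return offset_pos
    # Claim 9: the target motionless point has not deviated, so check the
    # movement angle of the virtual motionless point against the threshold.
    if abs(movement_angle) > angle_threshold_deg:
        ar_device.stop_motion()             # hypothetical method
        ar_device.send_warning("movement angle outside threshold")
    return None
```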
11. A sleeve alignment device, comprising:
a first determination module, configured to determine a target motionless point of at least one sleeve to be aligned in a target action region in a target working scene;
a second determination module, configured to determine a virtual motionless point of each alignment part according to a target in at least one alignment part in the target working scene;
and a display module, configured to virtually fuse the target motionless point, the virtual motionless point, and the target working scene, display the target motionless point on the sleeve to be aligned through an augmented reality device, and display the virtual motionless point on the alignment part, so as to guide alignment of the sleeve to be aligned with the alignment part.
12. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202210860308.8A 2022-07-21 2022-07-21 Sleeve alignment method and device, computer equipment and storage medium Pending CN115100257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210860308.8A CN115100257A (en) 2022-07-21 2022-07-21 Sleeve alignment method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210860308.8A CN115100257A (en) 2022-07-21 2022-07-21 Sleeve alignment method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115100257A true CN115100257A (en) 2022-09-23

Family

ID=83298621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210860308.8A Pending CN115100257A (en) 2022-07-21 2022-07-21 Sleeve alignment method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115100257A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690374A (en) * 2023-01-03 2023-02-03 江西格如灵科技有限公司 Interaction method, device and equipment based on model edge ray detection


Similar Documents

Publication Publication Date Title
US11928838B2 (en) Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display
KR101761751B1 (en) Hmd calibration with direct geometric modeling
US10896497B2 (en) Inconsistency detecting system, mixed-reality system, program, and inconsistency detecting method
US20060050087A1 (en) Image compositing method and apparatus
US11455746B2 (en) System and methods for extrinsic calibration of cameras and diffractive optical elements
US9759918B2 (en) 3D mapping with flexible camera rig
JP4234343B2 (en) Dynamic visual alignment of 3D objects using graphical models
CN108304075B (en) Method and device for performing man-machine interaction on augmented reality device
WO2020033822A1 (en) Capture and adaptive data generation for training for machine vision
JP5093053B2 (en) Electronic camera
CN110377148B (en) Computer readable medium, method of training object detection algorithm, and training apparatus
CN112652016A (en) Point cloud prediction model generation method, pose estimation method and device
US11490062B2 (en) Information processing apparatus, information processing method, and storage medium
US11108964B2 (en) Information processing apparatus presenting information, information processing method, and storage medium
US11132590B2 (en) Augmented camera for improved spatial localization and spatial orientation determination
US11315313B2 (en) Methods, devices and computer program products for generating 3D models
EP3622481B1 (en) Method and system for calibrating a velocimetry system
EP3330928A1 (en) Image generation device, image generation system, and image generation method
US11263818B2 (en) Augmented reality system using visual object recognition and stored geometry to create and render virtual objects
CN115100257A (en) Sleeve alignment method and device, computer equipment and storage medium
US11758100B2 (en) Portable projection mapping device and projection mapping system
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
JP2022128087A (en) Measurement system and measurement program
Bownes Using motion capture and augmented reality to test AAR with boom occlusion
WO2023054661A1 (en) Gaze position analysis system and gaze position analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination