CN112587235A - Binocular navigator hyper-threading optimization method - Google Patents


Info

Publication number
CN112587235A
Authority
CN
China
Prior art keywords
thread
image
images
time
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011438392.1A
Other languages
Chinese (zh)
Inventor
侯礼春
芦颖僖
芦嘉毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Linghua Microelectronics Technology Co ltd
Original Assignee
Nanjing Linghua Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Linghua Microelectronics Technology Co ltd filed Critical Nanjing Linghua Microelectronics Technology Co ltd
Priority to CN202011438392.1A priority Critical patent/CN112587235A/en
Publication of CN112587235A publication Critical patent/CN112587235A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition

Abstract

The invention discloses a binocular navigator hyper-threading optimization method. It belongs to the field of surgical navigation and improves the processing efficiency of a graphic workstation by means of multithreading. The invention starts multiple threads simultaneously to handle tasks that share no data access: separate threads process the surgical instrument images captured by the left and right cameras and extract the two-dimensional coordinates of the surgical instrument. Online experiments show that, compared with non-multithreaded processing, the multithreading technique improves time efficiency by about 38%; the reason is that the CPU's resources are more fully utilised. The CPU resource occupancy during the experiments was also analysed: the online results show a CPU occupancy of 42% under multithreading versus 23% without multithreading.

Description

Binocular navigator hyper-threading optimization method
Technical Field
The invention belongs to the field of surgical navigation, and particularly relates to a binocular navigator hyper-threading optimization method.
Background
Surgical navigation requires a computer workstation to process the acquired lesion image information and reconstruct a three-dimensional model of the patient; on this basis the surgeon operates the relevant software to perform preoperative planning and simulate the surgical procedure. During the actual operation, the workstation displays the surgical instruments and the lesion in real time on the virtual anatomical structure reconstructed before surgery.
The surgical navigator performs positioning based on binocular stereo vision and can also be called a stereo positioning system. Its workflow is shown in Fig. 1 and the composition and principle of the positioning system in Fig. 2; it comprises a binocular camera, surgical instruments and a graphic workstation. In use, the two cameras feed images of the surgical instrument bearing marker points to the graphic workstation; the optical centre of each camera is joined to the corresponding spot centre, and the intersection of the two rays is the spatial position of the marker point. The position of the surgical instrument is then determined from the spatial coordinates of each marker point on it; this is the basic process of locating and tracking the surgical instrument in real time.
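The ray-intersection principle just described can be illustrated with an idealized rectified stereo model. This is a sketch under stated assumptions, not the patent's method: the patent intersects rays from calibrated cameras, whereas here both cameras are assumed rectified with a shared focal length f (pixels), principal point (cx, cy) and baseline B, so triangulation reduces to the classic disparity relation.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical rectified-stereo parameters, not the patent's calibration.
struct Point3 { double x, y, z; };

// Triangulate a marker's 3D position from its pixel coordinates in the
// left image (uL, v) and right image (uR, v). For rectified cameras the
// ray intersection reduces to Z = f * B / disparity.
Point3 triangulate(double uL, double uR, double v,
                   double f, double B, double cx, double cy) {
    double d = uL - uR;            // disparity in pixels
    double Z = f * B / d;          // depth along the optical axis
    double X = (uL - cx) * Z / f;  // lateral offset, left-camera frame
    double Y = (v  - cy) * Z / f;  // vertical offset
    return {X, Y, Z};
}
```

With f = 800 px, B = 100 mm and a principal point at (640, 480), a marker seen at (680, 460) in the left image and (600, 460) in the right image triangulates to a depth of 1000 mm.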
To track the surgical instruments in real time, the graphic workstation must be able to process the images from the binocular camera in real time. In recent years, as camera quality has improved, the resolution of captured pictures and live video has risen steadily, which places real demands on the processing efficiency of the graphic workstation. How to improve the computing power of the graphic workstation is therefore a problem worth studying in the field of surgical navigation. Many scholars and researchers address this problem by upgrading hardware, but hardware improvements are limited, and their considerable cost makes them unsuitable for wide deployment.
Disclosure of Invention
The invention provides a binocular navigator hyper-threading optimization method, which improves the processing efficiency of a graphic workstation by using a multithreading method.
In order to achieve the purpose, the invention adopts the following technical scheme:
a binocular navigator hyper-threading optimization method comprises the following steps:
(1) inputting the images of the surgical instruments with the mark points shot by the two cameras of the binocular camera into a graphic workstation;
(2) acquiring the captured image matrix using OpenCV (the open-source computer vision library), and separating the colour channels according to the designed RGB values of the coloured ball marker points to obtain a separated image matrix;
(3) using the findContours and drawContours commands in OpenCV to extract the outline of each small ball marker point, processing the residual image noise, and applying dilation and other morphological operations to make the extracted marker outlines more accurate;
(4) after the processed image containing only the marker points is obtained, calculating the two-dimensional coordinates of the centre of each spherical marker point in the image coordinate system;
(5) solving a three-dimensional coordinate according to the two-dimensional coordinate obtained in the step (4);
in the above steps, obtaining the two-dimensional sphere-centre coordinates of the marker points in steps (1) to (4) is a relatively independent process; no shared data access occurs between the two frames while the marker coordinates are extracted from the two camera images;
in steps (2) to (3), multiple threads are created with the pthread library; different threads process the images from the two cameras simultaneously, and the computer then releases each thread. The total running time is the maximum of the running times of all threads. The procedure is specifically as follows:
firstly, n frames of surgical instrument images to be processed are input, where i denotes the index of the frame being processed (i ≤ n). As long as the images are not fully processed, i.e. i ≤ n, the program enters the multithreaded statement block. In the multithreaded block, T_1 first records the time, i.e. the system time at which multithreaded processing starts; then two thread IDs, p_1 and p_2, are applied for, representing the two threads to be created. Next, pthread_create is called to create each thread, giving it the applied thread ID and assigning it the function to be called (FindCircle, the processing algorithm corresponding to steps (2) to (3)), where n_i denotes the i-th frame to be processed. Once created, the threads run concurrently; pthread_join is then used to wait for each thread to finish. At the end of the program, the current system time minus the system time recorded at the start of the program (i.e. T_0) is assigned to t, the time required for the multiple threads to process two frames of surgical instrument images simultaneously in a single loop. When all n frames have been processed, t represents the total running time;
in step (5), the three-dimensional coordinates are solved from the two-dimensional coordinates using a three-dimensional coordinate reconstruction method.
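The per-loop threading scheme described above can be sketched as follows. FindCircle here is a minimal stand-in stub for the patent's marker-extraction algorithm (steps (2)-(3)); the Frame type and the timing via std::chrono are assumptions made so the control flow is runnable.

```cpp
#include <pthread.h>
#include <cassert>
#include <chrono>

// Stand-in for the patent's FindCircle processing: it only tags the
// frame as processed so the threading flow itself can be exercised.
struct Frame { int index; bool processed; };

static void* FindCircle(void* arg) {
    Frame* f = static_cast<Frame*>(arg);
    f->processed = true;  // real code would split channels, find contours, etc.
    return nullptr;
}

// Process n frame pairs. Each loop iteration creates one thread per
// camera image (pthread_create both creates and starts the thread),
// then waits for both with pthread_join. Returns total elapsed seconds t.
double processFrames(Frame* left, Frame* right, int n) {
    auto t0 = std::chrono::steady_clock::now();  // start-of-processing time
    for (int i = 0; i < n; ++i) {
        pthread_t p1, p2;  // the two thread IDs applied for in each loop
        pthread_create(&p1, nullptr, FindCircle, &left[i]);
        pthread_create(&p2, nullptr, FindCircle, &right[i]);
        pthread_join(p1, nullptr);  // wait for both threads to finish
        pthread_join(p2, nullptr);
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
```

Note that in the pthread API it is pthread_create that starts a thread running; pthread_join only blocks until the thread terminates.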
Advantageous effects: the invention provides a binocular navigator hyper-threading optimization method. On a graphic workstation with a multi-core processor, multiple threads process the images of surgical instruments captured by the left and right cameras separately. From the moment the left and right camera images are obtained until the two-dimensional coordinates of the marker points in both images are extracted, the two signal paths are processed concurrently, so there is no frequent copying, reading or writing of shared data. Online experiments show that, compared with non-multithreaded processing, the multithreading technique improves time efficiency by about 38%; the reason is that the CPU's resources are more fully utilised. The CPU resource occupancy during the experiments was also analysed: the online results show a CPU occupancy of 42% under multithreading versus 23% without multithreading.
Drawings
FIG. 1 is a flow chart of a surgical navigator in the background art;
FIG. 2 is a schematic diagram of a positioning system of the prior art;
FIG. 3 is an image of a laboratory surgical instrument in an embodiment of the present invention;
FIG. 4 is an image with only landmark points after processing in an embodiment of the invention;
FIG. 5 is a flow chart of multithreading according to an embodiment of the present invention;
FIG. 6 is a comparison of single-threaded (left) and multi-threaded (right) data processing flow diagrams in an embodiment of the invention;
FIG. 7 is an experimental scenario diagram in an embodiment of the present invention;
FIG. 8 is a graph of multi-threaded versus non-multi-threaded runtime comparison in an embodiment of the present invention (the resolution of the image is 1280 × 960);
FIG. 9 is a graph of multithreading time improvement efficiency (1280 × 960 resolution of image) according to an embodiment of the present invention;
FIG. 10 is a graph of multi-threaded versus non-multi-threaded runtime comparison (image resolution is 640 × 480) in an embodiment of the present invention;
FIG. 11 is a graph of multithreading time improvement efficiency (640 × 480 resolution of image) according to an embodiment of the present invention;
FIG. 12 is a graph of multi-threaded versus non-multi-threaded online processing runtime comparison in an embodiment of the present invention (image resolution is 640 × 480);
FIG. 13 is a graph of efficiency improvement in online multithreading (640X 480 resolution of image) according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the following figures and specific examples:
a binocular navigator hyper-threading optimization method comprises the following steps:
(1) inputting the images of the surgical instruments with the mark points shot by the two cameras of the binocular camera into a graphic workstation;
(2) acquiring the captured image matrix using the OpenCV library, and separating the colour channels of the designed RGB values of the coloured ball marker points shown in Fig. 3 to obtain a separated image matrix;
(3) using the findContours and drawContours commands in OpenCV to extract the outline of each small ball marker point shown in Fig. 4, processing the residual image noise, and applying dilation and other morphological operations to make the extracted marker outlines more accurate;
(4) after the processed image only with the mark points is obtained, calculating two-dimensional coordinates of each spherical center mark point in an image coordinate system;
(5) solving a three-dimensional coordinate according to the two-dimensional coordinate obtained in the step (4);
in the above steps, the process of obtaining the two-dimensional coordinates of the center of sphere of the mark point in the steps (1) to (4) is a relatively independent process, and no data access exists between two frames of images in the process of obtaining the two-dimensional coordinates of the mark point from two camera images;
as shown in Fig. 6, in steps (2)-(3) the pthread library is used to create multiple threads; different threads process the images from the two cameras simultaneously, and the computer releases each thread. The total running time is the maximum of the running times of all threads. As shown in Fig. 5, the procedure specifically comprises the following steps:
firstly, n frames of surgical instrument images to be processed are input, where i denotes the index of the frame being processed (i ≤ n). As long as the images are not fully processed, i.e. i ≤ n, the program enters the multithreaded statement block. In the multithreaded block, T_1 first records the time, i.e. the system time at which multithreaded processing starts; then two thread IDs, p_1 and p_2, are applied for, representing the two threads to be created. Next, pthread_create is called to create each thread, giving it the applied thread ID and assigning it the function to be called (FindCircle, the processing algorithm corresponding to steps (2) to (3)), where n_i denotes the i-th frame to be processed. Once created, the threads run concurrently; pthread_join is then used to wait for each thread to finish. At the end of the program, the current system time minus the system time recorded at the start of the program (i.e. T_0) is assigned to t, the time required for the multiple threads to process two frames of surgical instrument images simultaneously in a single loop. When all n frames have been processed, t represents the total running time;
in step (5), the three-dimensional coordinates are solved from the two-dimensional coordinates using a three-dimensional coordinate reconstruction method.
An offline experiment and an online experiment were carried out:
in the off-line experiment, the hardware configuration environment of the image workstation is as follows:
176700K, 4 core 8 threads. 32G memory, 256G solid state. The software environment is a VS2010, win10 system. The image captured by the binocular camera had a resolution of 1280 × 960 (resolution adjustable), a frame rate of 24Hz, and a color camera. In the off-line processing experiment, the images of the surgical instruments shot by the binocular vision camera are input into an image workstation for processing, the time for multi-thread processing and non-multi-thread processing is recorded every 20 frames of images, in order to make the experimental result more convincing, the 20 frames of images at 50 different moments are selected in the experiment, the time for processing the multi-thread processing and the single-thread processing of the 20 frames of images for 50 times is recorded, then the average time is taken as the time for processing the 20 frames of images, the average of 50 groups of processing time at different moments is taken as the time consumed by processing a specific number of frames by multi-thread processing and single-thread processing from 20 frames to 200 frames, meanwhile, the time efficiency improvement ratio of using the multi-thread technology and the average utilization ratio of CPU resources in the multi-thread and non-thread environments are calculated, and finally the influence of the image resolution on the. And in the online experiment, multiple threads are used for processing the shot surgical instrument images in real time, and the result is compared with the result in a non-multiple thread mode.
Off-line experiments and results analysis
Fig. 7 shows the binocular camera tracking a surgical instrument. In the experiment, the time used for multithreaded and non-multithreaded processing was recorded once for every 20 frames processed, and the average of 50 groups of experiments was taken as the time consumed for each frame count, increasing from 20 to 200 frames. The relationship between the number of processed frames and the time used is given in Table 1:
TABLE 1 multithreading versus single-threading processing time (resolution of image 1280X 960)
[Table 1 appears as an image in the original document.]
As can be seen from the flow chart of Fig. 5, the relationship between processing time and frame count is nearly linear in both the single-threaded and multithreaded environments. A least-squares linear fit is therefore applied to the data in Table 1.
For a given set of data {(x_i, y_i), i = 0, 1, ..., m}, least-squares fitting seeks a function f(x) that minimises the sum of squared errors

S = \sum_{i=0}^{m} [f(x_i) - y_i]^2    (1)

Assuming the required linear expression is:
y=kx+a (2)
the solution of the coefficients k and a is then the linear-fitting process. The solving equation for k is:

k = \frac{\sum_{i=0}^{m} x_i y_i - (m+1)\,\bar{x}\,\bar{y}}{\sum_{i=0}^{m} x_i^2 - (m+1)\,\bar{x}^2}    (3)

where \bar{x} = \frac{1}{m+1}\sum_{i=0}^{m} x_i is the average number of processed image frames and \bar{y} = \frac{1}{m+1}\sum_{i=0}^{m} y_i is the average time taken to process the images. The intercept a can then be solved from:

a = \bar{y} - k\,\bar{x}    (4)
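The slope k and intercept a defined above can be computed directly; the following is a minimal sketch of the closed-form least-squares fit (function and variable names are illustrative, not from the patent):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Least-squares linear fit y = k*x + a:
// k = (sum(x_i*y_i) - N*xbar*ybar) / (sum(x_i^2) - N*xbar^2), a = ybar - k*xbar,
// where N is the number of points and xbar, ybar are the sample means.
void linearFit(const std::vector<double>& x, const std::vector<double>& y,
               double& k, double& a) {
    const double N = static_cast<double>(x.size());
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i]; sxy += x[i] * y[i]; sxx += x[i] * x[i];
    }
    const double xbar = sx / N, ybar = sy / N;
    k = (sxy - N * xbar * ybar) / (sxx - N * xbar * xbar);
    a = ybar - k * xbar;
}
```

Applied to the (frame count, processing time) pairs of Table 1, this is the fit that yields straight lines of the form of equations (5) and (6).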
Least-squares fits were performed on the experimental results under the single-threaded and multithreaded conditions respectively; the fitted straight lines are shown in Fig. 8.
the linear expression for least squares fit estimation for multi-threaded processing is
Ta=0.0147n+0.0025 (5)
where n represents the total number of processed frames and T_a represents the time consumed to process n frames of images in a multithreaded environment.
Similarly, for non-multithreaded processing the fitting equation is:
Tb=0.0222n-0.0082 (6)
where T_b represents the time consumed to process n frames of images in a non-multithreaded environment.
Assuming n frames of images are processed, the time required in a multithreaded environment is T_a = 0.0147n + 0.0025 and in a non-multithreaded environment T_b = 0.0222n - 0.0082. The time-efficiency improvement of processing surgical instrument images in a multithreaded environment over a non-multithreaded environment can be expressed as:
\eta = \frac{T_b - T_a}{T_b} = \frac{0.0075n - 0.0107}{0.0222n - 0.0082}    (7)
Clearly the time-efficiency improvement is a function of the number of processed frames; its curve is shown in Fig. 9, where the improvement rises as the frame count increases. In this experiment, when the number of frames is sufficiently large, the time improvement stabilises at about 34%. That is, for large n, equation (7) is approximately equal to
\eta \approx \frac{0.0222 - 0.0147}{0.0222} \approx 33.8\%
That is, in this experiment multithreading improves efficiency by at most about 34%. Taking n = 200, Table 1 gives T_a = 2.949 s with multithreading and T_b = 4.451 s without, so the improvement is

\eta = \frac{4.451 - 2.949}{4.451} \approx 33.7\%
The experimental results agree well with the derived formulas.
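The limiting efficiency can also be checked numerically from the fitted models of equations (5) and (6); the function name below is illustrative, and the coefficients are taken from the text:

```cpp
#include <cassert>
#include <cmath>

// Time-saving ratio of multithreaded vs. non-multithreaded processing,
// using the fitted models Ta = 0.0147n + 0.0025 (multithreaded) and
// Tb = 0.0222n - 0.0082 (single-threaded) from equations (5) and (6).
double efficiency(double n) {
    const double Ta = 0.0147 * n + 0.0025;
    const double Tb = 0.0222 * n - 0.0082;
    return (Tb - Ta) / Tb;
}
```

For n = 200 this gives about 33.6%, and as n grows it approaches 1 - 0.0147/0.0222, roughly 33.8%.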
An obvious question arises: if the multithreading technique processes the left- and right-camera images of the surgical instrument simultaneously while the single-threaded version processes them frame by frame, the improvement should by this logic be about 50%. That is, if processing one frame takes time t, a single thread needs 2t for two frames while multithreaded processing needs only t (the two frames are processed at the same time), giving a 50% improvement. In fact, a 50% improvement presupposes unlimited CPU resources, whereas in practice the computer's resources are finite. To better reflect CPU occupancy, the average over 200 groups of experiments was taken as the measure of CPU resource occupancy in the multithreaded, non-multithreaded and idle states; Table 2 shows the CPU resource occupancy in the idle, single-threaded and multithreaded environments:
TABLE 2 CPU resource occupancy in different states of the computer (resolution of image 1280X 960)
[Table 2 appears as an image in the original document.]
In the computer environment of the invention, the CPU occupancy is only 1% when the computer performs no data processing and does only simple work such as operating-system maintenance and running native applications. When image processing runs in a non-multithreaded environment the CPU occupancy is 25%, while in a multithreaded environment it is 40%. In other words, CPU resources are limited: during computation the CPU cannot devote all of its resources to the task, since it must reserve some to handle other work, for example if the user suddenly starts other programs or tasks. This limitation of CPU resources is why the invention does not reach the ideal 50% efficiency improvement.
Next, the resolution of the images of the surgical instruments captured by the binocular camera was reduced from 1280 × 960 to 640 × 480, keeping the other experimental conditions unchanged. The obtained relationship between the image processing time and the number of image frames is shown in table 3:
TABLE 3 multithreading versus Single threading processing time comparison (resolution of image 640X 480)
[Table 3 appears as an image in the original document.]
Similarly, the data in table 3 is fitted by least squares linear fit, and the fitted result is shown in fig. 10; for multi-threaded processing, the mathematical expression for the least squares fit estimate is:
Tc=0.0041n+0.0017 (8)
for non-multithreaded processing, the mathematical expression for the least squares estimate is:
Td=0.0067n-0.0050 (9)
Then, at an image resolution of 640 × 480, considering the processing of n frames, multithreading needs T_c = 0.0041n + 0.0017 while a single thread needs T_d = 0.0067n - 0.0050. The percentage efficiency improvement is:
\eta = \frac{T_d - T_c}{T_d} = \frac{0.0026n - 0.0067}{0.0067n - 0.0050}    (10)
The time-efficiency improvement as a function of the number of processed frames is shown in Fig. 11; when the frame count is sufficiently large, equation (10) approaches

\eta \approx \frac{0.0067 - 0.0041}{0.0067} \approx 38.8\%
For example, when n = 200, Table 3 shows that multithreading requires 0.826 s whereas non-multithreading requires 1.338 s, so the improvement in efficiency is about 38.3%.
When the resolution of the processed image is 640 × 480, the resource occupancy of the CPU is as in table 4:
TABLE 4 CPU resource occupancy rates (image resolution 640X 480) for different states of the computer
[Table 4 appears as an image in the original document.]
As can be seen from Table 4, compared with the 1280 × 960 case the CPU resource occupancy remains essentially unchanged: changing the resolution of the captured surgical instrument images before processing does not alter the CPU occupancy much. However, because of the reduced image resolution, the times used by multithreaded and non-multithreaded processing both decrease while CPU occupancy stays roughly constant. In other words, the resolution of the video images affects the processing time and thus changes the time efficiency of the image processing.
On-line experiments and results analysis
In the online experiment, the surgical instrument images captured by the binocular camera were processed in real time. The frame rate was 24 Hz, the image resolution 640 × 480, and the hardware environment an i7-4790K CPU (4 cores, 8 threads) with 8 GB of memory. The software environment was VS2010 on Windows 7. In the experiment, the time taken by multithreaded and non-multithreaded processing was recorded for every 2000 frames processed, results were recorded up to 40000 frames, and the relationship between processing time and frame count (the first 1000 frames selected) is given in Table 5:
TABLE 5 multithreading versus Single-threading Online processing time comparison (resolution of image 640X 480)
[Table 5 appears as an image in the original document.]
Fitting by least squares gives the result shown in Fig. 12;
for multi-threaded processing, the mathematical expression for the least squares fit estimate is:
Te=0.0062n-0.2583 (11)
for non-multithreaded processing, the mathematical expression for the least squares estimate is:
Tf=0.0100n-0.8809 (12)
Then, at an image resolution of 640 × 480, considering the processing of n frames, multithreading needs T_e = 0.0062n - 0.2583 while a single thread needs T_f = 0.0100n - 0.8809. The percentage efficiency improvement is:
\eta = \frac{T_f - T_e}{T_f} = \frac{0.0038n - 0.6226}{0.0100n - 0.8809}    (13)
The time-efficiency improvement as a function of the number of processed frames is shown in Fig. 13; when the frame count is sufficiently large, equation (13) approaches

\eta \approx \frac{0.0100 - 0.0062}{0.0100} = 38\%
For example, when n = 20000, the multithreaded processing actually took 125.847 s while the non-multithreaded processing took 199.894 s, an efficiency improvement of about 37.04%; the online experimental result substantially agrees with the least-squares fit estimate.
The foregoing is only a preferred embodiment of this invention. It should be noted that those skilled in the art can make modifications without departing from the principle of the invention, and such modifications should also be considered within the protection scope of the invention.

Claims (5)

1. A binocular navigator hyper-threading optimization method is characterized by comprising the following steps:
(1) inputting the images of the surgical instruments with the mark points shot by the two cameras of the binocular camera into a graphic workstation;
(2) acquiring the captured image matrix using OpenCV (the open-source computer vision library), and separating the colour channels according to the designed RGB values of the coloured ball marker points to obtain a separated image matrix;
(3) using the findContours and drawContours commands in OpenCV to extract the outline of each small ball marker point, processing the residual image noise, and applying dilation and other morphological operations to make the extracted marker outlines more accurate;
(4) after the processed image only with the mark points is obtained, calculating two-dimensional coordinates of each spherical center mark point in an image coordinate system;
(5) and (4) calculating a three-dimensional coordinate according to the two-dimensional coordinate obtained in the step (4).
2. The binocular navigator multithread optimization method according to claim 1, wherein the process of obtaining the two-dimensional coordinates of the center of sphere of the landmark point in the steps (1) to (4) is a relatively independent process, and no data access exists between two frames of images in the process of obtaining the two-dimensional coordinates of the landmark point from the two camera images.
3. The binocular navigator hyper-threading optimization method according to claim 1 or 2, wherein multithreading is created using a pthread library in the steps (2) to (3), images from two cameras are simultaneously processed respectively using different threads, and the computer releases each thread, and the total running time is the maximum value of the running times in all threads.
4. The binocular navigator hyper-threading optimization method according to claim 3, wherein a pthread library is used to create multiple threads in the steps (2) to (3), images from two cameras are processed simultaneously by different threads respectively, and the computer releases each thread, and the total running time is the maximum value of the running times in all the threads, and specifically comprises the following steps:
firstly, n frames of surgical instrument images to be processed are input, wherein i represents the index of the image frame to be processed (i ≤ n); as long as the images are not completely processed, i.e. i ≤ n, the program enters a multithreaded statement block; in the multithreaded statement block, T_1 first records the time, i.e. the system time at which the multithreaded processing starts, and then two thread IDs, p_1 and p_2, are applied for, representing the two applied threads respectively; then a thread is created using pthread_create, the applied thread ID is given to the created thread, FindCircle represents the processing algorithm corresponding to the processing procedures of steps (2) to (3), n_i represents the i-th frame image to be processed, and the pthread_create command simultaneously assigns the function to be called (FindCircle) to the corresponding created thread; after the threads are created they run, and pthread_join is used to wait for each thread to finish; at the end of the program, the system time at that moment minus the system time recorded when the program started is assigned to t, the time required for the multiple threads to process two frames of surgical instrument images simultaneously in one loop; when the n frames of images have been processed, t represents the total running time.
5. The binocular navigator hyper-threading optimization method according to claim 1, wherein in step (5) three-dimensional coordinates are reconstructed from the two-dimensional coordinates using a three-dimensional coordinate reconstruction method.
CN202011438392.1A 2020-12-07 2020-12-07 Binocular navigator hyper-threading optimization method Pending CN112587235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011438392.1A CN112587235A (en) 2020-12-07 2020-12-07 Binocular navigator hyper-threading optimization method


Publications (1)

Publication Number Publication Date
CN112587235A true CN112587235A (en) 2021-04-02

Family

ID=75192303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011438392.1A Pending CN112587235A (en) 2020-12-07 2020-12-07 Binocular navigator hyper-threading optimization method

Country Status (1)

Country Link
CN (1) CN112587235A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102319116A (en) * 2011-05-26 2012-01-18 上海交通大学 Method for increasing three-dimensional positioning accuracy of surgical instrument by using mechanical structure
CN103631568A (en) * 2013-12-20 2014-03-12 厦门大学 Medical-image-oriented multi-thread parallel computing method
US20150051617A1 (en) * 2012-03-29 2015-02-19 Panasonic Healthcare Co., Ltd. Surgery assistance device and surgery assistance program
CN108274476A (en) * 2018-03-01 2018-07-13 华侨大学 Method for a humanoid robot to grasp a sphere
CN109830031A (en) * 2019-01-21 2019-05-31 陕西科技大学 Coin discrimination method based on OpenCV
CN110176041A (en) * 2019-05-29 2019-08-27 西南交通大学 Train auxiliary assembly method based on a binocular vision algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210402