CN111833308B - Respiratory motion monitoring method and monitoring system based on Kinect - Google Patents


Publication number
CN111833308B
CN111833308B (application number CN202010551881.1A)
Authority
CN
China
Prior art keywords
image
depth
camera
target
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010551881.1A
Other languages
Chinese (zh)
Other versions
CN111833308A (en)
Inventor
周正东
刘传乐
魏士松
贾峻山
章栩苓
毛玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202010551881.1A priority Critical patent/CN111833308B/en
Publication of CN111833308A publication Critical patent/CN111833308A/en
Application granted granted Critical
Publication of CN111833308B publication Critical patent/CN111833308B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a respiratory motion monitoring method and a respiratory motion monitoring system based on Kinect. The image acquisition module acquires a color image and a depth image using the Kinect SDK; the image registration module generates a registration image using the coordinate transformation matrix between the Kinect color camera and the Kinect depth camera; the target recognition module reduces noise in the registration image with a median filtering algorithm, detects candidate target regions with a Hough circle detection algorithm, and finally determines the target position by checking the RGB values within each region; the target tracking module applies the Camshift target tracking algorithm to the position fed back by the target detection algorithm and feeds back the depth data of the target area in real time; and the depth data processing module performs noise reduction on the depth data fed back by the target tracking algorithm and outputs a stable respiratory motion signal.

Description

Respiratory motion monitoring method and monitoring system based on Kinect
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a respiratory motion monitoring method and a respiratory motion monitoring system based on Kinect.
Background Art
Currently, respiratory motion monitoring is widely used in clinical diagnosis and treatment, for example in tumor radiotherapy, lung puncture, and computed tomography. Because the incidence of lung cancer and breast cancer keeps rising, the need for respiratory monitoring systems in clinical applications is increasing. In clinical practice, either conventional contact-type respiratory-belt monitoring equipment or expensive NDI non-contact respiratory monitoring equipment is adopted in most cases. Because the respiratory belt must be fastened closely around the patient's chest, it causes discomfort during long-term respiratory monitoring; NDI equipment is expensive, and the medical cost it brings rises correspondingly. Addressing these problems, the literature "Tahavori, F., Adams, E., et al. Marker-less Patient Setup and Respiratory Motion Monitoring Using Low Cost 3D Camera Technology. SPIE Medical Imaging, International Society for Optics and Photonics, 2015, 9415, 94152I" proposes a respiratory motion monitoring method based on the Kinect depth camera, but it only uses the depth image to obtain average depth information within a user-defined ROI; this single monitoring mode is inconvenient in actual operation. The respiratory motion monitoring method proposed by the literature "Silverstein, E., Snyder, M., et al. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor. Journal of Applied Clinical Medical Physics, 2018, 19, 193-204" combines Kinect color images and depth images for user-defined tracking of respiratory motion, but a slight offset movement of the patient makes the monitoring unrecoverable, which is unfavorable for clinical use.
Disclosure of Invention
Purpose of the invention: aiming at the high price of current clinical respiratory monitoring equipment and the shortcomings of existing respiratory monitoring methods, the invention provides a Kinect-based respiratory motion monitoring method with high cost-effectiveness and measurement accuracy.
Corresponding to the monitoring method, the invention also provides a respiratory motion monitoring system based on Kinect.
The technical scheme is as follows: to achieve the above purpose, the Kinect-based respiratory motion monitoring method provided by the invention comprises the following steps:
step 1: connecting equipment by using a Kinect sensor and an application programming interface KinectSDK, and acquiring a color image and a depth image of an interested region; the Kinect sensor comprises a color camera and a depth camera;
step 2: image registration is carried out on the acquired color image and the depth image, a rotation and translation transformation matrix from the color camera to the depth camera is acquired, and a registration image is generated based on the matrix transformation; the registration image is denoted I_c;
step 3: the registration image I_c in step 2 is preprocessed, a circular target in the registration image is automatically identified by a target recognition algorithm, and the target position R_i is fed back;
step 4: the circular target region R_i in step 3 is automatically tracked by the Camshift target tracking algorithm, and the depth information in the circular target area is fed back in real time;
step 5: noise reduction is performed on the depth information obtained in step 4 to obtain stable depth data, realizing real-time monitoring of respiratory motion.
The invention provides a respiratory motion monitoring system based on Kinect, which adopts the following technical scheme:
the system comprises an image acquisition module, an image registration module, a target identification and tracking module and a depth data processing module;
the image acquisition module is provided with a Kinect sensor, an application programming interface KinectSDK is used for connecting equipment, and a color image and a depth image of an interested region are obtained;
the image registration module is used for carrying out image registration on the acquired color image and depth image, acquiring a rotation and translation transformation matrix from the color camera to the depth camera, and generating a registration image based on matrix transformation;
the target recognition and tracking module is used for preprocessing the registration image, automatically recognizing a circular target in the registration image by using a target recognition algorithm and feeding back the target position;
the depth data processing module is used for automatically tracking the round target area by using a Camshift target tracking algorithm and feeding back the depth information in the round target area in real time; and carrying out noise reduction treatment on the acquired depth information to acquire stable depth data, so as to realize real-time monitoring of respiratory motion.
The beneficial effects are that: in the Kinect-based respiratory motion monitoring method and system, the low-cost Kinect sensor can locate and track the positions of a plurality of marker points and feed back the respiratory motion signal of the human body in real time, giving high cost-effectiveness and measurement accuracy; meanwhile, because the invention combines the color camera image, it offers better operability.
Drawings
Fig. 1 is a schematic diagram of a respiratory motion monitoring system based on Kinect.
Fig. 2 is an image registration flow chart.
Fig. 3 is a flow chart of a target detection algorithm.
Fig. 4 is a target tracking flow chart.
Fig. 5 is a flow chart of respiratory motion data processing.
Detailed Description
The invention is further elucidated below in conjunction with the accompanying drawings. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
Referring to fig. 1, a respiratory motion monitoring system based on Kinect is provided in an embodiment of the present invention, which includes an image acquisition module 11, an image registration module 12, a target recognition and tracking module 13, and a depth data processing module 14.
The image acquisition module is provided with a Kinect v2 sensor, and an application programming interface KinectSDK2.0 is used for connecting equipment and acquiring a color image and a depth image of an interested region;
the image registration module is used for carrying out image registration on the acquired color image and depth image, acquiring a rotation and translation transformation matrix from the color camera to the depth camera, and generating a registration image based on matrix transformation;
the target recognition and tracking module is used for preprocessing the registration image, automatically recognizing a circular target in the registration image by using a target recognition algorithm and feeding back the target position;
the depth data processing module is used for automatically tracking the round target area by using a Camshift target tracking algorithm and feeding back the depth information in the round target area in real time; and carrying out noise reduction treatment on the acquired depth information to acquire stable depth data, so as to realize real-time monitoring of respiratory motion.
Meanwhile, the functions of the above modules are realized in sequence by the steps of the Kinect-based respiratory motion monitoring method. With reference to fig. 1 and the system described above, the respiratory motion monitoring method based on Kinect provided by the invention comprises the following steps:
step 1: connecting equipment by using a Kinect v2 sensor and an application programming interface Kinect SDK2.0, and acquiring color information and depth information of an interested region;
step 2: as shown in fig. 2, an initial registration image is generated using the spatial coordinate transformation matrix from the acquired color image to the depth image; the registration image space is traversed, the registered color data are checked for abnormal values, the color data meeting the requirements are retained, and the registration image, denoted I_c, is generated. The color-camera-to-depth-camera registration image is obtained from
Z_rgb p_rgb = K_rgb P_rgb, Z_d p_d = K_d P_d, P_d = R P_rgb + T
wherein p_rgb and p_d are points in the pixel coordinate systems of the color camera and the depth camera, P_rgb and P_d are the corresponding points in the camera coordinate systems of the color camera and the depth camera, Z_rgb and Z_d are the depths of the point in the two camera coordinate systems, K_rgb and K_d are the intrinsic parameter matrices of the color camera and the depth camera, and R and T are the rotation and translation matrices from the color camera to the depth camera. K_rgb, K_d, R and T are obtained as follows: images of a calibration plate are acquired at different angles and positions, and calibration is performed in the MATLAB toolbox.
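The pixel and camera coordinate relations used by the registration can be sketched numerically as follows. This is a minimal illustration, not the patent's implementation: the intrinsic matrix and translation in the usage note are made-up example values, and R, T are taken in the color-to-depth direction stated in the text; real values come from the calibration-plate procedure.

```python
import numpy as np

def project_pair(P_rgb, K_rgb, K_d, R, T):
    """Project a 3-D point known in the color-camera frame into both
    pixel grids: Z * p = K * P in each camera, with P_d = R * P_rgb + T
    (R, T taken in the color-to-depth direction, as in the text)."""
    h_rgb = K_rgb @ P_rgb          # homogeneous pixel, scaled by Z_rgb
    p_rgb = h_rgb[:2] / h_rgb[2]
    P_d = R @ P_rgb + T            # same point in the depth-camera frame
    h_d = K_d @ P_d
    p_d = h_d[:2] / h_d[2]
    return p_rgb, p_d
```

For example, with identical example intrinsics in both cameras and a 5 cm baseline along x, a point 2 m in front of the optical center lands at different horizontal pixels in the two images, which is exactly the disparity the registration step compensates.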
Step 3: registering the image I in the step 2 c Preprocessing, and then adopting a target recognition algorithm, as shown in fig. 3, to automatically select a target in a color image and feed back the position of a target area, wherein the method specifically comprises the following steps:
step 3.1: preprocessing the registration image by adopting median filtering, filtering noise in the registration image, converting the filtered image into a gray image and then using the gray image in a Hough circle detection algorithm.
Step 3.2: the method comprises the following specific steps of:
step 3.2.1: acquiring the positions of the circle center and the radius; registration image I by adopting Canny edge detection algorithm c Edge detection is carried out to obtain an edge binary image I e At I e The (x, y) coordinate data are transformed into a three-dimensional space expressed by polar coordinate parameters (a, b, R) through Hough transformation, a threshold value is set in a voting mode, and positions of a circle center and a radius are obtained and recorded as { R } 1 ,R 2 ,…,R n (x, y) is the coordinate of the circle in the Cartesian coordinate system, (a, b) is the position of the circle center, and r is the radius of the circle;
step 3.2.2: using the obtained set { R } of the center and radius of the target position 1 ,R 2 ,…,R n Extracting RGB value of the target area, judging whether the RGB value meets the preset requirement, storing the circle center and radius of the target area meeting the preset requirement, and marking as R i
step 4: as shown in fig. 4, the Camshift target tracking algorithm is applied to the target region R_i from step 3 in the registration image I_c, so as to automatically track the color information in the target area and feed back the depth information of the target area in real time, specifically as follows:
step 4.1: the registration image I_c is converted from the RGB color space to the HSV color space;
step 4.2: the histogram of the H channel of the target region R_i in the HSV color space is computed, denoted {H_1, H_2, …, H_s}, and back projection is performed to obtain a probability density distribution map I_g;
step 4.3: the Camshift algorithm is applied to the probability density distribution map I_g to acquire the real-time tracking state of the area where the target is located; whether the tracked position matches the target position is judged, and the depth information of the position where the target is located is fed back as {d_1, d_2, …, d_j} and rendered for display in real time;
step 5: the depth information { d ] obtained in the step 4 is processed 1 ,d 2 ,…,d j Noise reduction processing is carried out, stable depth information is obtained, and the real-time monitoring of respiratory motion is realized, as shown in fig. 5, and the method specifically comprises the following steps:
step 5.1: obtaining depth data { d ] in a circular target area of a current frame 1 ,d 2 ,…,d j Acquiring an inscribed rectangle of the region where the target is located by using x, y, width and height of the position where the tracking frame is located, and storing depth data { D } of the inscribed rectangle 1 ,D 2 ,…,D k };
Step 5.2: depth data { D of inscribed rectangle in region where current target is located 1 ,D 2 ,…,D k Traversing the depth data { D } 1 ,D 2 ,…,D k Ranking according to the occurrence frequency, weighted averaging according to the following formula, obtaining weighted average depth data D in the target area w Setting a threshold zone according to the weighted average result, judging whether the depth data is in the threshold range, reserving the average value of the data meeting the requirements, and marking as D m Invalid data is deleted.
Wherein a is 1 ,a 2 ,…,a k Representing the weights. The weight is determined in the following manner: by counting target area depth data occurrencesThe weight is defined according to the occurrence frequency of the depth data, and the corresponding large weight value with high frequency is obtained.
Step 5.3: for the processed average depth data D m And outputting, and acquiring stable respiratory motion data.
The above examples merely represent individual embodiments of the invention, the description of which is more specific and which should not therefore be construed as limiting the scope of the invention. At the same time, it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the scope of the invention. Accordingly, the description is not to be taken as limiting the invention.

Claims (5)

1. A respiratory motion monitoring method based on Kinect, characterized by comprising the following steps:
step 1: connecting equipment by using a Kinect sensor and an application programming interface KinectSDK, and acquiring a color image and a depth image of an interested region; the Kinect sensor comprises a color camera and a depth camera;
step 2: image registration is carried out on the acquired color image and the depth image, a rotation and translation transformation matrix from the color camera to the depth camera is acquired, and a registration image is generated based on the matrix transformation; the registration image is denoted I_c;
step 3: the registration image I_c in step 2 is preprocessed, a circular target in the registration image is automatically identified by a target recognition algorithm, and the target position R_i is fed back;
step 4: the circular target region R_i in step 3 is automatically tracked by the Camshift target tracking algorithm, and the depth information in the circular target area is fed back in real time;
step 5: noise reduction is performed on the depth information obtained in step 4 to obtain stable depth data, realizing real-time monitoring of respiratory motion;
in the step 2, the image registration is completed based on the spatial coordinate transformation matrix of the color camera and the depth camera; the color-camera-to-depth-camera registration image is obtained from
Z_rgb p_rgb = K_rgb P_rgb, Z_d p_d = K_d P_d, P_d = R P_rgb + T
wherein p_rgb and p_d are points in the pixel coordinate systems of the color camera and the depth camera, P_rgb and P_d are the corresponding points in the camera coordinate systems of the color camera and the depth camera, Z_rgb and Z_d are the depths of the point in the two camera coordinate systems, K_rgb and K_d are the intrinsic parameter matrices of the color camera and the depth camera, and R and T are the rotation and translation matrices from the color camera to the depth camera;
the spatial coordinate transformation matrix is obtained by calculating the transformation relations among the world coordinate system, the camera coordinate systems and the pixel coordinate systems, yielding the rotation and translation transformation matrices from the color camera to the depth camera.
2. The method according to claim 1, wherein in said step 3, median filtering is adopted as the filtering algorithm, reducing image noise and improving recognition performance; the target recognition algorithm adopts the Hough circle detection algorithm to detect the target area and stores the circle center and radius of the target area; by color filtering, the circle center and radius position of the target area whose RGB values fall within the preset range are stored and denoted R_i.
3. The method according to claim 1, wherein in said step 4, the H channel is separated from the color image of the target region R_i, histogram back projection is performed to generate a probability density distribution map, the Camshift algorithm is used to automatically track the target, whether the tracked position matches the target position is judged, and the depth information of the position of the target is fed back.
4. The method according to claim 1, wherein in said step 5, the depth information of the circular target area is acquired and only the depth data {D_1, D_2, …, D_k} within the inscribed rectangle are retained, reducing the influence of edge noise; the weighted average depth data D_w is obtained by weighted averaging, i.e.
D_w = a_1 D_1 + a_2 D_2 + … + a_k D_k
wherein a_1, a_2, …, a_k represent the weights; a threshold band is set according to the weighted average result, each depth datum is checked against the threshold range, and the average value of the data meeting the requirement is retained and denoted D_m, from which stable respiratory motion data are acquired.
5. The respiratory motion monitoring system based on Kinect is characterized by comprising an image acquisition module, an image registration module, a target identification and tracking module and a depth data processing module;
the image acquisition module is provided with a Kinect sensor, an application programming interface KinectSDK is used for connecting equipment, and a color image and a depth image of an interested region are obtained;
the image registration module is used for carrying out image registration on the acquired color image and depth image, acquiring a rotation and translation transformation matrix from the color camera to the depth camera, and generating a registration image based on matrix transformation;
the target recognition and tracking module is used for preprocessing the registration image, automatically recognizing a circular target in the registration image by using a target recognition algorithm and feeding back the target position;
the depth data processing module is used for automatically tracking the round target area by using a Camshift target tracking algorithm and feeding back the depth information in the round target area in real time; noise reduction processing is carried out on the acquired depth information, stable depth data are acquired, and real-time monitoring of respiratory motion is achieved;
in the image registration module, the image registration is completed based on the spatial coordinate transformation matrix of the color camera and the depth camera; the color-camera-to-depth-camera registration image is obtained from
Z_rgb p_rgb = K_rgb P_rgb, Z_d p_d = K_d P_d, P_d = R P_rgb + T
wherein p_rgb and p_d are points in the pixel coordinate systems of the color camera and the depth camera, P_rgb and P_d are the corresponding points in the camera coordinate systems of the color camera and the depth camera, Z_rgb and Z_d are the depths of the point in the two camera coordinate systems, K_rgb and K_d are the intrinsic parameter matrices of the color camera and the depth camera, and R and T are the rotation and translation matrices from the color camera to the depth camera;
the spatial coordinate transformation matrix is obtained by calculating the transformation relations among the world coordinate system, the camera coordinate systems and the pixel coordinate systems, yielding the rotation and translation transformation matrices from the color camera to the depth camera.
CN202010551881.1A 2020-06-17 2020-06-17 Respiratory motion monitoring method and monitoring system based on Kinect Active CN111833308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010551881.1A CN111833308B (en) 2020-06-17 2020-06-17 Respiratory motion monitoring method and monitoring system based on Kinect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010551881.1A CN111833308B (en) 2020-06-17 2020-06-17 Respiratory motion monitoring method and monitoring system based on Kinect

Publications (2)

Publication Number Publication Date
CN111833308A CN111833308A (en) 2020-10-27
CN111833308B true CN111833308B (en) 2024-03-15

Family

ID=72899175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010551881.1A Active CN111833308B (en) 2020-06-17 2020-06-17 Respiratory motion monitoring method and monitoring system based on Kinect

Country Status (1)

Country Link
CN (1) CN111833308B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794737A (en) * 2015-04-10 2015-07-22 电子科技大学 Depth-information-aided particle filter tracking method
CN106826815A (en) * 2016-12-21 2017-06-13 江苏物联网研究发展中心 Target object method of the identification with positioning based on coloured image and depth image
CN107705322A (en) * 2017-09-27 2018-02-16 中北大学 Motion estimate tracking and system
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information


Also Published As

Publication number Publication date
CN111833308A (en) 2020-10-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant