JPWO2019206819A5 - Google Patents
- Publication number
- JPWO2019206819A5 (application JP2020558478A)
- Authority
- JP
- Japan
- Prior art keywords
- subject
- series
- support
- repeated images
- medical device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Claims (15)
A medical device comprising:
a medical imaging system for acquiring medical imaging data from a subject within an imaging zone;
a subject support having a support surface, wherein the support surface supports the subject, the subject support supports the subject in an initial position, and in the initial position the subject is outside the imaging zone;
a camera system that images the support surface when the subject support is in the initial position;
a memory containing machine-executable instructions; and
a processor for controlling the medical device;
wherein execution of the machine-executable instructions causes the processor to:
place the subject support in the initial position;
control the camera system to repeatedly acquire a series of repeated images;
detect the placement of one or more background objects that at least partially obscure the support surface in the series of repeated images;
detect one or more foreground objects that obscure at least a portion of the one or more background objects in the series of repeated images;
at least partially construct a background object surface image by stitching together the series of repeated images so as to replace image regions containing background objects obscured by the one or more foreground objects;
determine a three-dimensional object surface using the background object surface image;
detect the subject in one of the series of repeated images;
calculate a subject segmentation of the subject in the one of the series of repeated images;
determine a visible subject surface using the subject segmentation and the one of the series of repeated images; and
calculate a three-dimensional subject model by estimating a volume bounded by the three-dimensional object surface and the visible subject surface.
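The occlusion-filling step of the claim above can be illustrated with a minimal sketch (an illustration of the general stitching idea, not the patented implementation): given a series of repeated camera frames and a per-frame foreground mask, each background pixel is taken from any frame in which it is not occluded by a foreground object.

```python
def composite_background(frames, fg_masks):
    """Fill occluded background regions by stitching a series of repeated
    images: for each pixel, take its value from the most recent frame in
    which that pixel is not covered by a foreground object.

    frames:   list of H x W grids (lists of lists) of pixel values
    fg_masks: list of H x W grids of booleans, True where a foreground
              object occludes the background in that frame
    Returns (background, known): known[y][x] is True if the pixel was
    visible in at least one frame.
    """
    h, w = len(frames[0]), len(frames[0][0])
    background = [[None] * w for _ in range(h)]
    known = [[False] * w for _ in range(h)]
    for frame, mask in zip(frames, fg_masks):
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:                  # background visible here
                    background[y][x] = frame[y][x]  # keep the newest view
                    known[y][x] = True
    return background, known
```

With two frames whose foreground masks are complementary, the full support surface is recovered even though neither frame shows it completely.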
The medical device of claim 3, wherein execution of the machine-executable instructions further causes the processor to:
fit the region of interest to the three-dimensional subject model; and
modify the pulse sequence commands to image the fitted region of interest.
The medical device of claim 3 or 4, wherein execution of the machine-executable instructions further causes the processor to:
identify at least a portion of the one or more foreground objects that are stationary in at least a predetermined number of sequential images within the series of repeated images; and
determine, using predetermined criteria, whether the at least a portion of the one or more foreground objects is correctly positioned relative to the three-dimensional subject model.
The medical device of any one of claims 3 to 5, wherein execution of the machine-executable instructions further causes the processor to:
select a specific absorption rate model, a peripheral nerve stimulation model, a sound pressure model, a subject height, and/or a subject weight using the three-dimensional subject model; and
modify the pulse sequence commands at least partially using the specific absorption rate model, the peripheral nerve stimulation model, the sound pressure model, the subject height, and/or the subject weight.
The medical device of any one of claims 1 to 6, wherein the medical imaging system is a CT system, and wherein execution of the machine-executable instructions further causes the processor to perform any one of:
automating the definition of a localizer start;
automating the definition of the localizer end or length;
determining a horizontal centering of the subject;
determining a vertical centering of the subject;
selecting an X-ray absorption model using the three-dimensional subject model;
selecting a subject support height; and
combinations thereof.
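The centering steps in the CT claim above can be sketched with a small illustrative helper (hypothetical, not taken from the patent text): the horizontal and vertical centering of the subject can be derived from the centroid of the subject segmentation computed in the camera images.

```python
def segmentation_centering(mask):
    """Compute the horizontal and vertical centre of a subject
    segmentation, as could be used when centring a subject for a CT
    localizer.

    mask: H x W grid (list of lists) of booleans, True where the pixel
    belongs to the subject.  Returns (cx, cy), the centroid of the True
    pixels in pixel coordinates, or None if the mask is empty.
    """
    xs = ys = 0.0
    n = 0
    for y, row in enumerate(mask):
        for x, inside in enumerate(row):
            if inside:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # no subject detected in this frame
    return xs / n, ys / n
```

The offset between this centroid and the image centre gives the horizontal correction; the vertical correction would additionally use the estimated subject thickness from the three-dimensional subject model.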
The medical device of any one of claims 1 to 7, wherein the camera system is a two-dimensional camera system, and wherein execution of the machine-executable instructions further causes the processor to:
assign a three-dimensional object model to the one or more background objects in the background object surface image; and
construct the three-dimensional object surface using the assigned three-dimensional object model.
The medical device of any one of claims 1 to 10, wherein, while the subject support is in the initial position, any one of the following is continually updated:
detecting the placement of one or more background objects that at least partially obscure the support surface in the series of repeated images;
detecting the one or more foreground objects that obscure at least a portion of the one or more background objects in the series of repeated images;
at least partially constructing an object surface image by stitching together the series of repeated images so as to replace image regions containing background objects obscured by the one or more foreground objects;
determining the three-dimensional object surface using the object surface image;
detecting the subject in one of the series of repeated images;
calculating the subject segmentation of the subject in the one of the series of repeated images;
determining the visible subject surface using the subject segmentation and the one of the series of repeated images;
calculating the three-dimensional subject model by estimating the volume bounded by the three-dimensional object surface and the visible subject surface; and
combinations thereof.
The medical device of any one of claims 1 to 12, wherein any one of the following applies:
the one or more background objects are selected from a predetermined list of background objects;
the one or more foreground objects are selected from a predetermined list of foreground objects; and
combinations thereof.
A computer program comprising machine-executable instructions for execution by a processor controlling a medical device, wherein the medical device comprises a medical imaging system for acquiring medical imaging data from a subject within an imaging zone; the medical imaging system further comprises a subject support having a support surface, wherein the support surface supports the subject, the subject support supports the subject in an initial position, and in the initial position the subject is outside the imaging zone; and the medical imaging system further comprises a camera system that images the support surface when the subject support is in the initial position;
wherein execution of the machine-executable instructions causes the processor to:
place the subject support in the initial position;
control the camera system to repeatedly acquire a series of repeated images;
detect the placement of one or more background objects that at least partially obscure the support surface using the series of repeated images;
detect one or more foreground objects that obscure at least a portion of the one or more background objects in the series of repeated images;
at least partially construct a background object surface image by stitching together the series of repeated images so as to replace image regions containing background objects obscured by the one or more foreground objects;
determine a three-dimensional object surface using the background object surface image;
detect the subject in one of the series of repeated images;
calculate a subject segmentation of the subject in the one of the series of repeated images;
determine a visible subject surface using the subject segmentation and the one of the series of repeated images; and
calculate a three-dimensional subject model by estimating a volume bounded by the three-dimensional object surface and the visible subject surface.
A method of operating a medical device, wherein the medical device comprises a medical imaging system for acquiring medical imaging data from a subject within an imaging zone; the medical device comprises a subject support having a support surface, wherein the support surface supports the subject, the subject support supports the subject in an initial position, and in the initial position the subject is outside the imaging zone; and the medical device further comprises a camera system that images the support surface when the subject support is in the initial position. The method comprises the steps of:
placing the subject support in the initial position;
controlling the camera system to repeatedly acquire a series of repeated images;
detecting the placement of one or more background objects that at least partially obscure the support surface using the series of repeated images;
detecting one or more foreground objects that obscure at least a portion of the one or more background objects in the series of repeated images;
at least partially constructing a background object surface image by stitching together the series of repeated images so as to replace image regions containing background objects obscured by the one or more foreground objects;
determining a three-dimensional object surface using the background object surface image;
detecting the subject in one of the series of repeated images;
calculating a subject segmentation of the subject in the one of the series of repeated images;
determining a visible subject surface using the subject segmentation and the one of the series of repeated images; and
calculating a three-dimensional subject model by estimating a volume bounded by the three-dimensional object surface and the visible subject surface.
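The final step of the method, estimating the volume bounded by the three-dimensional object surface and the visible subject surface, can be sketched as follows (an illustrative numerical approach under the assumption that both surfaces are available as height maps on a common grid; the patent does not prescribe this representation):

```python
def estimate_subject_volume(object_surface, subject_surface, cell_area):
    """Estimate the volume bounded below by a 3D object surface (e.g. the
    support surface with coils and cushions) and above by the visible
    subject surface.

    Both surfaces are height maps sampled on the same grid:
    object_surface[y][x] and subject_surface[y][x] are heights in metres,
    and cell_area is the area of one grid cell in square metres.  Cells
    where the subject surface is absent (None) or not above the object
    surface contribute nothing.
    """
    volume = 0.0
    for obj_row, subj_row in zip(object_surface, subject_surface):
        for obj_h, subj_h in zip(obj_row, subj_row):
            if subj_h is not None and subj_h > obj_h:
                volume += (subj_h - obj_h) * cell_area
    return volume
```

For example, a subject surface a uniform 0.2 m above the object surface on a 2 x 2 grid with 0.01 m² cells yields a volume of 4 × 0.2 × 0.01 = 0.008 m³.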
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18168714.6A EP3561771A1 (en) | 2018-04-23 | 2018-04-23 | Automated subject monitoring for medical imaging |
EP18168714.6 | 2018-04-23 | ||
PCT/EP2019/060157 WO2019206819A1 (en) | 2018-04-23 | 2019-04-18 | Automated subject monitoring for medical imaging |
Publications (3)
Publication Number | Publication Date |
---|---|
JP2021521942A JP2021521942A (en) | 2021-08-30 |
JPWO2019206819A5 true JPWO2019206819A5 (en) | 2022-04-22 |
JP7252256B2 JP7252256B2 (en) | 2023-04-04 |
Family
ID=62046722
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2020558478A Active JP7252256B2 (en) | 2018-04-23 | 2019-04-18 | Automated subject monitoring for medical imaging |
Country Status (5)
Country | Link |
---|---|
US (1) | US11972857B2 (en) |
EP (2) | EP3561771A1 (en) |
JP (1) | JP7252256B2 (en) |
CN (1) | CN112204616A (en) |
WO (1) | WO2019206819A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112891098B (en) * | 2021-01-19 | 2022-01-25 | 重庆火后草科技有限公司 | Body weight measuring method for health monitor |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008004222A2 (en) | 2006-07-03 | 2008-01-10 | Yissum Research Development Company Of The Hebrew University Of Jerusalem | Computer image-aided method and system for guiding instruments through hollow cavities |
US7813528B2 (en) | 2007-04-05 | 2010-10-12 | Mitsubishi Electric Research Laboratories, Inc. | Method for detecting objects left-behind in a scene |
US8235530B2 (en) * | 2009-12-07 | 2012-08-07 | C-Rad Positioning Ab | Object positioning with visual feedback |
JP5274526B2 (en) | 2010-09-09 | 2013-08-28 | 三菱電機株式会社 | Skin dose display device and skin dose display method |
DE102012209190A1 (en) | 2012-05-31 | 2013-12-05 | Siemens Aktiengesellschaft | Method for detecting information of at least one object arranged on a patient support device in a medical imaging device and a medical imaging device for carrying out the method |
US10493298B2 (en) | 2013-08-02 | 2019-12-03 | Varian Medical Systems, Inc. | Camera systems and methods for use in one or more areas in a medical facility |
JP6345468B2 (en) | 2014-04-09 | 2018-06-20 | キヤノンメディカルシステムズ株式会社 | Medical diagnostic imaging equipment |
DE102014210051A1 (en) * | 2014-05-27 | 2015-12-03 | Carl Zeiss Meditec Ag | Method and device for determining a surface topography of a body |
KR101946019B1 (en) | 2014-08-18 | 2019-04-22 | 삼성전자주식회사 | Video processing apparatus for generating paranomic video and method thereof |
US10092191B2 (en) | 2015-01-16 | 2018-10-09 | Siemens Healthcare Gmbh | Joint visualization of 3D reconstructed photograph and internal medical scan |
CN105827946B (en) | 2015-11-26 | 2019-02-22 | 东莞市步步高通信软件有限公司 | A kind of generation of panoramic picture and playback method and mobile terminal |
- 2018-04-23 EP EP18168714.6A patent/EP3561771A1/en not_active Withdrawn
- 2019-04-18 WO PCT/EP2019/060157 patent/WO2019206819A1/en unknown
- 2019-04-18 JP JP2020558478A patent/JP7252256B2/en active Active
- 2019-04-18 US US17/049,605 patent/US11972857B2/en active Active
- 2019-04-18 EP EP19717945.0A patent/EP3785227B1/en active Active
- 2019-04-18 CN CN201980034488.0A patent/CN112204616A/en active Pending