WO2021103316A1 - Method, device and system for determining a target region of an image - Google Patents

Method, device and system for determining a target region of an image

Info

Publication number
WO2021103316A1
Authority
WO
WIPO (PCT)
Prior art keywords
attention
area
determining
picture
target
Prior art date
Application number
PCT/CN2020/075056
Other languages
English (en)
Chinese (zh)
Inventor
王纯亮
Original Assignee
天津拓影科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 天津拓影科技有限公司
Publication of WO2021103316A1

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to a method, an apparatus, and a system for determining an image target area, and to a computer-readable storage medium.
  • computer technology can be used as an important auxiliary processing method in various technical fields.
  • computer-aided diagnosis technology and computer-aided detection technology can use imaging, medical image processing technology, and other possible physiological and biochemical means, combined with computer analysis and calculation, to assist in the discovery of lesions, thereby improving the accuracy of diagnosis. Extracting the various kinds of information in an image, as the object of such analysis and calculation, is therefore particularly important.
  • relevant methods of artificial intelligence can be used to extract important information in images.
  • Chinese patents CN108090903A and CN110276741A extract important information from medical images through CNNs (Convolutional Neural Networks), or determine the important information in an image by training a CNN model to track the user's eyeballs.
  • a method for determining an image target area includes: determining the user's attention positions on a target picture according to the user's eye movement information in the process of observing the target picture; extracting each area of interest on the target picture by using a machine learning model; and determining the target area on the target picture according to each attention position and each area of interest.
  • determining the target area on the target picture according to each attention position and each attention area includes: determining the user's position attention degree for each attention position according to the motion information; and determining the target area among the attention areas according to the attention degree of each position.
  • determining the target area among the attention areas according to the attention degree of each position includes: determining the area attention degree of an attention area according to the position attention degree of each attention position contained in the attention area; and, in a case where the area attention degree is less than a threshold, determining the corresponding attention area as the target area.
  • determining the position attention degree of the user for each attention position according to the motion information includes: determining the gaze time of the user for each attention position according to the motion information to determine the position attention degree.
  • determining each attention position of the user on the target picture according to the motion information includes: determining each gaze point of the user on the target picture according to the motion information; and determining each attention position according to the trajectory formed by each gaze point.
  • the movement information of the eyeball includes at least one of the movement of the eyeball relative to the head or the position of the eyeball.
  • the target picture is a medical imaging picture
  • the location of interest is the location of the diagnoser's attention
  • the area of interest is a suspected lesion area.
  • the machine learning model is trained by the following steps: obtaining, as attention information, at least one of the user's attention positions and the corresponding position attention degrees on each training picture, the training pictures being of the same type as the target picture; and training the machine learning model by taking each training picture and its attention information as input and using each attention area of each training picture as the annotation result.
  • an apparatus for determining an image target area includes: a position determining unit for determining the user's attention positions on the target picture according to the user's eye movement information in the process of observing the target picture;
  • the extraction unit is used to extract each area of interest on the target picture by using a machine learning model; the area determination unit is used to determine the target area on the target picture according to each position of interest and each area of interest.
  • the area determining unit determines the user's degree of attention to the position of each attention position according to the motion information, and determines the target area in each attention area according to the degree of attention of each position.
  • the area determining unit determines the area attention degree of the attention area according to the position attention degree of each attention position contained in the attention area, and when the area attention degree is less than the threshold, the corresponding area of interest is determined as the target area.
  • the area determining unit determines the gaze time of the user for each attention location according to the motion information to determine the attention degree of the location.
  • the position determining unit determines each gaze point of the user on the target picture according to the motion information, and determines each attention position according to the trajectory formed by each gaze point.
  • the movement information of the eyeball includes at least one of the movement of the eyeball relative to the head or the position of the eyeball.
  • the target picture is a medical imaging picture
  • the location of interest is the location of the diagnoser's attention
  • the area of interest is a suspected lesion area.
  • the machine learning model is trained by the following steps: obtaining, as attention information, at least one of the user's attention positions and the corresponding position attention degrees on each training picture, the training pictures being of the same type as the target picture; and training the machine learning model by taking each training picture and its attention information as input and using each attention area of each training picture as the annotation result.
  • a device for determining an image target area includes: a memory; and a processor coupled to the memory, the processor being configured to execute, based on instructions stored in the memory, the method for determining an image target area in any one of the above embodiments.
  • a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the method for determining an image target area in any of the foregoing embodiments is implemented.
  • a system for determining an image target area includes: the device for determining an image target area in any of the above embodiments; and an eye tracker for obtaining the user's eye movement information in the process of observing the target picture.
  • FIG. 1 shows a flowchart of some embodiments of the method for determining an image target area of the present disclosure;
  • FIG. 2 shows a flowchart of some embodiments of step 130 in FIG. 1;
  • FIG. 3 shows a flowchart of some embodiments of step 1320 in FIG. 2;
  • FIG. 4 shows a flowchart of other embodiments of the method for determining an image target area of the present disclosure;
  • FIG. 5 shows a block diagram of some embodiments of an apparatus for determining an image target area of the present disclosure;
  • FIG. 6 shows a block diagram of other embodiments of the device for determining an image target area of the present disclosure;
  • FIG. 7 shows a block diagram of still other embodiments of the device for determining an image target area of the present disclosure;
  • FIG. 8 shows a block diagram of some embodiments of a system for determining an image target area of the present disclosure.
  • the inventors of the present disclosure have discovered that the above-mentioned related technologies have the following problems: the extracted important image information is often not what is needed, resulting in low accuracy and low efficiency of image processing.
  • the present disclosure proposes a technical solution for determining an image target area, which can improve the accuracy and efficiency of image processing.
  • FIG. 1 shows a flowchart of some embodiments of the method for determining an image target area of the present disclosure.
  • the method includes: step 110, determining each attention location by eye movements; step 120, determining each attention area by a machine learning model; and step 130, determining a target area.
  • in step 110, the user's attention positions on the target picture are determined according to the user's eye movement information in the process of observing the target picture.
  • the movement information of the eyeball includes at least one of the movement of the eyeball relative to the head or the position of the eyeball.
  • eye tracking may be performed on the user by measuring the position of the eye's gaze point or the movement of the eye relative to the head.
  • an eye tracker, such as a video capture device, may be used to track the eye movement.
  • for example, a screen-based eye tracker may be used to obtain the target picture and to track and measure the movement information of the eyeballs.
  • eye tracking glasses (Eye Tracking Glasses) or other eye trackers that can photograph the target from the observer's perspective may also be used to obtain the target pictures and to track and measure the movement information of the eyeballs.
  • the eye tracker can be, for example, a Tobii, iMotions, or Smart Eye device.
  • for example, a screen-based eye tracker can use the medical imaging picture displayed on the screen as the target picture and track the eye movement of the diagnostician, so as to perform computer-aided detection; eye tracking glasses can be used to obtain, in real time, target pictures of the vehicles or traffic signs that a driver sees and to track the driver's eye movements, so as to perform computer-assisted driving.
  • each gaze point of the user on the target picture is determined according to the motion information; each attention position is determined according to the trajectory formed by each gaze point.
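  • as an illustration, the grouping of raw gaze samples into attention positions could be sketched as follows. This is a minimal dispersion-threshold (I-DT style) fixation detector; the data layout, function names, and thresholds are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class GazePoint:
    x: float  # horizontal screen coordinate of the raw gaze sample
    y: float  # vertical screen coordinate
    t: float  # timestamp in seconds

def detect_attention_positions(samples, max_dispersion=30.0, min_duration=0.1):
    """Group consecutive gaze samples into fixations (attention positions):
    a window whose bounding box stays within max_dispersion pixels for at
    least min_duration seconds is collapsed into one (x, y, duration) tuple."""
    def close(window):
        duration = window[-1].t - window[0].t
        if duration >= min_duration:
            cx = sum(p.x for p in window) / len(window)  # centroid x
            cy = sum(p.y for p in window) / len(window)  # centroid y
            fixations.append((cx, cy, duration))

    fixations = []
    window = []
    for s in samples:
        window.append(s)
        xs = [p.x for p in window]
        ys = [p.y for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # The window drifted too far: close it without the new sample.
            if len(window) > 1:
                close(window[:-1])
            window = [s]
    if window:
        close(window)
    return fixations  # one (x, y, gaze duration) triple per attention position
```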
  • the machine learning model is used to extract the regions of interest on the target picture.
  • the machine learning model can be trained to extract the face area in the portrait picture, or extract the diseased area in the medical picture.
  • the machine learning model may be various neural network models capable of extracting image features.
  • a convolutional neural network model can be used to determine the regions of interest on the target picture.
  • the machine learning model is trained by the following steps: obtaining, as attention information, at least one of the user's attention positions and the corresponding position attention degrees on each training picture, the training pictures being of the same type as the target picture; and training the machine learning model by taking each training picture and its attention information as input and using each attention area of each training picture as the annotation result.
  • for example, in the application scenario of computer-aided detection, the heat maps visually tracked by multiple medical experts while observing multiple medical imaging pictures can be recorded; the obtained heat maps (which can, for example, be thresholded) are then used as the training output of the machine learning model.
  • the heat map includes each attention position of the medical expert on each training picture and the corresponding position attention degree.
  • after the training is completed, a set of medical imaging pictures can be input to the machine learning model to infer the locations that "experts in the field" would pay attention to.
  • the "expert in the field” may also be the user himself.
  • in this case, the inferred result represents the correct observation result of the user when awake and not fatigued.
  • the machine learning model can thus still point out the "key points" that the user missed during observation, that is, the places that "experts in the field" would pay attention to.
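  • a minimal training sketch consistent with the above, assuming each training picture is paired with an expert gaze heat map whose thresholded version serves as the annotated attention-area mask; the tiny fully convolutional network and all hyperparameters are illustrative, and PyTorch is used only as an example framework.

```python
import torch
import torch.nn as nn

# Illustrative per-pixel "attention area" model: input is a one-channel
# picture, output is a one-channel logit map of the same spatial size.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(picture, heat_map, threshold=0.5):
    """picture: (B, 1, H, W) float tensor; heat_map: (B, 1, H, W) in [0, 1].
    The thresholded expert heat map plays the role of the annotation result."""
    target = (heat_map > threshold).float()
    optimizer.zero_grad()
    loss = loss_fn(model(picture), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```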
  • in step 130, the target area on the target picture is determined according to each attention position and each attention area.
  • for example, an attention area in which the user's attention positions fall may be determined as the target area.
  • the target area is not only the important information required by the user, but also the important area screened out by the artificial intelligence method.
  • the target area can be used as important information for further processing such as face recognition, target tracking, and medical diagnosis.
  • step 130 may be performed through the embodiment in FIG. 2.
  • FIG. 2 shows a flowchart of some embodiments of step 130 in FIG. 1.
  • step 130 includes: step 1310, determining the degree of location attention; and step 1320, determining the target area.
  • in step 1310, the user's position attention degree for each attention position is determined.
  • the user's gaze time for each attention location can be determined according to the motion information to determine the location attention degree.
  • the degree of attention of the corresponding attention position can also be determined according to other factors such as the change of the pupil and the rotation of the eyeball.
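  • for illustration, position attention degrees could be derived from the fixations produced by the earlier sketch, using gaze time alone; normalizing against the longest fixation is an assumption made here for readability.

```python
def position_attention(fixations):
    """Map each attention position (x, y) to a position attention degree in
    [0, 1], taken as its gaze duration normalized by the longest fixation.
    Pupil changes or eyeball rotation could be folded in as extra weights."""
    longest = max((duration for _, _, duration in fixations), default=0.0)
    if longest == 0.0:
        return {}
    return {(x, y): duration / longest for x, y, duration in fixations}
```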
  • in step 1320, the target area is determined among the attention areas according to the attention degree of each position.
  • for example, when the attention degree of the positions corresponding to an attention area is greater than the threshold, the attention area may be determined as the target area.
  • in this way, the target area is both the important information required by the user and an important area screened out by the artificial intelligence method. The target area can be used as important information for further processing such as face recognition, target tracking, and medical diagnosis.
  • step 1320 may be performed through the embodiment in FIG. 3.
  • FIG. 3 shows a flowchart of some embodiments of step 1320 in FIG. 2.
  • step 1320 includes: step 310, determining the area attention degree; and step 320, determining the target area.
  • in step 310, the area attention degree of an attention area is determined according to the position attention degree of each attention position contained in the attention area. For example, in a case where the overlapping area of an attention position and the attention area is greater than an area threshold, the attention area is determined to contain that attention position.
  • in step 320, in a case where the area attention degree is less than the threshold, the corresponding attention area is determined as the target area.
  • the target area determined in this way is an area that the artificial intelligence method considers important for the user's needs, but to which the user has not paid enough attention.
  • the target area can be provided to the user as important information for further processing such as face recognition, target tracking, and medical diagnosis, thereby improving the accuracy and efficiency of image processing.
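  • the selection in steps 310 and 320 could be sketched as follows; attention areas are assumed to be axis-aligned (x0, y0, x1, y1) boxes, and a point-in-box test stands in for the overlap-area test, both simplifications made for illustration.

```python
def region_attention(region, attn_by_position):
    """Sum the position attention degrees of all attention positions that
    fall inside the region, given as an (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = region
    return sum(a for (x, y), a in attn_by_position.items()
               if x0 <= x <= x1 and y0 <= y <= y1)

def select_target_areas(regions, attn_by_position, attention_threshold=0.5):
    """Return the machine regions of interest whose area attention degree is
    below the threshold, i.e. the areas the user likely under-examined."""
    return [r for r in regions
            if region_attention(r, attn_by_position) < attention_threshold]
```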
  • the target picture is a surveillance picture
  • the attention position is a position that the monitor pays attention to
  • the attention area is a suspected face area.
  • the target picture is a medical imaging picture
  • the location of interest is the location of the diagnoser's attention
  • the area of interest is a suspected lesion area.
  • the embodiment of FIG. 4 can be used to process medical imaging pictures for computer-aided detection.
  • FIG. 4 shows a flowchart of other embodiments of the method for determining an image target area of the present disclosure.
  • the method includes: step 410, inputting medical imaging pictures; step 420, performing eye tracking; step 430, performing artificial intelligence detection; step 440, obtaining a heat map; step 450, determining the suspected lesion areas; and step 460, determining the prompt area.
  • in step 410, the medical imaging pictures are input into the system so that the imaging physician can read them through the display device and the computer can perform the corresponding processing.
  • medical imaging pictures may be images generated by magnetic resonance (MR) equipment, CT (Computed Tomography) equipment, DR (Digital Radiography) equipment, ultrasound equipment, X-ray machines, and the like.
  • in step 420, the eye tracker is used to record the trajectory of the doctor's gaze points (Gaze Points) during the entire image reading process; according to this trajectory, the doctor's degree of attention at each point or in a certain area is determined. For example, the degree of attention can be determined based on conscious gaze time.
  • the gaze point is the basic measurement unit of the eye tracker: one gaze point corresponds to one raw sample captured by the eye tracker.
  • in step 440, a heat map is generated according to the degrees of attention. For example, in the process of reading a medical image, the longer the doctor looks at a certain position on the image, the darker the color of the corresponding area in the heat map.
  • the heat map can then be divided into multiple doctor attention areas according to its color depth (for example, by clustering), as in the sketch below.
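  • a sketch of this heat map branch, under the same assumptions as the earlier sketches: gaze duration is accumulated at each fixation and blurred into a heat map, and connected components of the thresholded map stand in for the clustering step. NumPy and SciPy are used as example libraries; shapes and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def heat_map_from_fixations(fixations, shape, sigma=15.0):
    """Accumulate gaze duration at each fixation centroid and blur it, so a
    longer gaze produces a 'deeper' (higher-valued) region of the heat map."""
    heat = np.zeros(shape, dtype=np.float64)
    for x, y, duration in fixations:
        row, col = int(round(y)), int(round(x))
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            heat[row, col] += duration
    return ndimage.gaussian_filter(heat, sigma)

def attention_regions(heat, threshold_ratio=0.3):
    """Split the heat map into attention areas by thresholding its intensity
    and labelling connected components; returns (x0, y0, x1, y1) boxes."""
    mask = heat > threshold_ratio * heat.max()
    labels, _ = ndimage.label(mask)
    boxes = []
    for rows, cols in ndimage.find_objects(labels):
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes
```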
  • in step 430, an artificial intelligence method (such as a neural network) is used to process the medical imaging picture, and one or more machine attention areas (regions of interest) are extracted.
  • for example, a neural network model can be trained to recognize disease-related areas in order to process medical images. Step 430 need not be executed in any fixed order relative to steps 420 and 440.
  • in step 450, each machine attention area is determined as a suspected lesion area.
  • in step 460, the output areas of the two systems (the eye tracking system and the artificial intelligence system) are compared.
  • for example, each doctor attention area can be matched with a machine attention area according to location information (for example, based on the overlapping area of the regions).
  • the attention degree of a machine attention area can then be determined from the attention degree of the doctor attention area matched with it, as in the sketch below.
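  • the comparison in step 460 could be sketched as a simple intersection-over-union matching between the two sets of boxes; the IoU criterion and all thresholds are assumptions chosen for illustration, standing in for the overlap-based matching described above.

```python
def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def areas_to_prompt(machine_areas, doctor_areas, doctor_attention,
                    min_iou=0.3, attention_threshold=0.5):
    """For each machine attention area, find the best-overlapping doctor
    attention area; prompt the physician when nothing matches, or when the
    matched area's attention degree is below the attention threshold."""
    prompts = []
    for m in machine_areas:
        best = max(doctor_areas, key=lambda d: iou(m, d), default=None)
        if (best is None or iou(m, best) < min_iou
                or doctor_attention.get(best, 0.0) < attention_threshold):
            prompts.append(m)
    return prompts
```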
  • in a case where this attention degree is less than the attention threshold, the machine attention area is prompted to the physician, for example by displaying eye-catching marks, pop-up floating windows, or sound prompts in the corresponding area of the medical imaging picture.
  • the attention threshold may be set for each machine attention area according to at least one of the anatomical structure of the area, the characteristics of the disease, and the physician's scanning habits (which can be extracted from training data).
  • for example, suppose doctor attention areas 1 to 4 are matched with machine attention areas 1 to 4, respectively, and the attention degree of doctor attention area 4, corresponding to machine attention area 4, is less than the attention threshold. In this case, the physician can be prompted to focus on machine attention area 4, improving accuracy and efficiency.
  • a physician performs a diagnosis of lung nodules.
  • suppose the doctor found 4 lung nodules (as determined through eye tracking), while artificial intelligence detection found 5 lung nodules in the medical image.
  • Four of the lung nodule areas discovered by artificial intelligence are the same as the lung nodule areas discovered by the doctor.
  • in this case, the doctor can be prompted to read only the one remaining lung nodule area discovered by artificial intelligence.
  • the physician thus does not have to review all five lung nodules discovered by artificial intelligence, which greatly reduces the image reading time and improves the processing efficiency and accuracy of the system.
  • the important information in the picture is determined by combining the attention position obtained according to the user's eye movements and the attention area extracted by the machine learning model. In this way, it is possible to combine the actual attention needs of users and the high performance of artificial intelligence to improve the accuracy and efficiency of image processing.
  • FIG. 5 shows a block diagram of some embodiments of an apparatus for determining an image target area of the present disclosure.
  • the device 5 for determining an image target area includes a position determining unit 51, an extraction unit 52, and an area determining unit 53.
  • the position determining unit 51 determines each attention position of the user on the target picture according to the eye movement information of the user in the process of observing the target picture.
  • the movement information of the eyeball includes at least one of the movement of the eyeball relative to the head or the position of the eyeball.
  • the position determining unit 51 determines each gaze point of the user on the target picture according to the motion information; determines each attention position according to the trajectory formed by each gaze point.
  • the extraction unit 52 uses a machine learning model to extract each region of interest on the target picture.
  • the machine learning model is trained by the following steps: obtaining, as attention information, at least one of the user's attention positions and the corresponding position attention degrees on each training picture, the training pictures being of the same type as the target picture; and training the machine learning model by taking each training picture and its attention information as input and using each attention area of each training picture as the annotation result.
  • the area determining unit 53 determines the target area on the target picture according to each attention position and each attention area.
  • the area determining unit 53 determines the user's degree of attention to each location of interest according to the motion information, and determines the target area in each area of interest according to the degree of attention of each location.
  • the area determining unit 53 determines the area attention degree of an attention area according to the position attention degree of each attention position contained in the attention area; in a case where the area attention degree is less than the threshold, the corresponding attention area is determined as the target area.
  • the area determining unit 53 determines the user's gaze time for each attention position according to the motion information to determine the position attention degree.
  • the target picture is a medical imaging picture
  • the location of interest is the location of the diagnoser's attention
  • the area of interest is a suspected lesion area.
  • the important information in the picture is determined by combining the attention position obtained according to the user's eye movements and the attention area extracted by the machine learning model. In this way, it is possible to combine the actual attention needs of users and the high performance of artificial intelligence to improve the accuracy and efficiency of image processing.
  • FIG. 6 shows a block diagram of other embodiments of the device for determining an image target area of the present disclosure.
  • the device 6 for determining the image target area of this embodiment includes: a memory 61 and a processor 62 coupled to the memory 61, the processor 62 being configured to execute, based on instructions stored in the memory 61, the method for determining an image target area in any one of the above embodiments.
  • the memory 61 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory stores an operating system, application programs, boot loader, database, and other programs, for example.
  • FIG. 7 shows a block diagram of still other embodiments of the apparatus for determining an image target area of the present disclosure.
  • the device 7 for determining the image target area of this embodiment includes a memory 710 and a processor 720 coupled to the memory 710. The processor 720 is configured to execute, based on instructions stored in the memory 710, the method for determining the image target area in any one of the above embodiments.
  • the memory 710 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory stores, for example, an operating system, an application program, a boot loader, and other programs.
  • the device 7 for determining the image target area may also include an input/output interface 730, a network interface 740, a storage interface 750, and the like. These interfaces 730, 740, and 750, as well as the memory 710 and the processor 720, may be connected by a bus 760, for example.
  • the input and output interface 730 provides a connection interface for input and output devices such as a display, a mouse, a keyboard, and a touch screen.
  • the network interface 740 provides a connection interface for various networked devices.
  • the storage interface 750 provides a connection interface for external storage devices such as SD cards and USB flash drives.
  • FIG. 8 shows a block diagram of some embodiments of a system for determining an image target area of the present disclosure.
  • the image target area determination system 8 includes the image target area determination device 81 and the eye tracker 82 in any of the above embodiments.
  • the eye tracker 82 is used to obtain the eye movement information of the user in the process of observing the target picture.
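  • for illustration, the overall flow of such a system could be wired together from the earlier sketches; all names and the stub data below are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical end-to-end flow combining the earlier sketches:
# eye tracker samples -> fixations -> position attention + heat map regions,
# machine model regions -> IoU matching -> areas to prompt to the user.
samples = [GazePoint(100.0, 120.0, t / 60.0) for t in range(30)]   # stub data
fixations = detect_attention_positions(samples)
attn = position_attention(fixations)
heat = heat_map_from_fixations(fixations, shape=(512, 512))
doctor_areas = attention_regions(heat)
doctor_attn = {r: region_attention(r, attn) for r in doctor_areas}
machine_areas = [(80, 100, 140, 160), (300, 300, 360, 380)]        # stub model output
print(areas_to_prompt(machine_areas, doctor_areas, doctor_attn))
```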
  • the embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • the method and system of the present disclosure may be implemented in many ways.
  • the method and system of the present disclosure can be implemented by software, hardware, firmware or any combination of software, hardware, and firmware.
  • the above-mentioned order of the steps for the method is only for illustration, and the steps of the method of the present disclosure are not limited to the order specifically described above, unless specifically stated otherwise.
  • the present disclosure can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Public Health (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Epidemiology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method, device, and system for determining a target region of an image, relating to the technical field of image processing. The method comprises: determining the user's respective attention positions on a target picture according to the eye movement information of the user while observing the target picture; extracting respective regions of interest from the target picture by means of a machine learning model; and determining a target region of the target picture according to the respective attention positions and the respective regions of interest.
PCT/CN2020/075056 2019-11-29 2020-02-13 Method, device and system for determining a target region of an image WO2021103316A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911195964.5A CN112885435B (zh) 2019-11-29 2019-11-29 Method, device and system for determining an image target area
CN201911195964.5 2019-11-29

Publications (1)

Publication Number Publication Date
WO2021103316A1 (fr)

Family

ID=76038289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/075056 WO2021103316A1 (fr) 2019-11-29 2020-02-13 Procédé, dispositif et système de détermination de région cible d'une image

Country Status (2)

Country Link
CN (1) CN112885435B (fr)
WO (1) WO2021103316A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113485555B (zh) * 2021-07-14 2024-04-26 上海联影智能医疗科技有限公司 Medical image reading method, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426399A (zh) * 2015-10-29 2016-03-23 天津大学 Interactive image retrieval method based on eye movement for extracting image regions of interest
CN105677024A (zh) * 2015-12-31 2016-06-15 北京元心科技有限公司 Eye movement detection and tracking method, device, and use thereof
CN107656613A (zh) * 2017-09-08 2018-02-02 国网山东省电力公司电力科学研究院 Human-computer interaction system based on eye tracking and working method thereof
CN109887583A (zh) * 2019-03-11 2019-06-14 数坤(北京)网络科技有限公司 Data acquisition method/system based on doctor behavior, and medical image processing system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714153A (zh) * 2009-11-16 2010-05-26 杭州电子科技大学 Interactive mammography image retrieval method based on visual perception
CN102521595B (zh) * 2011-12-07 2014-01-15 中南大学 Method for extracting image regions of interest based on eye movement data and low-level features
KR20160071242A (ko) * 2014-12-11 2016-06-21 삼성전자주식회사 Computer-aided diagnosis apparatus and method based on eye movement
EP3367879A4 (fr) * 2015-10-30 2019-06-12 University of Massachusetts System and methods for assessing images and other subjects
CN106095089A (zh) * 2016-06-06 2016-11-09 郑黎光 Method for obtaining information about a target of interest
CN107563123A (zh) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and device for annotating medical images
US10593118B2 (en) * 2018-05-04 2020-03-17 International Business Machines Corporation Learning opportunity based display generation and presentation
CN109886780B (zh) * 2019-01-31 2022-04-08 苏州经贸职业技术学院 Commodity target detection method and device based on eye tracking

Also Published As

Publication number Publication date
CN112885435B (zh) 2023-04-21
CN112885435A (zh) 2021-06-01

Similar Documents

Publication Publication Date Title
Münzer et al. Content-based processing and analysis of endoscopic images and videos: A survey
Stember et al. Eye tracking for deep learning segmentation using convolutional neural networks
US8165368B2 (en) Systems and methods for machine learning based hanging protocols
JP5222082B2 (ja) 情報処理装置およびその制御方法、データ処理システム
Chadebecq et al. Computer vision in the surgical operating room
US9295372B2 (en) Marking and tracking an area of interest during endoscopy
US10248756B2 (en) Anatomically specific movie driven medical image review
JP6532287B2 (ja) 医療診断支援装置、情報処理方法及びプログラム
JP6230708B2 (ja) 撮像データセットの間の所見のマッチング
US10083278B2 (en) Method and system for displaying a timing signal for surgical instrument insertion in surgical procedures
JP2006034585A (ja) 画像表示装置、画像表示方法およびそのプログラム
JP6253085B2 (ja) X線動画像解析装置、x線動画像解析プログラム及びx線動画像撮像装置
Phillips et al. Method for tracking eye gaze during interpretation of endoluminal 3D CT colonography: technical description and proposed metrics for analysis
US10726548B2 (en) Confidence determination in a medical imaging video clip measurement based upon video clip image quality
JP2022548237A (ja) Vats及び低侵襲手術での術中仮想アノテーションのためのインタラクティブ内視鏡検査
KR102146672B1 (ko) 수술결과에 대한 피드백 제공방법 및 프로그램
JP5539478B2 (ja) 情報処理装置および情報処理方法
Jiang et al. Video processing to locate the tooltip position in surgical eye–hand coordination tasks
CN113485555B (zh) Medical image reading method, electronic device and storage medium
WO2021103316A1 (fr) Method, device and system for determining a target region of an image
WO2019146358A1 (fr) Learning system, method and program
Du-Crow Computer Aided Detection in Mammography
Atkins et al. Eye monitoring applications in medicine
US20230145531A1 (en) Systems and methods for registering visual representations of a surgical space
US20240087304A1 (en) System for medical data analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893335

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/10/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20893335

Country of ref document: EP

Kind code of ref document: A1