CN116364284A - Image display adjusting method, system, electronic device and storage medium - Google Patents


Info

Publication number: CN116364284A
Application number: CN202310239466.6A
Authority: CN (China)
Prior art keywords: image, eye, pupil, data, eyeball
Legal status: Pending
Other languages: Chinese (zh)
Inventor: name withheld at the inventor's request (请求不公布姓名)
Current Assignee: Shanghai Microport Medbot Group Co Ltd
Original Assignee: Shanghai Microport Medbot Group Co Ltd
Application filed by Shanghai Microport Medbot Group Co Ltd
Priority: CN202310239466.6A


Classifications

    • A61B 34/30 Surgical robots (under A61B 34/00 Computer-aided surgery; manipulators or robots specially adapted for use in surgery)
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • A61B 2034/301 Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/60 Analysis of geometric attributes
    • G06T 2207/10048 Infrared image (image acquisition modality)
    • G06T 2207/30041 Eye; Retina; Ophthalmic (subject of image)
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G16H 50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices or individual health risk assessment


Abstract

The invention provides an image display adjustment method, system, electronic device and storage medium. The method comprises the following steps: acquiring eyeball condition data of an operator and operation condition data related to the surgery; searching a pre-established human eye health comfort index database, according to the eyeball condition data and the operation condition data, for an eyeball health comfort parameter index matched with them; acquiring target visual parameters according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data; and adjusting the visual effect of the in-vivo scene image to be displayed according to the target visual parameters. The invention can automatically adjust the display effect of the in-vivo scene image, thereby relieving the doctor's eye fatigue, protecting the doctor's eyes, and making it easier for the doctor to perform the operation.

Description

Image display adjusting method, system, electronic device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image display adjustment method, an image display adjustment system, an electronic device, and a storage medium.
Background
Surgical robots are designed to perform complex surgical operations accurately in a minimally invasive manner. Where traditional surgery faces various limitations, surgical robots have been developed to take its place: they break through the limitations of the human eye, using stereoscopic imaging technology to present the internal organs clearly to the operator. In areas a surgeon's hand cannot reach, the robot can perform 360-degree rotation, translation, pitching, and clamping, while avoiding tremor. Surgery performed with a surgical robot has the advantages of a small wound, little bleeding, and quick recovery; it can greatly shorten the patient's postoperative hospital stay and significantly improve postoperative survival and recovery rates. As high-end medical devices, surgical robots are popular with many doctors and patients and have been widely used in various clinical operations.
The endoscope robot is one of the most widely applied surgical robots. At present, the images collected by the endoscope of the endoscope robot are adjusted manually by the doctor, who operates according to his or her own visual comfort; the visual effect cannot be adjusted automatically according to the degree of eye fatigue during the operation, which can cause the following problems:
1. a long operation under a single visual effect easily causes eye fatigue, dizziness or eye damage for the doctor, and is not conducive to fine surgical manipulation;
2. existing vision adjustment must be performed manually by a nurse standing by or by the doctor personally; reaching the ideal effect takes too long and easily disturbs the rhythm of the operation;
3. the ideal viewing effect differs across organs and sites of the human body, each calling for different color vividness and contrast, and manual adjustment adapts poorly to the changing scene, which is not conducive to viewing.
It should be noted that the information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to provide an image display adjustment method, system, electronic device and storage medium, which can automatically adjust the display effect of an in-vivo scene image, thereby relieving the doctor's eye fatigue, protecting the doctor's eyes, and making it easier for the doctor to perform the operation.
In order to achieve the above object, the present invention provides an image display adjustment method applied to a surgical robot system including an endoscope for acquiring an in-vivo scene image, the image display adjustment method comprising:
Acquiring eyeball condition data of an operator and operation condition data related to an operation;
according to the eyeball condition data and the operation condition data, searching eyeball health comfort parameter indexes matched with the eyeball condition data and the operation condition data in a pre-established human eye health comfort index database;
acquiring target visual parameters according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data;
and adjusting the visual effect of the in-vivo scene image to be displayed according to the target visual parameters.
Optionally, the acquiring eyeball condition data of the operator includes:
and acquiring eyeball fatigue degree data of the operator and eyeball use duration data of the operator in operation.
Optionally, the acquiring the eyeball fatigue degree data of the operator includes:
acquiring a first image related to the eyes of the operator acquired in a latest preset time period including the current acquisition time;
for each first image acquired in the preset time period, identifying the eye characteristics of the first image to acquire eye characteristic data, and acquiring eyeball fatigue degree grade corresponding to the first image according to the acquired eye characteristic data;
And taking the eyeball fatigue degree grade corresponding to the first image with the highest eyeball fatigue degree grade in the preset time period as the eyeball fatigue degree data of the operator.
Optionally, the eyeball fatigue degree data of the operator includes left eye eyeball fatigue degree data and right eye eyeball fatigue degree data of the operator, and the eye feature data includes left eye feature data and/or right eye feature data;
the step of obtaining the eyeball fatigue degree grade corresponding to the first image according to the obtained eye feature data comprises the following steps:
acquiring a left eye eyestrain degree grade corresponding to the first image according to the acquired left eye feature data; and/or
Acquiring a right eye eyestrain degree grade corresponding to the first image according to the acquired right eye feature data;
the step of using the eyeball fatigue level corresponding to the first image with the highest eyeball fatigue level in the preset time period as the eyeball fatigue level data of the operator includes:
and taking the left eye eyeball fatigue degree grade corresponding to the first image with the highest left eye eyeball fatigue degree grade in the preset time period as left eye eyeball fatigue degree data of the operator, and/or taking the right eye eyeball fatigue degree grade corresponding to the first image with the highest right eye eyeball fatigue degree grade as right eye eyeball fatigue degree data of the operator.
Optionally, the first image is a facial image of the operator, and the identifying the eye feature of the first image to obtain eye feature data includes:
identifying eye features of the first image to identify left eye feature points and/or right eye feature points, wherein the left eye feature points comprise two first left eye feature points related to the length of the left eye and four second left eye feature points related to the width of the left eye, and the right eye feature points comprise two first right eye feature points related to the length of the right eye and four second right eye feature points related to the width of the right eye;
the step of obtaining the left eye eyestrain level corresponding to the first image according to the obtained left eye feature data comprises the following steps:
calculating the length of the left eye according to the two first left eye characteristic points;
calculating the left eye width according to the four second left eye characteristic points;
calculating a left eye width-to-length ratio according to the left eye length and the left eye width;
judging whether the left eye width-to-length ratio is smaller than a first threshold value or not;
if yes, determining a left eye fatigue degree grade corresponding to the first image according to the left eye width-to-length ratio and a preset first eye fatigue degree grade classification rule; and/or
The step of obtaining the right eye eyestrain level corresponding to the first image according to the obtained right eye feature data comprises the following steps:
calculating the length of the right eye according to the two first right eye characteristic points;
calculating the right eye width according to the four second right eye feature points;
calculating a right eye width-to-length ratio according to the right eye length and the right eye width;
judging whether the width-to-length ratio of the right eye is smaller than the first threshold value;
if yes, determining the right eye fatigue degree grade corresponding to the first image according to the right eye width-to-length ratio and a preset first eye fatigue degree grade classification rule.
Optionally, the first image is a facial image of the operator, and the identifying the eye feature of the first image to obtain eye feature data includes:
identifying eye features of the first image to identify a left eye region and/or a right eye region;
acquiring the total number of pixel points of the left eye area in the transverse direction and the total number of pixel points of the left eye area in the longitudinal direction; and/or
Acquiring the total number of pixel points of the right eye area in the transverse direction and the total number of pixel points of the right eye area in the longitudinal direction;
The step of obtaining the left eye eyestrain level corresponding to the first image according to the obtained left eye feature data comprises the following steps:
calculating a left eye aspect ratio according to the ratio between the total number of pixels of the left eye area in the longitudinal direction and the total number of pixels of the left eye area in the transverse direction;
determining whether the left eye aspect ratio is less than a second threshold;
if yes, determining a left eye eyestrain level corresponding to the first image according to the left eye aspect ratio and a preset second eyestrain level classification rule; and/or
The step of obtaining the right eye eyestrain level corresponding to the first image according to the obtained right eye feature data comprises the following steps:
calculating a right-eye aspect ratio according to the ratio between the total number of pixels of the right-eye area in the longitudinal direction and the total number of pixels of the right-eye area in the transverse direction;
determining whether the right eye aspect ratio is less than the second threshold;
if yes, determining the right eye fatigue degree grade corresponding to the first image according to the right eye aspect ratio and a preset second eye fatigue degree grade dividing rule.
Optionally, the first image includes a first pupil image and a second pupil image that are acquired simultaneously, where the first pupil image is an infrared image and the second pupil image is a visible light image;
The identifying the eye feature of the first image to obtain eye feature data includes:
performing differential processing on the first pupil image and the second pupil image which are acquired simultaneously to acquire a pupil differential image;
identifying the pupil difference image to identify a left eye pupil and/or a right eye pupil;
acquiring left-eye pupil aspect ratio data according to the identified left-eye pupil, and/or acquiring right-eye pupil aspect ratio data according to the identified right-eye pupil;
the step of obtaining the left eye eyestrain level corresponding to the first image according to the obtained left eye feature data comprises the following steps:
calculating a left eye closing degree value according to the left eye pupil height-width ratio data and the pre-stored left eye maximum pupil height-width ratio data;
judging whether the left eye closing degree value is larger than a third threshold value or not;
if yes, determining a left eye eyestrain level corresponding to the first image according to the left eye closure level value and a preset third eyestrain level classification rule; and/or
The step of obtaining the right eye eyestrain level corresponding to the first image according to the obtained right eye feature data comprises the following steps:
Calculating a right eye closing degree value according to the right eye pupil height-width ratio data and the pre-stored right eye maximum pupil height-width ratio data;
judging whether the right eye closing degree value is larger than the third threshold value or not;
if yes, determining the right eye eyestrain degree grade corresponding to the first image according to the right eye closure degree value and a preset third eyestrain degree grade classification rule.
Optionally, the acquiring the eyeball use duration data of the operator in operation includes:
acquiring pupil videos and view field videos of an operator, wherein the pupil videos and the view field videos are acquired in a time period from the beginning of an operation to the current moment, and pupil images in the pupil videos correspond to view field images in the view field videos one by one;
for each frame of pupil image in the pupil video, performing the following operations:
according to the pupil image, acquiring the position information of the left eye pupil center point under the pupil image coordinate system and/or the position information of the right eye pupil center point under the pupil image coordinate system;
registering the pupil image and the view field image corresponding to the pupil image to obtain a space transformation matrix between the pupil image and the view field image;
Superposing the left eye pupil center point and/or the right eye pupil center point on the view field image according to the space transformation matrix and the position information of the left eye pupil center point and/or the right eye pupil center point under the pupil image coordinate system; judging whether a first gazing point corresponding to the left eye pupil center point and/or a second gazing point corresponding to the right eye pupil center point exist in a view field area in the view field image according to the superposition result, if the first gazing point exists, judging that the left eye of the operator is in a use state at the acquisition time of the pupil image, and/or if the second gazing point exists, judging that the right eye of the operator is in a use state at the acquisition time of the pupil image;
the left eye use time length of the operator is calculated according to the acquisition time of pupil images of each frame, which are judged to be in use state of the left eye, and/or the right eye use time length of the operator is calculated according to the acquisition time of pupil images of each frame, which are judged to be in use state of the right eye.
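As an illustration of the per-frame bookkeeping this implies, the following sketch accumulates one eye's use duration from per-frame pupil centers; the registration step that yields the spatial transformation matrix and the field-of-view test are assumed to be available elsewhere, and all names here are hypothetical:

```python
import numpy as np

def eye_use_duration(pupil_centers, frame_interval_s, transform, in_view_region):
    """pupil_centers: per-frame (x, y) pupil center point in pupil-image
    coordinates, or None if no pupil was detected in that frame.
    transform: 3x3 homography from pupil-image to view-field-image
    coordinates (obtained by registering the two images).
    in_view_region: callable (x, y) -> bool, True if the projected
    gazing point lies inside the view field area."""
    duration = 0.0
    for center in pupil_centers:
        if center is None:
            continue
        p = transform @ np.array([center[0], center[1], 1.0])
        x, y = p[0] / p[2], p[1] / p[2]   # homogeneous -> image coordinates
        if in_view_region(x, y):          # gazing point exists in the view field
            duration += frame_interval_s  # eye counted as "in use" this frame
    return duration
```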
Optionally, the acquiring surgical condition data related to surgery includes:
acquiring surgical scene data and surgical duration data associated with the surgery.
Optionally, the acquiring surgical scene data includes:
and acquiring in-vivo scene images, and identifying the in-vivo scene images to acquire surgical position information and scene color information, thereby acquiring surgical scene data.
Optionally, the acquiring the operation duration data includes:
acquiring a surgical scene video acquired in a time period from the beginning of a surgery to the current moment;
and inputting the surgical scene video into a pre-trained surgical procedure identification model for time sequence detection so as to acquire surgical duration data.
Optionally, the target visual parameter includes at least one of target brightness, target contrast, target resolution, and target saturation.
Optionally, the image display adjustment method further includes:
comparing the adjustment result of the display effect of the in-vivo scene image based on the target visual parameter with the image display effect of the corresponding standard surgical scene in the pre-established standard surgical scene image display effect database to judge whether the current image display effect accords with the standard surgical scene;
if not, continuing to adjust the display effect of the in-vivo scene image based on the image display effect of the corresponding standard operation scene.
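This feedback check can be sketched as a small closed loop; the per-scene standard record, the tolerance, and the step factor below are illustrative assumptions rather than values fixed by the method:

```python
def conforms_to_standard(current_params, standard_params, tolerance=0.05):
    """True if every display parameter is within tolerance of the
    corresponding standard surgical scene value."""
    return all(abs(current_params[k] - standard_params[k]) <= tolerance
               for k in standard_params)

def step_toward_standard(current_params, standard_params, step=0.5):
    """If the check fails, move each parameter part of the way toward
    the standard value and re-check on the next pass."""
    return {k: v + step * (standard_params[k] - v)
            for k, v in current_params.items() if k in standard_params}
```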
To achieve the above object, the present invention also provides an image display adjustment system including a controller configured to implement the above-described image display adjustment method.
Optionally, the image display adjustment system further comprises a vision acquisition module in communication with the controller, the vision acquisition module comprising a first vision acquisition unit for acquiring a first image related to the eyes of the operator and a second vision acquisition unit for acquiring a field of view image.
In order to achieve the above object, the present invention further provides an electronic device, including a processor and a memory, wherein the memory stores a computer program, and the computer program, when executed by the processor, implements the image display adjustment method described above.
To achieve the above object, the present invention also provides a readable storage medium having stored therein a computer program which, when executed by a processor, implements the above-described image display adjustment method.
Compared with the prior art, the image display adjusting method, the system, the electronic equipment and the storage medium provided by the invention have the following advantages:
The image display adjusting method provided by the invention comprises the steps of firstly acquiring eyeball condition data of an operator and operation condition data related to an operation; then, according to the eyeball condition data and the operation condition data, searching a pre-established human eye health comfort index database for an eyeball health comfort parameter index matched with the eyeball condition data and the operation condition data; acquiring target visual parameters according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data; and finally adjusting the visual effect of the in-vivo scene image to be displayed according to the target visual parameters. In this way, big data are used to find the eyeball health comfort parameter index matched with the operator's eye condition and the operation being performed; the target visual parameters satisfying that index are calculated based on the eyeball health comfort parameter index, the eyeball condition data and the operation condition data; and the visual effect (i.e., the display effect) of the in-vivo scene image acquired by the endoscope is adjusted based on the target visual parameters, thereby relieving the doctor's eye fatigue, protecting the doctor's eyes, and making it easier for the doctor to perform the operation.
Because the image display adjusting system, the electronic device and the storage medium provided by the invention belong to the same inventive concept as the image display adjusting method provided by the invention, the image display adjusting system, the electronic device and the storage medium provided by the invention have all the advantages of the image display adjusting method provided by the invention, and therefore the beneficial effects of the image display adjusting system, the electronic device and the storage medium provided by the invention are not repeated one by one.
Drawings
Fig. 1 is a schematic view of an application scenario of a surgical robot system according to an embodiment;
fig. 2 is a schematic structural diagram of an image trolley according to an embodiment;
FIG. 3 is a schematic diagram of a doctor console according to an embodiment;
FIG. 4 is a flowchart of an image display adjustment method according to an embodiment of the present invention;
FIG. 5a is a schematic illustration of eye feature points with the eyes open;
FIG. 5b is a schematic illustration of eye feature points when the eye is closed;
FIG. 6 is a flowchart for determining the level of eyestrain according to the first embodiment of the present invention;
FIG. 7 is a flowchart for determining the level of eyestrain according to a second embodiment of the present invention;
FIG. 8 is a flowchart for determining the level of eyestrain according to a third embodiment of the present invention;
FIG. 9 is a training flowchart of a scene image recognition model according to an embodiment of the present invention;
FIG. 10 is a flowchart of predicting the duration of a surgical procedure using a surgical procedure identification model according to an embodiment of the present invention;
FIG. 11 is a flowchart for adjusting the image display effect according to the first embodiment of the present invention;
FIG. 12 is a flowchart for adjusting the image display effect according to a second embodiment of the present invention;
FIG. 13 is a block diagram of an image display adjustment system according to an embodiment of the present invention;
fig. 14 is a schematic diagram of a connection relationship between a first vision collecting unit and a controller according to an embodiment of the present invention;
fig. 15 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The image display adjusting method, system, electronic device and storage medium according to the present invention are further described in detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a greatly simplified form and all use imprecise proportions, merely to facilitate a convenient and clear description of the embodiments of the invention. The structures, proportions and sizes shown in the drawings are presented only to aid in understanding and reading this disclosure and are not intended to limit the scope of the invention, which is defined by the appended claims; any structural modification, change of proportion or adjustment of size that does not affect the effects and objectives attainable by the invention shall still fall within the scope covered by the technical content disclosed herein.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Furthermore, in the description herein, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples described in this specification and the features of the various embodiments or examples may be combined and combined by those skilled in the art without contradiction.
The invention provides an image display adjusting method, an image display adjusting system, electronic equipment and a storage medium, which can automatically adjust the display effect of in-vivo scene images, thereby relieving the eyeball fatigue of doctors, protecting the eyes of the doctors and being more beneficial to the doctors to perform operation.
It should be noted that the image display adjustment method provided by the embodiment of the present invention may be applied to the image display adjustment system and the electronic device provided by the embodiment of the present invention, where the electronic device may be a personal computer, a mobile terminal, etc., and the mobile terminal may be a hardware device with various operating systems, such as a mobile phone, a tablet computer, etc. It should be further noted that, for convenience of distinction, a "first feature point" of the left eye is denoted by a "first left eye feature point", a "second feature point" of the left eye is denoted by a "second left eye feature point", a "first feature point" of the right eye is denoted by a "first right eye feature point", and a "second feature point" of the right eye is denoted by a "second right eye feature point", as will be understood by those skilled in the art.
To realize the above idea, the invention provides an image display adjusting method applied to a surgical robot system. For ease of understanding, the surgical robot system will be described before the image display adjustment method provided in this embodiment. Referring to fig. 1, a schematic application scenario diagram of a surgical robot system according to an embodiment is shown. As shown in fig. 1, the surgical robot system includes a doctor console 100, a patient trolley 200, and an image trolley 300 that are communicatively connected.
With continued reference to fig. 1, as shown in fig. 1, the patient trolley 200 includes a first base 210 and at least one mechanical arm 220 mounted on the first base 210, wherein a surgical instrument 400 is mounted on the distal end of at least one mechanical arm 220, and an endoscope 500 (see fig. 2) is mounted on the distal end of at least one mechanical arm 220. It should be noted that, as will be appreciated by those skilled in the art, when only one mechanical arm 220 is provided on the first base 210, the surgical instrument 400 and the endoscope 500 may be mounted on the same mechanical arm 220; when a plurality of mechanical arms 220 are provided on the first base 210, the surgical instrument 400 and the endoscope 500 may be mounted on different mechanical arms 220.
With continued reference to fig. 1 and 2, fig. 2 schematically illustrates a structural diagram of an image trolley according to an embodiment. As shown in fig. 1 and 2, the image trolley 300 includes a first display unit 310 (for convenience of distinction, the display part on the image trolley 300 is denoted by the first display unit 310, and the display part on the doctor console 100 is denoted by the second display unit 120). In use, the surgical instrument 400 and the endoscope 500 may extend into the patient through a puncture hole in the patient's body surface. In-vivo scene images can be acquired through the endoscope 500, specifically including image information of human tissue and organs, surgical instruments 400, blood vessels, body fluids, and the like, and the acquired in-vivo scene images can be transferred to the first display unit 310 of the image trolley 300 for display.
With continued reference to fig. 3, a schematic structural diagram of a doctor console according to an embodiment is schematically shown. As shown in fig. 3, the doctor console 100 includes at least one master control arm 110. During the operation, an operator (i.e., a doctor) seated at the doctor console 100 can control the surgical instrument 400 and the endoscope 500 positioned on the mechanical arm 220 to perform various operations by manipulating the main control arm 110, thereby achieving the purpose of performing the operation on the patient. In actual operation, the operator views the returned in-vivo scene image through the second display unit 120 on the doctor console 100, and controls the surgical instrument 400 and the endoscope 500 positioned on the mechanical arm 220 to move by manipulating the main control arm 110.
Further, as shown in fig. 3, the doctor console 100 further includes a second base 130; the main control arm 110 and the second display unit 120 are both disposed on the second base 130, and a foot switch (not shown in the figure) is disposed on the second base 130 to detect a switching control signal sent by the operator. The operator can control certain actions through the foot switch, for example, completing the input of related operations such as electric cutting and electric coagulation.
With continued reference to fig. 1, in one exemplary embodiment, as shown in fig. 1, the surgical robot system further includes a tool trolley 600 for storing surgical instruments 400 and an auxiliary trolley 700 (including a ventilator and an anesthesia machine) for use during surgery. It should be noted that, as those skilled in the art can understand, the auxiliary trolley 700 may be selected and configured according to the prior art, so it will not be further described herein. In addition, for more of the working principles of the surgical robot, reference may be made to the prior art, and a detailed description thereof will not be given here.
It should be noted that, as will be understood by those skilled in the art, the image display adjustment method provided by the present invention may be used to adjust not only the display effect of the in-vivo scene image on the second display unit 120 of the doctor console 100, but also the display effect of the in-vivo scene image on the first display unit 310 of the image trolley 300. With continued reference to fig. 4, a flowchart of an image display adjustment method according to an embodiment of the present invention is schematically shown. As shown in fig. 4, the image display adjustment method provided by the invention comprises the following steps:
Step S100, acquiring eyeball condition data of an operator and operation condition data related to an operation.
Step 200, according to the eyeball condition data and the operation condition data, searching eyeball health comfort parameter indexes matched with the eyeball condition data and the operation condition data in a pre-established human eye health comfort index database.
And step 300, acquiring target visual parameters according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data.
And step 400, adjusting the visual effect of the in-vivo scene image to be displayed according to the target visual parameters.
According to the invention, the eyeball health and comfort parameter index matched with the eyeball condition of the operator and the operation performed by the operator is searched by utilizing the big data, the target visual parameter meeting the eyeball health and comfort parameter index is calculated based on the eyeball health and comfort parameter index, the eyeball condition data and the operation condition data, and the visual effect (namely, the display effect) of the in-vivo scene image acquired by the endoscope 500 is adjusted based on the target visual parameter, so that the eye fatigue of a doctor can be relieved, the eyes of the doctor can be protected, and the operation of the doctor can be more facilitated.
In particular, the eye health comfort parameters include, but are not limited to, color sensitivity.
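By way of illustration only, the overall flow of steps S100 to S400 can be sketched as a database lookup followed by an image adjustment. The database schema, the numeric values, and the brightness-to-offset mapping below are illustrative assumptions, not part of the invention; the sketch uses Python with OpenCV:

```python
import cv2

# Hypothetical schema: (eyeball fatigue grade, surgical site) -> comfort record.
# Real entries would be built offline from ergonomic/clinical data.
COMFORT_INDEX_DB = {
    (1, "liver"): {"brightness": 1.05, "contrast": 1.00, "saturation": 1.10},
    (2, "liver"): {"brightness": 1.10, "contrast": 0.95, "saturation": 1.15},
}

def target_visual_parameters(fatigue_grade, surgical_site):
    """Steps S200/S300: look up the matched eyeball health comfort record
    and take it as the target visual parameters (illustrative shortcut)."""
    return COMFORT_INDEX_DB.get((fatigue_grade, surgical_site))

def adjust_scene_image(image_bgr, params):
    """Step S400: apply target brightness/contrast to the in-vivo scene image.
    cv2.convertScaleAbs computes alpha*pixel + beta per channel; saturation
    would be adjusted analogously in HSV space (omitted for brevity)."""
    alpha = params["contrast"]                    # contrast as a gain factor
    beta = 255.0 * (params["brightness"] - 1.0)   # assumed brightness-to-offset mapping
    return cv2.convertScaleAbs(image_bgr, alpha=alpha, beta=beta)
```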
In an exemplary embodiment, the acquiring eyeball condition data of the operator includes:
and acquiring eyeball fatigue degree data of the operator and eyeball use duration data of the operator in operation.
Therefore, by acquiring the eyeball fatigue degree data and the eyeball use duration data of the operator, the searched eyeball health comfort parameter index can be matched with the operator's eyeball fatigue degree and eyeball use duration, so that the finally determined target visual parameters can better meet the operator's visual requirements for the in-vivo scene image.
In an exemplary embodiment, the acquiring the eyeball fatigue degree data of the operator includes:
acquiring a first image related to the eyes of the operator acquired in a latest preset time period including the current acquisition time;
for each first image acquired in the preset time period, identifying the eye characteristics of the first image to acquire eye characteristic data, and acquiring eyeball fatigue degree grade corresponding to the first image according to the acquired eye characteristic data;
And taking the eyeball fatigue degree grade corresponding to the first image with the highest eyeball fatigue degree grade in the preset time period as the eyeball fatigue degree data of the operator.
Specifically, the current acquisition time refers to the current acquisition time of the first image; that is, the first images acquired in the preset time period include the first image acquired at the current acquisition time. Since the eyeball fatigue degree grade corresponding to the first image with the highest eyeball fatigue degree grade in the preset time period is used as the eyeball fatigue degree data of the operator, it is effectively ensured that the finally determined target visual parameters can effectively relieve the operator's eye fatigue. It should be noted that, as those skilled in the art will understand, the preset time period may be set according to circumstances, and the present invention is not limited thereto; for example, the preset time period may be set to the most recent one minute.
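As a minimal sketch of this windowing rule (assuming a per-frame grading function is available, e.g. one of the three embodiments described below):

```python
def window_fatigue_grade(first_images, grade_frame):
    """Return the highest eyeball fatigue degree grade among the first
    images acquired within the preset time period (None if no frame
    indicates fatigue). grade_frame maps one first image to a grade."""
    grades = [g for g in (grade_frame(img) for img in first_images) if g is not None]
    return max(grades) if grades else None
```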
In one exemplary embodiment, the eyeball fatigue degree data of the operator includes left eye and right eye eyeball fatigue degree data of the operator, and the eye feature data comprises left eye feature data and/or right eye feature data;
The step of obtaining the eyeball fatigue degree grade corresponding to the first image according to the obtained eye feature data comprises the following steps:
acquiring a left eye eyestrain degree grade corresponding to the first image according to the acquired left eye feature data; and/or
Acquiring a right eye eyestrain degree grade corresponding to the first image according to the acquired right eye feature data;
the step of using the eyeball fatigue level corresponding to the first image with the highest eyeball fatigue level in the preset time period as the eyeball fatigue level data of the operator includes:
and taking the left eye eyeball fatigue degree grade corresponding to the first image with the highest left eye eyeball fatigue degree grade in the preset time period as left eye eyeball fatigue degree data of the operator, and/or taking the right eye eyeball fatigue degree grade corresponding to the first image with the highest right eye eyeball fatigue degree grade as right eye eyeball fatigue degree data of the operator.
Therefore, the eyeball fatigue degree grade is respectively determined for the left eye and the right eye of the operator, so that the fatigue degree of the left eye and the fatigue degree of the right eye of the operator can be fully considered when the eyeball health comfort parameter index is determined, and the finally determined target visual parameters can be effectively used for effectively relieving the left eye fatigue and the right eye fatigue of the operator. It should be noted that, as will be understood by those skilled in the art, when only one of the left eye and right eye fatigue degree data of the operator is acquired, the eye fatigue degree data may be used as both the left eye and right eye fatigue degree data of the operator.
In an exemplary embodiment, the searching the pre-created human eye health and comfort index database according to the eyeball condition data and the operation condition data for the eyeball health and comfort parameter index matched with the eyeball condition data and the operation condition data includes:
and searching a pre-established human eye health comfort index database for an eyeball health comfort parameter index matched with whichever of the left eye eyeball fatigue degree data and the right eye eyeball fatigue degree data has the higher fatigue degree grade, together with the operation condition data.
Specifically, if the right eye fatigue degree grade corresponding to the right eye eyeball fatigue degree data is higher than the left eye fatigue degree grade corresponding to the left eye eyeball fatigue degree data, the matched eyeball health comfort parameter index is searched according to the right eye eyeball fatigue degree data and the operation condition data; otherwise, the matched eyeball health comfort parameter index is searched according to the left eye eyeball fatigue degree data and the operation condition data.
In another exemplary embodiment, the searching the pre-created human eye health comfort index database, according to the eyeball condition data and the operation condition data, for the eyeball health comfort parameter index matched with them comprises the following steps:
According to the eyeball fatigue degree data of the left eye and the operation condition data, searching eyeball health comfort level parameter indexes matched with the left eye in a pre-established human eye health comfort level index database; and/or according to the eyeball fatigue degree data of the right eye and the operation condition data, searching eyeball health comfort level parameter indexes matched with the right eye in a pre-established human eye health comfort level index database.
In an exemplary embodiment, the first image is a facial image of the operator, and the identifying the eye feature of the first image to obtain eye feature data includes:
identifying eye features of the first image to identify left eye feature points and/or right eye feature points, wherein the left eye feature points comprise two first left eye feature points related to the length of the left eye and four second left eye feature points related to the width of the left eye, and the right eye feature points comprise two first right eye feature points related to the length of the right eye and four second right eye feature points related to the width of the right eye.
The step of obtaining the left eye eyestrain level corresponding to the first image according to the obtained left eye feature data comprises the following steps:
calculating the length of the left eye according to the two first left eye characteristic points;
calculating the left eye width according to the four second left eye characteristic points;
calculating a left eye width-to-length ratio according to the left eye length and the left eye width;
judging whether the left eye width-to-length ratio is smaller than a first threshold value or not;
if yes, determining the left eye fatigue degree grade corresponding to the first image according to the left eye width-to-length ratio and a preset first eye fatigue degree grade classification rule.
The step of obtaining the right eye eyestrain level corresponding to the first image according to the obtained right eye feature data comprises the following steps:
calculating the length of the right eye according to the two first right eye characteristic points;
calculating the right eye width according to the four second right eye feature points;
calculating a right eye width-to-length ratio according to the right eye length and the right eye width;
judging whether the width-to-length ratio of the right eye is smaller than the first threshold value;
if yes, determining the right eye fatigue degree grade corresponding to the first image according to the right eye width-to-length ratio and a preset first eye fatigue degree grade classification rule.
Specifically, referring to fig. 5a and 5b, fig. 5a schematically shows the eye feature points when the eye is open, and fig. 5b schematically shows the eye feature points when the eye is closed. As shown in fig. 5a and 5b, if the eye in the figures is the left eye, points P1 and P4 represent the two first left eye feature points, and points P2, P3, P5 and P6 represent the four second left eye feature points (the area surrounded by the four second left eye feature points is the area where the left eye pupil is located). The left eye length can be expressed as ||P1 - P4||, and the left eye width as 0.5 * (||P2 - P6|| + ||P3 - P5||), where ||Pi - Pj|| denotes the Euclidean distance between points Pi and Pj. The left eye width-to-length ratio EAR_left is then:

EAR_left = 0.5 * (||P2 - P6|| + ||P3 - P5||) / ||P1 - P4||

It should be noted that, as will be understood by those skilled in the art, if the eye in fig. 5a and 5b is the right eye, points P1 and P4 represent the two first right eye feature points, and points P2, P3, P5 and P6 represent the four second right eye feature points (the area surrounded by the four second right eye feature points is the area where the right eye pupil is located). The right eye length, right eye width, and right eye width-to-length ratio EAR_right are calculated in the same way as their left eye counterparts, with reference to the description above, so the details are not repeated.
It should be further noted that, as those skilled in the art can understand, the first threshold value and the first eye fatigue level classification rule may be set according to a specific situation, which is not limited by the present invention. For example, the first threshold may be set to 0.6, and the first eye fatigue level classification rule is as follows in table 1:
TABLE 1. First eyeball fatigue degree grade classification rule

EAR range          Fatigue degree grade
0.4 < EAR < 0.6    Grade 1 fatigue (mild)
0.2 < EAR < 0.4    Grade 2 fatigue (moderate)
EAR < 0.2          Grade 3 fatigue (severe)
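By way of illustration, the width-to-length ratio and the Table 1 grading can be sketched as follows, assuming the six feature points are already available as (x, y) coordinates from some facial-landmark detector (the detector itself is outside this sketch):

```python
import numpy as np

def eye_width_to_length_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = 0.5 * (||P2 - P6|| + ||P3 - P5||) / ||P1 - P4||, where
    P1/P4 are the corner points (eye length) and P2/P3/P5/P6 are the
    lid points (eye width); each p_i is an (x, y) pair."""
    width = np.linalg.norm(np.subtract(p2, p6)) + np.linalg.norm(np.subtract(p3, p5))
    length = np.linalg.norm(np.subtract(p1, p4))
    return 0.5 * width / length

def first_rule_fatigue_grade(ear, first_threshold=0.6):
    """Map an EAR value to the Table 1 grade (None means not fatigued).
    The 0.6 threshold and the grade boundaries mirror Table 1."""
    if ear >= first_threshold:
        return None  # eye sufficiently open
    if ear > 0.4:
        return 1     # mild
    if ear > 0.2:
        return 2     # moderate
    return 3         # severe
```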
Further, before acquiring the first image, the method further includes:
and carrying out face recognition, and if the face recognition is successful, acquiring the first image.
Specifically, please refer to fig. 6, which schematically illustrates a flowchart for determining the eyeball fatigue degree grade according to a first embodiment of the present invention. As shown in fig. 6, face recognition is performed on the acquired video to determine whether the face recognition is successful. If the face recognition is successful, a first image is acquired and eye detection is performed, and the left eye width-to-length ratio EAR_left and the right eye width-to-length ratio EAR_right are calculated based on the eye detection result. If EAR_left is smaller than the first threshold, it is judged that the left eye of the operator is in a fatigue state at the acquisition time of the first image, and the left eye eyeball fatigue degree grade is further determined based on the first eyeball fatigue degree grade classification rule; similarly, if EAR_right is smaller than the first threshold, it is judged that the right eye of the operator is in a fatigue state at the acquisition time of the first image, and the right eye eyeball fatigue degree grade is further determined based on the first eyeball fatigue degree grade classification rule.
Further, if the light environment does not meet the predetermined condition when the first image is acquired, the first image is a fusion image of a visible light image and an infrared light image, and if the light environment meets the predetermined condition when the first image is acquired, the first image is a visible light image.
Further, the eye condition data further includes blink frequency. Specifically, when the human eye is open, the eye width-to-length ratio EAR (Eye Aspect Ratio) fluctuates around a certain value, and when the eye closes, the EAR drops rapidly, theoretically approaching 0. Thus, when the left eye width-to-length ratio EAR_left is below a fourth threshold (e.g., 0.3), the left eye is determined to be in a closed state, and when the right eye width-to-length ratio EAR_right is below the fourth threshold (e.g., 0.3), the right eye is determined to be in a closed state. Therefore, the number of left eye blinks within the preset time period can be determined by counting the frames of the first image in which the left eye is in a closed state within the preset time period, and the blink frequency is then determined from it. Similarly, the number of right eye blinks within the preset time period can be determined by counting the frames of the first image in which the right eye is in a closed state. Since blinking is relatively fast and is generally completed within 1 to 3 frames, the left eye is judged to have blinked once if it is in a closed state in 1 to 3 consecutive frames, and similarly the right eye is judged to have blinked once if it is in a closed state in 1 to 3 consecutive frames.
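The blink-counting rule above can be sketched as follows (per eye, over a sequence of per-frame EAR values; the fourth threshold and the 1-to-3-frame blink length are taken from the text):

```python
def count_blinks(ear_series, fourth_threshold=0.3, max_blink_frames=3):
    """Count blinks for one eye from per-frame width-to-length ratios.
    A run of 1..max_blink_frames consecutive frames with EAR below the
    fourth threshold counts as one blink; longer runs are treated as a
    sustained closure rather than a blink."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < fourth_threshold:
            run += 1
        else:
            if 1 <= run <= max_blink_frames:
                blinks += 1
            run = 0
    if 1 <= run <= max_blink_frames:  # a closure ending at the last frame
        blinks += 1
    return blinks

# Blink frequency over the preset period, e.g. blinks per minute:
# frequency = count_blinks(ear_series) / (period_seconds / 60.0)
```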
In another exemplary embodiment, the first image is a facial image of the operator, and the identifying the eye feature of the first image to obtain eye feature data includes:
identifying eye features of the first image to identify a left eye region and/or a right eye region;
acquiring the total number of pixel points of the left eye area in the transverse direction and the total number of pixel points of the left eye area in the longitudinal direction; and/or
And acquiring the total number of pixel points of the right-eye area in the transverse direction and the total number of pixel points of the right-eye area in the longitudinal direction.
The step of obtaining the left eye eyestrain level corresponding to the first image according to the obtained left eye feature data comprises the following steps:
calculating a left eye aspect ratio according to the ratio between the total number of pixels of the left eye area in the longitudinal direction and the total number of pixels of the left eye area in the transverse direction;
determining whether the left eye aspect ratio is less than a second threshold;
if yes, determining the left eye eyestrain degree grade corresponding to the first image according to the left eye aspect ratio and a preset second eyestrain degree grade dividing rule.
The step of obtaining the right eye eyestrain level corresponding to the first image according to the obtained right eye feature data comprises the following steps:
Calculating a right-eye aspect ratio according to the ratio between the total number of pixels of the right-eye area in the longitudinal direction and the total number of pixels of the right-eye area in the transverse direction;
determining whether the right eye aspect ratio is less than the second threshold;
if yes, determining the right eye fatigue degree grade corresponding to the first image according to the right eye aspect ratio and a preset second eye fatigue degree grade dividing rule.
Specifically, please refer to fig. 7, which schematically illustrates a flowchart for determining the eyestrain level according to a second embodiment of the present invention. As shown in fig. 7, for each eye (left eye or right eye) of the operator, the first image is identified to obtain the total number of pixels X of the eye in the transverse direction and the total number of pixels Y of the eye in the longitudinal direction, and the aspect ratio P = Y/X of the eye is calculated from them. It is then determined whether the calculated aspect ratio P is smaller than the second threshold; if so, it is judged that the eye (left eye or right eye) is in a fatigue state at the acquisition time of the first image, and the eyestrain level of the eye is further determined based on the second eyestrain level classification rule.
It should be noted that, as those skilled in the art can understand, the second threshold value and the second eyestrain level classification rule may be set according to a specific situation, which is not limited by the present invention. For example, the second threshold may be set to 0.6, and the second eyestrain level classification rule is as follows in table 2:
TABLE 2. Second eyestrain level classification rule

P range          Fatigue degree grade
0.4 < P < 0.6    Grade 1 fatigue (mild)
0.3 < P < 0.4    Grade 2 fatigue (moderate)
P < 0.3          Grade 3 fatigue (severe)
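A sketch of this pixel-counting variant, assuming the eye region has already been segmented into a binary mask (nonzero pixels = eye region); using the bounding box of the region for the pixel totals X and Y is one plausible reading of the text, not the only one:

```python
import cv2

def eye_aspect_ratio_from_mask(eye_mask):
    """Compute P = Y / X for one eye from a binary (uint8) mask, where
    X and Y are the horizontal and vertical pixel extents of the largest
    connected eye region. Returns None if no region is found."""
    contours, _ = cv2.findContours(eye_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    region = max(contours, key=cv2.contourArea)
    _, _, x_total, y_total = cv2.boundingRect(region)  # width, height in pixels
    return y_total / x_total

# Table 2 then maps P < 0.6 to grades 1-3 exactly as in the first rule,
# with 0.3 and 0.4 as the inner boundaries.
```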
In yet another exemplary embodiment, the first image includes a first pupil image and a second pupil image acquired simultaneously, wherein the first pupil image is an infrared image and the second pupil image is a visible image;
the identifying the eye feature of the first image to obtain eye feature data includes:
performing differential processing on the first pupil image and the second pupil image which are acquired simultaneously to acquire a pupil differential image;
identifying the pupil difference image to identify a left eye pupil and/or a right eye pupil;
obtaining left eye pupil aspect ratio data from the identified left eye pupil, and/or obtaining right eye pupil aspect ratio data from the identified right eye pupil.
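The differencing step can be sketched with OpenCV as follows; this is illustrative only and assumes the two simultaneously acquired frames are spatially aligned and of the same resolution, with the Gaussian filter and Otsu threshold as assumed (not mandated) choices:

```python
import cv2

def pupil_difference_image(bright_pupil_ir, dark_pupil_rgb):
    """Return a difference image and mask that highlight the pupils.

    `bright_pupil_ir` is the infrared (bright-pupil) frame, already grayscale;
    `dark_pupil_rgb` is the simultaneously acquired visible (dark-pupil) frame.
    """
    dark_gray = cv2.cvtColor(dark_pupil_rgb, cv2.COLOR_BGR2GRAY)
    # Filtering suppresses sensor noise before the subtraction.
    ir = cv2.GaussianBlur(bright_pupil_ir, (5, 5), 0)
    vis = cv2.GaussianBlur(dark_gray, (5, 5), 0)
    # Subtracting corresponding pixel values weakens what the two images
    # share and highlights where they differ, i.e. the pupils.
    diff = cv2.absdiff(ir, vis)
    _, pupil_mask = cv2.threshold(diff, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return diff, pupil_mask
```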
The step of obtaining the left eye eyestrain level corresponding to the first image according to the obtained left eye feature data comprises the following steps:
calculating a left eye closing degree value according to the left eye pupil height-width ratio data and the pre-stored left eye maximum pupil height-width ratio data;
judging whether the left eye closing degree value is larger than a third threshold value or not;
if yes, determining the left eye eyestrain degree grade corresponding to the first image according to the left eye closure degree value and a preset third eyestrain degree grade dividing rule.
The step of obtaining the right eye eyestrain level corresponding to the first image according to the obtained right eye feature data comprises the following steps:
calculating a right eye closing degree value according to the right eye pupil height-width ratio data and the pre-stored right eye maximum pupil height-width ratio data;
judging whether the right eye closing degree value is larger than the third threshold value or not;
if yes, determining the right eye eyestrain degree grade corresponding to the first image according to the right eye closure degree value and a preset third eyestrain degree grade classification rule.
Specifically, please refer to fig. 8, which schematically illustrates a flowchart for determining the level of eyestrain according to a third embodiment of the present invention. In most cases, the first pupil image and the second pupil image are acquired while the human eye is open, i.e., the first pupil image and the second pupil image show a bright pupil and a dark pupil respectively. As shown in fig. 8, filtering the first pupil image (i.e., the bright pupil image) and the second pupil image (i.e., the dark pupil image) separately effectively removes noise signals from both images. Differential processing (i.e., subtracting corresponding pixel values) of the filtered images weakens the parts the two images have in common and highlights the parts where they differ, yielding a pupil differential image that highlights the left pupil and the right pupil. The human eyes (i.e., the left pupil and the right pupil) can thus be located more reliably, and the characteristic parameters (including the left eye pupil aspect ratio data and the right eye pupil aspect ratio data) can be extracted. A left eye closing degree value is then calculated from the extracted left eye pupil aspect ratio data; if the left eye closing degree value is greater than the third threshold, the left eye of the operator is determined to be in a fatigued state at the acquisition time of the first image, and the left eye eyestrain level is further determined according to the third eyestrain degree grade dividing rule. Similarly, a right eye closing degree value is calculated from the extracted right eye pupil aspect ratio data; if the right eye closing degree value is greater than the third threshold, the right eye of the operator is determined to be in a fatigued state at the acquisition time of the first image, and the right eye eyestrain level is further determined according to the third eyestrain degree grade dividing rule.
Further, the left eye closure degree value P(t)_left can be calculated according to the following formula:
P(t)_left = 1 - h(t)_left / Amax_left
where h(t)_left is the left eye pupil aspect ratio and Amax_left is the maximum pupil aspect ratio of the left eye.
Similarly, the right eye closure degree value P(t)_right is calculated according to the following formula:
P(t)_right = 1 - h(t)_right / Amax_right
where h(t)_right is the right eye pupil aspect ratio and Amax_right is the maximum pupil aspect ratio of the right eye.
It should be noted that, as can be understood by those skilled in the art, the left eye maximum pupil aspect ratio data is the maximum left eye pupil aspect ratio among the left eye pupil aspect ratio data corresponding to the first 10 first images acquired after the operation starts; likewise, the right eye maximum pupil aspect ratio data is the maximum right eye pupil aspect ratio among the right eye pupil aspect ratio data corresponding to the first 10 first images acquired after the operation starts. It should also be noted that, as will be understood by those skilled in the art, when the human eye is fully closed, the simultaneously acquired first pupil image and second pupil image show no bright pupil or dark pupil; the absence of a detected eye is itself treated as a feature, and if no eye is detected in 5 consecutive acquisitions, the eye is determined to still be in a closed state and is therefore judged fatigued.
It should be noted that, as those skilled in the art can understand, the third threshold value and the third eyestrain level classification rule may be set according to specific situations, which is not limited by the present invention. For example, the third threshold may be set to 0.15, and the third eyestrain level classification rule is as follows in table 3:
TABLE 3 third eye fatigue level classification rule
P(t)                    0.15 < P(t) < 0.2         0.2 < P(t) < 0.6            P(t) > 0.6
Fatigue degree grade    Grade 1 fatigue (mild)    Grade 2 fatigue (moderate)  Grade 3 fatigue (severe)
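An illustrative sketch combining the closure degree formulas above with the grading of Table 3; assigning exact boundary values to the severer grade is an assumption, and the names are hypothetical:

```python
def closure_degree(pupil_aspect_ratio, max_pupil_aspect_ratio):
    """P(t) = 1 - h(t) / Amax for one eye."""
    return 1.0 - pupil_aspect_ratio / max_pupil_aspect_ratio

def eyestrain_level_from_closure(p_t, third_threshold=0.15):
    """Map a closure degree value to the grades of Table 3 (0 = not fatigued)."""
    if p_t <= third_threshold:
        return 0
    if p_t < 0.2:
        return 1   # grade 1 fatigue (mild)
    if p_t < 0.6:
        return 2   # grade 2 fatigue (moderate)
    return 3       # grade 3 fatigue (severe)
```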
In an exemplary embodiment, the acquiring the intraoperative eyeball use duration data of the operator includes:
acquiring pupil videos and view field videos of an operator, wherein the pupil videos and the view field videos are acquired in a time period from the beginning of an operation to the current moment, and pupil images in the pupil videos correspond to view field images in the view field videos one by one;
for each frame of pupil image in the pupil video, performing the following operations:
according to the pupil image, acquiring the position information of the left eye pupil center point under the pupil image coordinate system and/or the position information of the right eye pupil center point under the pupil image coordinate system;
registering the pupil image and the view field image corresponding to the pupil image to obtain a space transformation matrix between the pupil image and the view field image;
superposing the left eye pupil center point and/or the right eye pupil center point on the view field image according to the space transformation matrix and the position information of the left eye pupil center point and/or the right eye pupil center point under the pupil image coordinate system; and
judging whether a first gazing point corresponding to the left eye pupil center point and/or a second gazing point corresponding to the right eye pupil center point exist in a view field area in the view field image according to the superposition result, if the first gazing point exists, judging that the left eye of the operator is in a use state at the acquisition time of the pupil image, and/or if the second gazing point exists, judging that the right eye of the operator is in a use state at the acquisition time of the pupil image;
calculating the left eye use duration of the operator according to the acquisition times of the frames of pupil images for which the left eye is judged to be in a use state, and/or calculating the right eye use duration of the operator according to the acquisition times of the frames of pupil images for which the right eye is judged to be in a use state.
Specifically, pupil videos of the operator can be acquired with an infrared camera assisted by an infrared light source, and the field-of-view video can be acquired with a common camera. By superimposing the pupil center coordinates (of both the left pupil and the right pupil) on the field-of-view image, it is possible to intuitively mark which position of the field of view (i.e., the gaze point) each pupil center is aligned with at a given moment, as well as the movement of the gaze point over time. It should be noted that, as will be understood by those skilled in the art, when only one of the left eye use duration and the right eye use duration of the operator is acquired, that duration may be used as both the left eye use duration and the right eye use duration.
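As an illustrative sketch, the superposition and gaze-point test might look as follows, assuming (this is an assumption, not stated in the text) that the registration step yields a 3x3 homography from pupil-image coordinates to field-of-view-image coordinates and that the field-of-view region is given as a binary mask:

```python
import numpy as np

def gaze_point_in_view(pupil_center_xy, homography, view_region_mask):
    """Map a pupil center onto the field-of-view image and report whether
    the resulting gaze point falls inside the field-of-view region."""
    x, y = pupil_center_xy
    p = homography @ np.array([x, y, 1.0])   # homogeneous coordinates
    u, v = p[0] / p[2], p[1] / p[2]          # gaze point on the view image
    h, w = view_region_mask.shape[:2]
    inside = (0 <= int(v) < h and 0 <= int(u) < w
              and view_region_mask[int(v), int(u)] > 0)
    return (u, v), inside

# Eye use duration can then be accumulated over frames, e.g.
# use_seconds = sum(1.0 / fps for flag in per_frame_inside_flags if flag)
```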
In one exemplary embodiment, the acquiring surgical condition data associated with a surgery includes:
surgical scene data and surgical duration data associated with a procedure are acquired.
Therefore, by acquiring the operation scene data and the operation duration data, the retrieved eyeball health comfort parameter index can be ensured to suit both the part of the human body being operated on and the length of the operation, so that the determined target visual parameter better meets the operator's visual requirements for the in-vivo scene image.
In an exemplary embodiment, the acquiring surgical scene data includes:
and acquiring in-vivo scene images, and identifying the in-vivo scene images to acquire surgical position information and scene color information, thereby acquiring surgical scene data.
Specifically, the acquired in-vivo scene image can be identified using a pre-trained scene image recognition model to recognize the surgical tissue or organ and its surrounding tissue, thereby obtaining the surgical position information and the scene color information.
With continued reference to fig. 9, a training flowchart of the scene image recognition model according to an embodiment of the present invention is schematically shown. As shown in fig. 9, before training, existing case images are preprocessed by region segmentation, key-organ labeling and the like to obtain training data. A loss function is then designed based on a clustering idea, a scene image recognition model (a deep neural network) is constructed, and the model is trained on the training data set with the designed loss function to obtain a trained scene image recognition model. The surgical scene can then be identified by inputting the in-vivo scene image to be identified into the trained scene image recognition model, thereby obtaining the surgical position information and the scene color information.
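A minimal inference sketch is given below for illustration only; it assumes the trained scene image recognition model has been exported as a TorchScript file, and the file name, input size and label handling are assumptions (the segmentation of surrounding tissue and the color extraction step are omitted here):

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Hypothetical export path; not part of the disclosure.
model = torch.jit.load("scene_recognition.pt").eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def recognize_scene(in_vivo_image: Image.Image) -> int:
    """Return the predicted surgical site label for an in-vivo scene image."""
    x = preprocess(in_vivo_image).unsqueeze(0)   # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    return int(logits.argmax(dim=1))
```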
In an exemplary embodiment, the acquiring the operation duration data includes:
acquiring a surgical scene video acquired in a time period from the beginning of a surgery to the current moment;
and inputting the surgical scene video into a pre-trained surgical procedure identification model for time sequence detection so as to acquire surgical duration data.
Specifically, a multi-view approach can be adopted: combining mainstream deep neural network training methods and integrating spatial and temporal behavior analysis techniques such as CNNs (convolutional neural networks) and RNNs (recurrent neural networks), a surgical procedure recognition model is established and trained using spatio-temporal feature embeddings generated by a deep residual network with a triplet loss. Further, a medical image set pre-labeled with operation durations is split into a training set and a validation set at a ratio of 3:1. Before training, each frame of the medical images is uniformly scaled to a preset size (e.g., 1980 x 1020), the image format is uniformly converted to a preset format (e.g., RGB format), pixel values are normalized to [0, 1], and preprocessing such as geometric enhancement, color enhancement, noise enhancement, brightness enhancement, contrast enhancement and random flipping is applied to improve the stability and convergence speed of training the surgical procedure recognition model. With continued reference to fig. 10, a flowchart of surgical duration prediction by the surgical procedure recognition model according to an embodiment of the present invention is schematically shown. As shown in fig. 10, each frame of the surgical scene video is preprocessed in the same way as the training samples. The deep residual network in the surgical procedure recognition model, which benefits feature extraction, then divides the preprocessed video into the stages of the surgical procedure (i.e., extracts multi-view data features). The extracted multi-view features are weighted-fused (i.e., fusing the integrity of the surgical procedure with the natural images) and subjected to temporal detection, which rationalizes the logic of the surgical procedure and extracts the field of view and scene image of each surgical stage. The whole surgical procedure can thus be analyzed completely, the maximum duration of the whole surgery can be estimated, and the surgical duration data obtained.
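The per-frame preprocessing described above (uniform scaling, RGB conversion, normalization to [0, 1], random flipping) might be sketched as follows; the BGR input convention and the flip probability of 0.5 are illustrative assumptions:

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr, size=(1980, 1020)):
    """Scale a video frame to the preset size, convert it to RGB and
    normalize pixel values to [0, 1]."""
    frame = cv2.resize(frame_bgr, size)              # preset (width, height)
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # preset RGB format
    return frame.astype(np.float32) / 255.0          # normalize to [0, 1]

def random_flip(frame, rng=None):
    """One of the augmentations mentioned above (random flipping)."""
    rng = rng if rng is not None else np.random.default_rng()
    return frame[:, ::-1, :] if rng.random() < 0.5 else frame
```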
In one exemplary embodiment, deep learning may be used to model and obtain the target visual parameters based on the eyeball health comfort parameter index, the eyeball condition data and the operation condition data.
Further, the left eye and the right eye can be separately modeled to fully consider the situation that the eyestrain degree and the health comfort index of the left eye and the right eye are different. Specifically, the acquiring the target visual parameter according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data includes:
acquiring left eye visual parameter information according to the left eye health comfort parameter index, the eye condition data and the operation condition data;
acquiring right eye visual parameter information according to the right eye health comfort parameter index, the eye condition data and the operation condition data;
and acquiring target visual parameters according to the left eye visual parameter information and the right eye visual parameter information.
Still further, the obtaining the target visual parameter according to the left eye visual parameter information and the right eye visual parameter information includes:
if the difference value of the left eye visual parameter and the right eye visual parameter is in the preset range, taking the visual parameter corresponding to the one with the higher fatigue degree level in the left eye and the right eye as the target visual parameter;
and if the difference value of the left eye visual parameter and the right eye visual parameter exceeds the preset range, taking the average value of the left eye visual parameter and the right eye visual parameter as the target visual parameter.
It should be noted that, as will be understood by those skilled in the art, if the visual parameters are plural, the above operations are performed for each item. For example, if the difference between the left-eye luminance and the right-eye luminance is within the preset luminance range, the luminance corresponding to the one of the left-eye and the right-eye having the higher fatigue level is taken as the target luminance; if the difference value between the left-eye brightness and the right-eye brightness exceeds the preset brightness range, taking the average value of the left-eye brightness and the right-eye brightness as the target brightness.
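As an illustrative sketch of this per-item rule (the function name and the scalar tolerance `preset_range` are assumptions):

```python
def fuse_parameter(left_value, right_value, left_level, right_level,
                   preset_range):
    """Fuse one visual parameter (e.g. brightness) from its left-eye and
    right-eye values, following the rule described above."""
    if abs(left_value - right_value) <= preset_range:
        # Within the preset range: favour the more fatigued eye.
        return left_value if left_level >= right_level else right_value
    # Outside the preset range: use the average of both eyes.
    return (left_value + right_value) / 2.0

# Applied per item, e.g.:
# target_brightness = fuse_parameter(lb, rb, l_level, r_level, b_range)
```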
In an exemplary embodiment, the target visual parameter includes at least one of a target brightness, a target contrast, a target resolution, and a target saturation.
With continued reference to fig. 11, a flowchart for adjusting the image display effect according to the first embodiment of the present invention is schematically shown. As shown in fig. 11, taking image brightness adjustment as an example, a PID controller adjusts the brightness of the image (i.e., the in-vivo scene image) toward the target brightness so that the image is displayed at the target brightness. Specifically, brightness is actually adjusted through the pixel value of each pixel point: the larger the pixel value, the closer to white and the brighter the pixel; the smaller the pixel value, the closer to black and the darker the pixel. Adjusting the brightness therefore amounts to adding a variable value to each pixel value: a positive variable value brightens the image, and a negative one darkens it. The formula is as follows:
g(x)=f(x)+B
where g(x) represents the pixel value after adjustment, f(x) represents the pixel value before adjustment, and B is the variable value.
Contrast is adjusted through the differences between pixels. In general, the difference between pixel values at adjacent positions in an image is small, so to enhance contrast this difference must be amplified by a factor, and to reduce contrast it must be shrunk proportionally. Adjusting the contrast therefore amounts to multiplying each pixel value by a factor: a factor greater than 1 enhances contrast, and a factor less than 1 reduces it. The formula is as follows:
g(x)=Af(x)
where g(x) represents the pixel value after adjustment, f(x) represents the pixel value before adjustment, and A is the factor.
Expressed as a single unified formula, brightness and contrast can be adjusted simultaneously:
g(x)=Af(x)+B
To achieve the desired effect, gamma correction may also be required. Gamma correction edits the gamma curve of an image to perform nonlinear tone editing: it detects the dark portions and light portions of the image signal and adjusts their proportion, thereby improving the image contrast effect.
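An illustrative combination of the formulas above with gamma correction (the PID loop that drives A, B and gamma toward the target values is omitted; all names are hypothetical):

```python
import numpy as np

def adjust_image(img, a=1.0, b=0.0, gamma=1.0):
    """Apply g(x) = A*f(x) + B followed by gamma correction.

    `img` is a uint8 image; A scales contrast, B shifts brightness and
    gamma performs the nonlinear tone edit described above.
    """
    out = a * img.astype(np.float32) + b      # g(x) = A f(x) + B
    out = np.clip(out, 0, 255) / 255.0
    out = np.power(out, 1.0 / gamma)          # gamma correction
    return (out * 255).astype(np.uint8)
```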
In an exemplary embodiment, the image display adjustment method further includes:
comparing the adjustment result of the display effect of the in-vivo scene image based on the target visual parameter with the image display effect of the corresponding standard surgical scene in the pre-established standard surgical scene image display effect database to judge whether the current image display effect accords with the standard surgical scene;
if not, continuing to adjust the display effect of the in-vivo scene image based on the image display effect of the corresponding standard surgical scene.
Specifically, please refer to fig. 12, which schematically illustrates a flowchart for adjusting the image display effect according to the second embodiment of the present invention. As shown in fig. 12, a secondary feedback adjustment is added on top of the adjustment based on the target visual parameter: the result of adjusting the display effect of the in-vivo scene image based on the target visual parameter is compared with the image display effect of the corresponding standard surgical scene in the pre-established standard surgical scene image display effect database, and if the adjustment result based on the target visual parameter does not meet the surgical requirement, a secondary adjustment is performed until the result meets the requirements of the standard surgical scene.
In an exemplary embodiment, the human eye health comfort index database is a distributed database based on semantic search. Specifically, by acquiring dynamic eyeball condition data and operation condition data in real time, a distributed human eye health comfort index database based on semantic search is established, with dynamic data storage, cleaning, classification and summarization. The database can then be searched with a vector space model (VSM) based on semantic similarity calculation over the eyeball condition data and operation condition data input by the user, so that the eyeball health comfort parameter index matching the input eyeball condition data and operation condition data is retrieved from the associated database. Compared with an ordinary database, the semantic-search-based distributed human eye health comfort index database offers large data volume, high processing speed and fast data retrieval.
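A minimal sketch of such a semantic-similarity lookup is shown below for illustration; it assumes (the text does not specify this) that the eyeball condition data and operation condition data have already been encoded as feature vectors:

```python
import numpy as np

def retrieve_best_match(query_vec, index_vecs, records):
    """Return the stored eyeball health comfort parameter index whose
    feature vector is most similar to the query, using cosine similarity
    as in a vector space model."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = m @ q                       # cosine similarity to every record
    return records[int(np.argmax(sims))]
```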
Based on the same inventive concept, the present invention further provides an image display adjustment system; please refer to fig. 13, which schematically shows a block diagram of the image display adjustment system according to an embodiment of the present invention. As shown in fig. 13, the image display adjustment system provided by the present invention includes a controller 810 configured to implement the image display adjustment method described above. The system can therefore use big data to retrieve the eyeball health comfort parameter index matching the operator's eyeball condition and the operation being performed, calculate the target visual parameter meeting that index based on the eyeball health comfort parameter index, the eyeball condition data and the operation condition data, and adjust the visual effect (i.e., the display effect) of the in-vivo scene image acquired by the endoscope 500 based on the target visual parameter, thereby relieving the doctor's eye fatigue, protecting the doctor's eyes and better facilitating the operation. It should be noted that, as those skilled in the art will appreciate, the controller 810 may be disposed at the doctor console 100, the image trolley 300 or the patient trolley 200, which is not limited by the present invention.
With continued reference to fig. 13, as shown in fig. 13, the image display adjustment system further includes a vision acquisition module in communication with the controller 810, the vision acquisition module including a first vision acquisition unit 820 for acquiring a first image related to the eye of the operator and a second vision acquisition unit 830 for acquiring a field of view image.
In particular, the first vision acquisition unit 820 may include a first infrared camera 821 and/or a first visible light camera 822 (i.e., an RGB camera, a common camera) for acquiring facial images of the operator, or a second infrared camera for acquiring first pupil images of the operator and a second visible light camera for acquiring second pupil images of the operator. The second vision acquisition unit 830 is a common camera (i.e., an RGB camera).
With continued reference to fig. 14, a schematic diagram of the connection between the first vision acquisition unit 820 and the controller 810 according to an embodiment of the present invention is shown. As shown in fig. 14, the first vision acquisition unit 820 further includes a TOF camera 823 and an infrared emitting end 824 (for emitting an infrared light source). The TOF camera 823 collects depth information, and the controller 810 processes this depth information to judge whether it matches facial features (i.e., face recognition). If it does, the controller judges whether the lighting environment meets a preset condition; if the lighting does not meet the preset condition, the first infrared camera 821, the infrared emitting end 824 and the first visible light camera 822 are triggered, and the controller 810 fuses the infrared image collected by the first infrared camera 821 with the visible light image captured by the first visible light camera 822 to obtain the first image. It should be noted that, as those skilled in the art will understand, the number of infrared emitting ends 824 may be one or more, preferably more than one.
Based on the same inventive concept, the present invention further provides an electronic device, please refer to fig. 15, which schematically shows a block structure schematic diagram of the electronic device according to an embodiment of the present invention. As shown in fig. 15, the electronic device includes a processor 101 and a memory 103, the memory 103 having stored thereon a computer program which, when executed by the processor 101, implements the image display adjustment method described above. Because the electronic device provided by the invention and the image display adjustment method provided by the invention belong to the same inventive concept, the electronic device provided by the invention has all advantages of the image display adjustment method provided by the invention, and the description thereof can be referred to in the above, and the details are not repeated here.
As shown in fig. 15, the electronic device further comprises a communication interface 102 and a communication bus 104, wherein the processor 101, the communication interface 102 and the memory 103 communicate with each other via the communication bus 104. The communication bus 104 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 104 may be divided into an address bus, a data bus, a control bus and the like. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus. The communication interface 102 is used for communication between the electronic device and other devices.
The processor 101 of the present invention may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 101 is the control center of the electronic device and connects the various parts of the entire electronic device using various interfaces and lines.
The memory 103 may be used to store the computer program, and the processor 101 may implement various functions of the electronic device by running or executing the computer program stored in the memory 103 and invoking data stored in the memory 103.
The memory 103 may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The present invention also provides a readable storage medium having stored therein a computer program which, when executed by a processor, can implement the image display adjustment method described above. Since the readable storage medium provided by the present invention and the image display adjustment method provided by the present invention belong to the same inventive concept, the readable storage medium provided by the present invention has all the advantages of the image display adjustment method provided by the present invention, and the description thereof will not be repeated herein.
The readable storage media of embodiments of the present invention may take the form of any combination of one or more computer-readable media. The readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In summary, compared with the prior art, the image display adjustment method, system, electronic device and storage medium provided by the present invention have the following advantages. The invention obtains eyeball condition data of an operator and operation condition data related to the operation; then, according to the eyeball condition data and the operation condition data, retrieves an eyeball health comfort parameter index matching them in a pre-established human eye health comfort index database; acquires target visual parameters according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data; and finally adjusts the visual effect of the in-vivo scene image to be displayed according to the target visual parameters. In this way, big data is used to retrieve the eyeball health comfort parameter index matching the operator's eyeball condition and the operation being performed, the target visual parameters meeting that index are calculated from the index together with the eyeball condition data and the operation condition data, and the visual effect (i.e., the display effect) of the in-vivo scene image acquired by the endoscope is adjusted based on the target visual parameters, thereby relieving the doctor's eye fatigue, protecting the doctor's eyes and better facilitating the operation.
It should be noted that computer program code for carrying out operations of the present invention may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the apparatus and methods disclosed in the embodiments herein may be implemented in other ways. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments herein. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, the functional modules in the embodiments herein may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the present invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, the present invention is intended to include such modifications and alterations insofar as they come within the scope of the invention or the equivalents thereof.

Claims (12)

1. An image display adjustment method applied to a surgical robot system including an endoscope for acquiring an in-vivo scene image, comprising:
acquiring eyeball condition data of an operator and operation condition data related to an operation;
according to the eyeball condition data and the operation condition data, searching eyeball health comfort parameter indexes matched with the eyeball condition data and the operation condition data in a pre-established human eye health comfort index database;
acquiring target visual parameters according to the eyeball health comfort parameter index, the eyeball condition data and the operation condition data;
and adjusting the visual effect of the in-vivo scene image to be displayed according to the target visual parameters.
2. The image display adjustment method according to claim 1, wherein the acquiring eyeball condition data of the operator includes:
acquiring eyeball fatigue degree data of the operator and intraoperative eyeball use duration data of the operator.
3. The image display adjustment method according to claim 2, characterized in that the acquiring of the eyeball fatigue degree data of the operator includes:
acquiring a first image related to the eyes of the operator acquired in a latest preset time period including the current acquisition time;
for each first image acquired in the preset time period, identifying the eye characteristics of the first image to acquire eye characteristic data, and acquiring eyeball fatigue degree grade corresponding to the first image according to the acquired eye characteristic data;
and taking the eyeball fatigue degree grade corresponding to the first image with the highest eyeball fatigue degree grade in the preset time period as the eyeball fatigue degree data of the operator.
4. The image display adjustment method according to claim 3, wherein the eyeball fatigue degree data of the operator includes left-eye eyeball fatigue degree data and right-eye eyeball fatigue degree data of the operator, the eye feature data including left-eye feature data and/or right-eye feature data;
the step of obtaining the eyeball fatigue degree grade corresponding to the first image according to the obtained eye feature data comprises the following steps:
acquiring eyeball fatigue degree grade corresponding to the first image according to the acquired left eye and/or right eye feature data;
the step of using the eyeball fatigue level corresponding to the first image with the highest eyeball fatigue level in the preset time period as the eyeball fatigue level data of the operator includes:
and taking the eyeball fatigue degree grade of the left eye or the right eye corresponding to the first image with the highest eyeball fatigue degree grade of the left eye or the right eye in the preset time period as the eyeball fatigue degree data of the operator.
5. The image display adjustment method according to claim 4, wherein the first image is a facial image of the operator, the identifying of the ocular feature of the first image to obtain ocular feature data includes:
identifying eye features of the first image to identify left eye feature points and/or right eye feature points, wherein the eye feature points comprise two first feature points related to the length of the left eye or the right eye and four second feature points related to the width of the left eye or the right eye;
the obtaining the eyeball fatigue degree grade corresponding to the first image according to the obtained eye feature data of the left eye and/or the right eye comprises the following steps:
calculating the length of the left eye and/or the right eye according to the two first characteristic points of the left eye and/or the two first characteristic points of the right eye;
calculating the width of the left eye and/or the right eye according to the four second characteristic points of the left eye and/or the four second characteristic points of the right eye;
calculating the width-to-length ratio of the left eye according to the length and the width of the left eye, and/or calculating the width-to-length ratio of the right eye according to the length and the width of the right eye;
judging whether the width-to-length ratio is smaller than a first threshold value or not;
if yes, determining the eyeball fatigue degree grade corresponding to the first image according to the width-to-length ratio and a preset first eyeball fatigue degree grade dividing rule.
6. The image display adjustment method according to claim 4, wherein the first image is a facial image of the operator, the identifying of the ocular feature of the first image to obtain ocular feature data includes:
identifying eye features of the first image to identify a left eye region and/or a right eye region;
acquiring the total number of pixels of the left eye area and/or the right eye area in the transverse direction and the total number of pixels of the left eye area and/or the right eye area in the longitudinal direction;
the obtaining the eyeball fatigue degree grade corresponding to the first image according to the obtained left eye and/or right eye feature data comprises the following steps:
calculating the aspect ratio of the left eye and/or the right eye according to the ratio between the total number of the pixel points of the left eye area and/or the right eye area in the longitudinal direction and the total number of the pixel points of the corresponding area in the transverse direction;
determining whether the aspect ratio is less than a second threshold;
if yes, determining the eyeball fatigue degree grade corresponding to the first image according to the aspect ratio and a preset second eyeball fatigue degree grade dividing rule.
7. The image display adjustment method according to claim 4, wherein the first image includes a first pupil image and a second pupil image that are simultaneously acquired, wherein the first pupil image is an infrared image and the second pupil image is a visible image;
The identifying the eye feature of the first image to obtain eye feature data includes:
performing differential processing on the first pupil image and the second pupil image which are acquired simultaneously to acquire a pupil differential image;
identifying the pupil difference image to identify a left eye pupil and/or a right eye pupil;
acquiring corresponding pupil aspect ratio data according to the identified left eye pupil and/or right eye pupil;
the obtaining the eyeball fatigue degree grade corresponding to the first image according to the obtained eye feature data of the left eye and/or the right eye comprises the following steps:
calculating a left eye and/or right eye closing degree value according to the pupil height-width ratio data and the pre-stored maximum pupil height-width ratio data of the left eye and/or the right eye;
judging whether the closing degree value is larger than a third threshold value or not;
if yes, determining the eyeball fatigue degree grade corresponding to the first image according to the closing degree value and a preset third eyeball fatigue degree grade dividing rule.
8. The image display adjustment method according to claim 2, characterized in that the acquiring the intraoperative eyeball use duration data of the operator includes:
acquiring pupil videos and view field videos of the operator, wherein the pupil videos and the view field videos are acquired in a time period from the beginning of the operation to the current moment, and pupil images in the pupil videos correspond to view field images in the view field videos one by one;
the pupil image for each frame in the pupil video performs the following operations:
according to the pupil image, acquiring position information of pupil center points of the left eye and/or the right eye under the pupil image coordinate system;
registering the pupil image and the view field image corresponding to the pupil image to obtain a space transformation matrix between the pupil image and the view field image;
superposing the left eye pupil center point and/or the right eye pupil center point on the view field image according to the space transformation matrix and the position information of the left eye pupil center point and/or the right eye pupil center point under the pupil image coordinate system; and
judging whether a first gazing point corresponding to the left eye pupil center point and a second gazing point corresponding to the right eye pupil center point exist in a view field area in the view field image according to the superposition result, if the first gazing point exists, judging that the left eye of the operator is in a use state at the acquisition time of the pupil image, and/or if the second gazing point exists, judging that the right eye of the operator is in a use state at the acquisition time of the pupil image;
and calculating the use duration of the left eye and/or the right eye of the operator according to the acquisition time of each frame of pupil image in which the left eye and/or the right eye is in a use state.
9. The image display adjustment method according to claim 1, wherein the acquiring surgical condition data related to a surgery includes:
acquiring surgical scene data and surgical duration data related to a surgery; the acquiring surgical scene data includes:
acquiring an in-vivo scene image, and identifying the in-vivo scene image to acquire surgical position information and scene color information, thereby acquiring surgical scene data; and/or
the acquiring surgical duration data includes:
acquiring a surgical scene video acquired in a time period from the beginning of a surgery to the current moment;
and inputting the surgical scene video into a pre-trained surgical procedure identification model for time sequence detection so as to acquire surgical duration data.
10. The image display adjustment method according to claim 1, wherein the target visual parameter includes at least one of a target brightness, a target contrast, a target resolution, and a target saturation; the image display adjustment method further includes:
comparing the adjustment result of the display effect of the in-vivo scene image based on the target visual parameter with the image display effect of the corresponding standard surgical scene in the pre-established standard surgical scene image display effect database to judge whether the current image display effect accords with the standard surgical scene;
if not, continuing to adjust the display effect of the in-vivo scene image based on the image display effect of the corresponding standard operation scene.
11. An image display adjustment system, characterized by comprising a controller configured to implement the image display adjustment method of any one of claims 1 to 10.
12. A readable storage medium, characterized in that the readable storage medium has stored therein a computer program which, when executed by a processor, implements the image display adjustment method of any one of claims 1 to 10.