CN117274139A - Method, device, equipment and storage medium for identifying oral cavity part - Google Patents


Info

Publication number
CN117274139A
CN117274139A (application CN202210679521.9A)
Authority
CN
China
Prior art keywords
hidden, view, oral mucosa, oral, marking
Prior art date
Legal status (assumed by Google Patents; not a legal conclusion)
Pending
Application number
CN202210679521.9A
Other languages
Chinese (zh)
Inventor
秦亚茜
穆龙伟
王琪
王涛
康丽君
王嘉浩
赵大平
贾丽
贾文亮
谷越
Current Assignee (the listed assignees may be inaccurate; no legal analysis performed)
Winning Health Technology Group Co Ltd
Original Assignee
Winning Health Technology Group Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Winning Health Technology Group Co Ltd
Priority to CN202210679521.9A
Publication of CN117274139A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; context of image processing
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application provides a method, an apparatus, a device, and a storage medium for identifying an oral cavity part, relating to the field of medical technology. An oral mucosa front view is displayed on a graphical user interface of an electronic device, and the front view shows a plurality of parts in the oral cavity. The method comprises: in response to a first trigger operation on a movable part in the oral mucosa front view, controlling the movable part to swing so as to display, on the graphical user interface, a hidden part covered by the movable part; and, in response to a marking operation on the hidden part, displaying a marking identifier on the hidden part. By applying the embodiments of the application, the process by which a doctor marks a lesion position in the oral mucosa is simplified and the doctor's operational burden is reduced.

Description

Method, device, equipment and storage medium for identifying oral cavity part
Technical Field
The present application relates to the field of medical technology, and in particular, to a method, apparatus, device, and storage medium for identifying an oral cavity site.
Background
As living standards rise, the demand for oral health care keeps growing, which poses a greater challenge to current medical resources and medical technology.
Currently, doctors typically mark a patient's lesion position manually on the medical record using an oral mucosa map stamp.
However, for some lesion positions the doctor cannot mark the lesion intuitively in the mucosa map stamp and must instead draw a simple sketch on the medical record to explain it, which makes the doctor's workflow cumbersome.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present application aims to provide a method, an apparatus, a device, and a storage medium for identifying an oral cavity part that can simplify the doctor's workflow.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a method for identifying an oral cavity part, in which an oral mucosa front view is displayed on a graphical user interface of an electronic device and a plurality of parts in the oral cavity are shown in the front view. The method comprises:
controlling the movable part to swing in response to a first trigger operation for the movable part in the oral mucosa front view so as to display a hidden part hidden by the movable part on the graphical user interface;
in response to a marking operation on the hidden part, displaying a marking identifier on the hidden part.
Optionally, the movable part comprises a plurality of regions;
the controlling the movable part to swing in response to a first trigger operation for the movable part in the oral mucosa front view comprises:
and controlling the movable part to swing in a first preset direction corresponding to a first area of the movable part in response to a first trigger operation for the first area, wherein the first area is any one area of the plurality of areas.
Optionally, the displaying, in response to the marking operation for the hidden portion, a marking identifier on the hidden portion includes:
determining, in response to a marking operation on the hidden part and a sliding operation on the graphical user interface, a sliding track of the sliding operation;
displaying the marking identifier on the hidden part according to the sliding track.
Optionally, the method further comprises:
and responding to a second triggering operation for the selectable part in the oral mucosa main view, and displaying the selectable part on the graphical user interface in a preset display state, wherein the preset display state is used for indicating that the selectable part is in a selected state.
Optionally, the method further comprises:
after the selectable part is selected, acquiring, in response to a third trigger operation on a remark control, the input remark information;
the remark information is stored in association with the selectable part.
Optionally, the graphical user interface further displays a plurality of site controls, each site control corresponding to a site in the oral cavity;
the method further comprises the steps of:
and responding to triggering operation of the hidden part control in the plurality of part controls, displaying the hidden part corresponding to the hidden part control in the oral mucosa front view in a preset display state, and displaying a mark identifier on the hidden part.
Optionally, the method further comprises:
converting, in response to a fifth trigger operation on a generate-medical-record control, the marked oral mucosa front view into a line chart;
the line graph is displayed in a medical record.
In a second aspect, an embodiment of the present application further provides an oral cavity site identification apparatus, where an oral cavity mucosa front view is displayed on a graphical user interface of an electronic device, and a plurality of sites in an oral cavity are displayed in the oral cavity mucosa front view, where the apparatus includes:
a control module, configured to control, in response to a first trigger operation on a movable part in the oral mucosa front view, the movable part to swing so as to display, on the graphical user interface, a hidden part covered by the movable part;
and a marking module, configured to display a marking identifier on the hidden part in response to a marking operation on the hidden part.
Optionally, the movable part comprises a plurality of regions;
correspondingly, the control module is specifically configured to control the movable part to swing in a first preset direction corresponding to a first area of the movable part in response to a first trigger operation for the first area, where the first area is any one area of the plurality of areas.
Optionally, the marking module is specifically configured to determine a sliding track of the sliding operation in response to the marking operation for the hidden part and the sliding operation on the graphical user interface; and displaying a mark on the hidden part according to the sliding track.
Optionally, the apparatus further comprises: a display module;
the display module is configured to display, in response to a second trigger operation on a selectable part in the oral mucosa front view, the selectable part on the graphical user interface in a preset display state, where the preset display state indicates that the selectable part is selected.
Optionally, the apparatus further comprises: an acquisition module;
the acquisition module is used for responding to a third triggering operation for the remark control after the selectable part is selected, and acquiring input remark information; the remark information is stored in association with the selectable part.
Optionally, the graphical user interface further displays a plurality of site controls, each site control corresponding to a site in the oral cavity;
correspondingly, the display module is further configured to display, in response to a trigger operation on a hidden-part control among the plurality of part controls, the hidden part corresponding to that control in the oral mucosa front view in a preset display state, and to display the marking identifier on the hidden part.
Optionally, the apparatus further comprises: a conversion module;
the conversion module is configured to convert, in response to a fifth trigger operation on the generate-medical-record control, the marked oral mucosa front view into a line chart; the line chart is displayed in the medical record.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a storage medium, and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device is running, the processor communicates with the storage medium over the bus and executes the machine-readable instructions to perform the steps of the oral cavity part identification method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the oral cavity part identification method of the first aspect.
The beneficial effects of this application are:
the embodiment of the application provides an oral cavity part identification method, an oral cavity part identification device and a storage medium, wherein an oral cavity mucous membrane main view is displayed on a graphical user interface of electronic equipment, and a plurality of parts in an oral cavity are displayed in the oral cavity mucous membrane main view, and the method comprises the following steps: controlling the movable part to swing in response to a first trigger operation for the movable part in the oral mucosa front view so as to display a hidden part hidden by the movable part on a graphical user interface; in response to a marking operation for the hidden location, a marking indicia is displayed on the hidden location.
With this method for marking an oral cavity part, the oral mucosa front view is displayed graphically on the graphical user interface of the electronic device. When the lesion position lies on a part that the front view cannot directly expose on the interface, that is, on a hidden part covered by a movable part, the doctor performs a first trigger operation on the movable part; because the movable part has the attribute of being able to swing, the hidden part is then shown in the front view. A marking operation by the doctor on a position of the hidden part then causes the marking identifier corresponding to the lesion position to be displayed in the front view. In other words, even when the lesion position lies on a hidden part of the oral mucosa front view, it can be marked directly on the front view, which simplifies the doctor's process of marking lesion positions in the oral mucosa, reduces operational complexity, and makes the marking of oral lesion positions more intelligent.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a graphical user interface displaying a front view of an oral mucosa according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an oral cavity part identification method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a front view of an oral mucosa corresponding to a "right lingual margin" area according to an embodiment of the present application;
fig. 4 is a schematic diagram of a front view of an oral mucosa corresponding to a "left lingual margin" area provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a front view of an oral mucosa corresponding to a "tongue tip" area according to an embodiment of the present application;
fig. 6 is a schematic diagram showing a marking mark of a bottom of an oral cavity on a front view of an oral mucosa according to an embodiment of the present application;
fig. 7 is a schematic diagram of an optional portion in a front view of an oral mucosa according to an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic medical record according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an oral cavity part identification apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Before the embodiments of the present application are explained in detail, an application scenario is described first. The application scenario may be identifying a lesion position on a part of the oral mucosa; the part may be, for example, the tongue or the bottom of the mouth, and the lesion position may be, for example, a certain position on the bottom of the tongue, which is not limited in the present application. Fig. 1 is a schematic diagram of a graphical user interface displaying an oral mucosa front view according to an embodiment of the present application. As shown in fig. 1, the graphical user interface may be displayed on an electronic device and may contain an editing area 10 and a medical record display area 20, where the editing area 10 may be located on the left side of the graphical user interface and the medical record display area 20 on the right side. The editing area 10 includes an oral mucosa front view 100 that shows a plurality of parts in the oral cavity. According to whether a part can be observed directly, the parts may be divided into non-hidden parts and hidden parts; the non-hidden parts may be, for example, the tongue, the upper lip, and the lower lip, and the hidden parts may be, for example, the bottom of the tongue and the bottom of the mouth. It should be noted that the present application does not limit the non-hidden parts and hidden parts.
It should be understood that each part in the oral mucosa front view 100 corresponds to attribute information, which may include picture information, a touch event, and the like. The touch event indicates that the corresponding part can swing when triggered, that is, that the part is a movable part. As an example, suppose that during an examination the doctor finds that the patient's oral lesion is located under the tongue. Based on the examples below, the present application can then use the electronic device to assist the doctor in marking the lesion position under the patient's tongue, which simplifies the doctor's process of marking lesion positions in the oral mucosa and avoids cumbersome operation. The electronic device may specifically be a desktop computer, a notebook computer, or another device with display and data processing functions.
Further, after the patient's lesion position has been marked, the lesion position, its marking information, and the remark information can be stored in association. After a trigger operation on the generate-medical-record control, an electronic medical record corresponding to the patient can be generated and displayed in the medical record display area 20. The electronic medical record includes a line chart corresponding to the marked oral mucosa front view, the marking information of the lesion position, and the like, which improves the efficiency of generating electronic medical records.
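The association-and-generate flow just described can be sketched as follows. All function and field names here are illustrative assumptions; the real system would additionally render the marked front view as a line chart.

```python
def store_annotation(store, part, mark, remark=None):
    """Store a lesion's mark and the doctor's remark in association with its part."""
    store[part] = {"mark": mark, "remark": remark}

def generate_record(store, patient):
    """Emit an electronic-record entry from the stored associations."""
    return {
        "patient": patient,
        "annotations": [{"part": p, **info} for p, info in sorted(store.items())],
    }

store = {}
store_annotation(store, "mouth bottom", mark={"color": "red"}, remark="ulcer, 3 mm")
record = generate_record(store, patient="P-001")
print(len(record["annotations"]), record["annotations"][0]["part"])  # 1 mouth bottom
```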
The oral site identification method referred to in the present application is exemplified in the following with reference to the accompanying drawings. Fig. 2 is a flowchart of a method for identifying an oral cavity portion according to an embodiment of the present application, as shown in fig. 2, the method may include:
s201, responding to a first triggering operation for a movable part in the oral mucosa front view, and controlling the movable part to swing so as to display a hidden part hidden by the movable part on a graphical user interface.
The oral mucosa front view may be displayed on the graphical user interface of the electronic device in the manner of fig. 1. It will be appreciated that the graphical user interface of fig. 1 corresponds specifically to an oral mucosa map. For example, the display interface of the electronic device may include a dental chart control and an oral mucosa map control; after the electronic device receives a trigger operation instruction for the oral mucosa map control, the graphical user interface corresponding to the oral mucosa map control can be displayed, as shown in fig. 1.
As described in connection with fig. 1, the front view 100 of the oral mucosa includes non-hidden portions, which may be understood as portions of the front view 100 of the oral mucosa that are directly exposed on the graphical user interface, such as the tongue, the upper lip, and the lower lip, and hidden portions, which may be understood as portions of the front view 100 of the oral mucosa that are not directly exposed on the graphical user interface, such as the tongue bottom and the mouth bottom.
As can be seen from the above description, corresponding attribute information may be configured for each part of the oral mucosa front view 100, and the attribute information may include picture information, a touch event, and the like. There may be several non-hidden parts; among them, a non-hidden part whose attribute information includes a touch event serves as a movable part, that is, a part that can swing when triggered.
Taking the "tongue" part in the oral mucosa front view 100 as an example: after the doctor examines the patient, if the patient's lesion position lies in an area covered by the "tongue" part, for example at a certain position on the "tongue bottom" part or the "mouth bottom" part, the doctor may perform a first trigger operation on the "tongue" part in the front view. The first trigger operation may specifically be a double-click operation, a single-click operation, or the like, which is not limited in the present application. After the electronic device receives the first trigger instruction corresponding to the first trigger operation, it can control the "tongue" part to swing according to the triggered area and the corresponding swing direction; the swing direction may be, for example, upward, leftward, or rightward, which is likewise not limited here. On this basis, the other parts covered by the "tongue" part, i.e. the hidden parts such as the "tongue bottom" and "mouth bottom" parts mentioned above, can be displayed in the oral mucosa front view 100.
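The swing-and-reveal behaviour of step S201 can be sketched with a minimal in-memory view state. The function and key names (`handle_first_trigger`, `hides`, `swung`) are assumptions for illustration, not from the patent.

```python
def handle_first_trigger(view_state, part_name):
    """On a first trigger operation (e.g. a double click) on a movable part,
    mark it as swung and expose the hidden parts it was covering."""
    part = view_state["parts"][part_name]
    if not part.get("movable"):
        return view_state  # non-movable parts ignore the trigger
    part["swung"] = True
    for hidden in part.get("hides", []):
        view_state["visible"].add(hidden)  # hidden part now shown on the GUI
    return view_state

state = {
    "parts": {
        "tongue": {"movable": True, "hides": ["tongue bottom", "mouth bottom"]},
        "upper lip": {"movable": False},
    },
    "visible": {"tongue", "upper lip", "lower lip"},
}
handle_first_trigger(state, "tongue")
print(sorted(state["visible"]))
```

After the trigger, both the "tongue bottom" and "mouth bottom" parts join the visible set, matching the behaviour described for the tongue example.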
S202, in response to a marking operation on the hidden part, displaying a marking identifier on the hidden part.
After the hidden part is exposed on the graphical user interface, the doctor may, for example, trigger a marking control displayed in the editing area 10 and then mark the specific lesion position on the hidden part. That is, based on the received marking trigger instruction of the marking control and the doctor's marking operation on the lesion position of the hidden part, the electronic device may display a marking identifier at the lesion position of the hidden part in the oral mucosa front view 100. The marking identifier may include shape information, color information, and the like, so that a user (such as a doctor or the patient) can intuitively learn the specific lesion position in the oral cavity from the marking identifier.
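Step S202 implies that a mark can only be placed on a part that is currently shown. A hypothetical sketch of that check, with assumed names throughout:

```python
def place_mark(view_state, part_name, position, shape="freehand", color="red"):
    """Record a marking identifier (shape + color + position) on a part,
    but only once the part is visible on the interface."""
    if part_name not in view_state["visible"]:
        raise ValueError(f"{part_name} is still hidden; swing the covering part first")
    mark = {"part": part_name, "position": position, "shape": shape, "color": color}
    view_state.setdefault("marks", []).append(mark)
    return mark

state = {"visible": {"tongue", "tongue bottom"}}
m = place_mark(state, "tongue bottom", position=(120, 88))
print(m["color"], len(state["marks"]))  # red 1
```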
It should be understood that if the patient's lesion position lies on a non-hidden part (for example, the upper jaw) in the oral mucosa front view 100, the doctor may directly perform the marking operation on that non-hidden part at the lesion position, and the marking identifier is then displayed at the lesion position of the non-hidden part in the front view 100.
In summary, in the oral cavity part identification method provided by the present application, the oral mucosa front view is displayed graphically on the graphical user interface of the electronic device. When the lesion position lies on a part that the front view cannot directly expose on the interface, that is, on a hidden part covered by a movable part, the doctor performs a first trigger operation on the movable part; because the movable part can swing, the hidden part is then shown in the front view, and a subsequent marking operation by the doctor on a position of the hidden part causes the marking identifier corresponding to the lesion position to be displayed in the front view. In other words, even when the lesion position lies on a hidden part of the oral mucosa front view, it can be marked directly on the front view, which simplifies the doctor's process of marking lesion positions in the oral mucosa, reduces operational complexity, and makes the marking of oral lesion positions more intelligent.
Continuing with the description by way of example of the movable portion being a "tongue" portion, the "tongue" portion may include a plurality of regions therein.
Based on this, controlling the movable part to swing in response to a first trigger operation on the movable part in the oral mucosa front view includes: controlling, in response to a first trigger operation on a first area of the movable part, the movable part to swing in a first preset direction corresponding to the first area, where the first area is any one of the plurality of areas.
The first trigger operation may be, for example, a double-click operation, and the first area may be, for example, the "right tongue edge" area, the "left tongue edge" area, or the "tongue tip" area of the "tongue" part. Fig. 3 is a schematic diagram of the oral mucosa front view corresponding to the "right tongue edge" area according to an embodiment of the present application. Referring to fig. 1 and fig. 3, when the doctor double-clicks any position in the "right tongue edge" area 301, the electronic device may control the "tongue" part to swing to the left based on the double-click operation instruction corresponding to that area, so that the part hidden by the "right tongue edge" area, specifically the right edge of the "tongue" part, is displayed in the front view. If the lesion position lies on that right edge, the doctor can mark it directly on the right edge shown in the front view.
Fig. 4 is a schematic diagram of the oral mucosa front view corresponding to the "left tongue edge" area according to an embodiment of the present application. Referring to fig. 1 and fig. 4, when the doctor double-clicks any position in the "left tongue edge" area 302, the electronic device may control the "tongue" part to swing to the right based on the corresponding double-click operation instruction, so that the hidden part, specifically the left edge of the "tongue" part, is displayed in the front view. If the lesion position lies on that left edge, the doctor can mark it directly on the left edge shown in the front view.
Fig. 5 is a schematic diagram of the oral mucosa front view corresponding to the "tongue tip" area according to an embodiment of the present application. Referring to fig. 1 and fig. 5, when the doctor double-clicks any position in the "tongue tip" area 303, the electronic device may control the "tongue" part to swing upward, i.e. to tilt up, based on the corresponding double-click operation instruction, so that the part hidden by the "tongue tip" area, specifically the bottom of the mouth, is displayed in the front view. If the lesion position lies at the bottom of the mouth, the doctor can mark it directly on the mouth-bottom part shown in the front view.
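The three figures above describe a fixed correspondence between the triggered region of the "tongue" part, the preset swing direction, and the hidden part that is revealed. A hypothetical encoding of that mapping (the table contents follow the figures; the names are otherwise assumptions):

```python
# region of the "tongue" part -> (preset swing direction, revealed hidden part)
REGION_SWING = {
    "right tongue edge": ("left",  "right edge of tongue"),
    "left tongue edge":  ("right", "left edge of tongue"),
    "tongue tip":        ("up",    "mouth bottom"),
}

def swing_for(region):
    """Return the preconfigured (direction, revealed part) for a triggered region."""
    return REGION_SWING[region]

direction, revealed = swing_for("tongue tip")
print(direction, revealed)  # up mouth bottom
```

A lookup table like this is one natural way to preconfigure the area-to-direction correspondence the next paragraph refers to.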
It can be seen that, by dividing the movable part into areas and preconfiguring the correspondence between each area and a preset direction, the present application allows the marking identifiers of more lesion positions to be displayed in the oral mucosa front view, which is closer to an actual oral mucosa diagnosis scenario.
Optionally, displaying the marking identifier on the hidden part in response to the marking operation on the hidden part includes: determining, in response to the marking operation on the hidden part and a sliding operation on the graphical user interface, a sliding track of the sliding operation; and displaying the marking identifier on the hidden part according to the sliding track.
Referring to fig. 6, fig. 6 is a schematic diagram of displaying a marking identifier on the "bottom of mouth" portion in the oral mucosa front view. As shown in fig. 6, after receiving a marking operation instruction for a marking control 601, the electronic device may display a marking toolbar 602 in the editing area 10. The marking toolbar 602 may include a zoom control, a shape control, a color control, and so on. The zoom control may be used to zoom the oral mucosa front view 100; the shape control and the color control may be used to determine the appearance of the marking identifier. For example, if the doctor triggers the custom-shape control and the color control corresponding to red, then in response to a sliding operation on the graphical user interface the electronic device determines the sliding track of that operation; the marking identifier 600 displayed on the "bottom of mouth" portion is that sliding track, drawn in red. Of course, the marking toolbar 602 may also include shape controls corresponding to specific shapes, such as a circle. It should be noted that fig. 6 is only an example; the marking toolbar 602 may further include controls such as undo, redo, delete, and remark, and the present application does not limit the types of controls in the marking toolbar 602.
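The sliding-track marking identifier described above could be modeled as a sequence of pointer samples plus the state chosen in the toolbar. A minimal sketch with hypothetical field names, not the application's actual data structure:

```python
from dataclasses import dataclass, field

@dataclass
class MarkTrack:
    """A freehand marking identifier built from a sliding (drag) gesture."""
    color: str = "red"                          # chosen via the color control
    points: list = field(default_factory=list)  # (x, y) samples of the track

    def add_point(self, x: float, y: float) -> None:
        """Append one pointer sample received during the sliding operation."""
        self.points.append((x, y))

    def is_valid(self) -> bool:
        # A track needs at least two samples before it can be drawn.
        return len(self.points) >= 2
```

Each move event during the slide would call `add_point`, and the finished track would be rendered over the hidden portion in the chosen color.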
It can be seen that the marking identifier can be customized through the controls in the marking toolbar 602, adapting to the doctor's operating habits, and an appropriate marking identifier can be selected to mark the lesion position according to the actual diagnosis scenario.
Optionally, the method further comprises: responding to a second triggering operation for a selectable part in the oral mucosa front view, and displaying the selectable part on the graphical user interface in a preset display state, wherein the preset display state is used for indicating that the selectable part is in a selected state.
It should be appreciated that a selectable portion is any portion in the oral mucosa front view 100; that is, the selectable portions include both the hidden portions and the non-hidden portions mentioned above.
Illustratively, the selectable portions include a "soft palate" portion, a "dorsum of tongue" portion, and an "upper lip" portion; the second trigger operation is a single-click operation, and the preset display state is a preset display brightness. Referring to fig. 7, fig. 7 is a schematic diagram of selectable portions in the oral mucosa front view being selected, according to an embodiment of the present application. As shown in fig. 7, based on single-click operation instructions for the "soft palate" portion 701, the "dorsum of tongue" portion 702 and the "upper lip" portion 703 respectively, the electronic device may display these portions in the oral mucosa front view 100 at the preset display brightness, so that the doctor can more intuitively distinguish the currently selected portions in the oral mucosa front view 100.
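The preset-display-state behavior for selected sites might be tracked as below; the brightness values are assumed placeholders for illustration, not values from the application.

```python
class SitePresentation:
    """Tracks which selectable sites are shown at the preset display brightness."""
    DEFAULT_BRIGHTNESS = 1.0   # normal rendering
    SELECTED_BRIGHTNESS = 1.5  # assumed "preset display brightness"

    def __init__(self):
        self.selected = set()

    def on_single_click(self, site: str) -> float:
        """Second trigger operation: mark the site selected, return its brightness."""
        self.selected.add(site)
        return self.brightness(site)

    def brightness(self, site: str) -> float:
        return self.SELECTED_BRIGHTNESS if site in self.selected else self.DEFAULT_BRIGHTNESS
```

A renderer would query `brightness(site)` for each site in the front view, so selected sites stand out without redrawing the whole diagram.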
Optionally, the method may further include: after the selectable part is selected, responding to a third triggering operation for the remark control, and acquiring input remark information; the remark information is stored in association with the selectable location.
Continuing with fig. 7, the editing area 10 in fig. 7 may further include a selected-portion recording area 704 and a remark recording area 705. The selected-portion recording area 704 is used for recording the portions currently in the selected state, such as the "soft palate", "dorsum of tongue" and "upper lip" portions, and the remark recording area 705 is used for recording the remark information corresponding to each selected portion.
Illustratively, the graphical user interface includes a remark control. After a selectable portion is selected, the electronic device may display a text box in an area associated with the selected portion based on a third trigger operation instruction for the remark control, and the doctor may input remark information through the text box. As another example, as described above, the marking toolbar 602 may include a remark control; after the lesion position has been marked on the selected portion, upon receiving a third trigger operation on that remark control, a text box may be displayed in an area associated with the lesion position, through which the doctor may input remark information. After the remark operation is completed, the remark information associated with each selectable portion may be displayed in the remark recording area 705; that is, the selectable portion and its remark information are stored in association.
It can be understood that the remark information may specifically include the lesion type, the analyzed cause of the lesion, treatment advice and the like; based on the remark information, the patient can understand the cause of the oral mucosa lesion in a targeted manner and receive effective treatment.
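The associated storage of selectable sites and remark information described above could be sketched as a small keyed store; the names are illustrative assumptions.

```python
class RemarkStore:
    """Stores remark information in association with each selectable site."""

    def __init__(self):
        self._remarks = {}  # site name -> list of remark strings

    def add_remark(self, site: str, text: str) -> None:
        """Associate one piece of remark information with a selected site."""
        self._remarks.setdefault(site, []).append(text)

    def remarks_for(self, site: str) -> list:
        """Return the remarks recorded for a site (empty list if none)."""
        return list(self._remarks.get(site, []))
```

The remark recording area 705 would then simply render `remarks_for(site)` for every site in the selected-portion recording area.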
Optionally, the graphical user interface further displays a plurality of site controls, each corresponding to a site in the oral cavity, such as an upper-lip site control corresponding to the "upper lip" site, a soft-palate site control corresponding to the "soft palate" site, and a bottom-of-mouth site control corresponding to the "bottom of mouth" site.
Based on this, the above method further comprises: responding to triggering operation of hidden part controls in the plurality of part controls, displaying hidden parts corresponding to the hidden part controls in a preset display state in the oral mucosa front view, and displaying mark identifiers on the hidden parts.
Illustratively, the graphical user interface includes an oral-site control display area containing the plurality of site controls; these may include hidden-site controls, such as a bottom-of-mouth control and a bottom-of-tongue control. If the lesion is located at the "upper lip" site, the doctor may trigger the upper-lip site control, for example by clicking or double-clicking; the electronic device may then display the "upper lip" site in the preset display state and, based on a trigger operation on the marking control, display a marking identifier on it. If the lesion is located at the "bottom of mouth" site, the doctor may likewise trigger the bottom-of-mouth site control, and the electronic device may display the "bottom of mouth" site in the preset display state. The preset display state may include a preset display brightness and a preset magnification, based on which the "bottom of mouth" site can be highlighted in the oral mucosa front view, facilitating the subsequent marking of the lesion position. The step of displaying the marking identifier on the "bottom of mouth" site is as described in the relevant sections above and is not repeated here.
In another example, if the lesion is located at a hidden site such as the "bottom of mouth" site, then after the trigger operation on the bottom-of-mouth site control is received, the oral site that hides the "bottom of mouth" site, such as the "tongue" site, may itself be hidden, i.e., no longer displayed in the oral mucosa front view, while the "bottom of mouth" site corresponding to the control is displayed in the preset display state.
It should be noted that, when there are many hidden sites, the interaction mode of displaying a hidden site through its site control may be combined with the interaction mode, described above, of displaying a hidden site directly on the oral mucosa front view, so that every hidden site can be displayed on the oral mucosa front view.
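The hidden-site control behavior of the second example (hide the occluding site, show the hidden one) can be sketched as a set transformation over the rendered sites; the occluder mapping is an assumption for illustration.

```python
def show_hidden_site(view_sites: set, occluders: dict, hidden_site: str) -> set:
    """Return the sites to render after a hidden-site control is triggered.

    `occluders` maps each hidden site to the site covering it (e.g. the
    "tongue" covers the "bottom of mouth"); the occluder is removed from
    the rendered set and the hidden site is added in its place.
    """
    rendered = set(view_sites)
    rendered.discard(occluders.get(hidden_site))  # hide the covering site
    rendered.add(hidden_site)                     # reveal the hidden site
    return rendered
```

Triggering the bottom-of-mouth control would thus drop the "tongue" from the front view and add the "bottom of mouth" for highlighting and marking.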
Optionally, the method may further include: responding to a fifth triggering operation for generating a medical record control, and converting the marked oral mucosa front view into a line graph; the line graph is displayed in a medical record.
Referring to fig. 7, the editing area 10 in fig. 7 may further include a medical-record generating control 706. After the doctor performs a double-click operation on the medical-record generating control 706, the marked oral mucosa front view may be converted into a line graph according to a correspondence between the mucosa diagram and the line graph. Referring to fig. 8, fig. 8 is a schematic diagram of an electronic medical record according to an embodiment of the present application; as can be seen from fig. 8, the converted line graph 800 may include a marking identifier 801 at the lesion position.
Referring to fig. 7 and fig. 8, after the doctor performs a double-click operation on the medical-record generating control 706 in fig. 7, the electronic medical record in fig. 8 may be generated from the information in the selected-portion recording area 704 and the remark recording area 705 together with the marked oral mucosa front view. The electronic medical record includes not only the line graph 800 corresponding to the marked oral mucosa front view but also the contents of the selected-portion recording area 704 and the remark recording area 705, such as the lesion type, which may be displayed after the "chief complaint" heading. It should be noted that the present application does not limit the content of the electronic medical record.
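The conversion of the marked front view into a line graph via a mucosa-to-line-graph correspondence, with marking identifiers carried over, might look like the following sketch; the data shapes and names are hypothetical, not the patent's format.

```python
def to_line_graph(marked_view: dict, correspondence: dict) -> dict:
    """Convert a marked mucosa front view into a line graph for the record.

    `correspondence` maps each mucosa-view element id to its line-graph
    element id; marking identifiers are copied over unchanged so the
    lesion position remains visible in the converted line graph.
    """
    return {
        "elements": [correspondence[e] for e in marked_view["elements"]],
        "markers": list(marked_view["markers"]),
    }
```

The medical-record generating control would call this once and embed the result, together with the selected-site list and remarks, into the electronic record.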
It can be seen that an oral mucosa image can be inserted into the electronic medical record; during insertion, the marked oral mucosa front view is converted into a line graph for display in the record, and the line graph carries the marking information, which may include the marking identifiers and the remark information. In this way, a lesion at a hidden site can be marked intuitively for the patient in the line graph corresponding to the oral mucosa front view, allowing the patient to understand the situation promptly and fully, and avoiding the need for the doctor to mark the lesion position for the patient by hand on a stamped mucosa diagram.
Fig. 9 is a schematic structural view of an oral cavity portion marking device according to an embodiment of the present application. As shown in fig. 9, the apparatus includes:
the control module 901 is used for responding to a first triggering operation for a movable part in the front view of the oral mucosa and controlling the movable part to swing so as to display a hidden part hidden by the movable part on the graphical user interface;
a marking module 902 for displaying a marking indicia on the hidden location in response to a marking operation for the hidden location.
Optionally, the movable part comprises a plurality of regions;
accordingly, the control module 901 is specifically configured to control, in response to a first trigger operation for a first area of the movable portion, the movable portion to swing in a first preset direction corresponding to the first area, where the first area is any one area of the plurality of areas.
Optionally, the marking module 902 is specifically configured to determine a sliding track of the sliding operation in response to the marking operation for the hidden portion and the sliding operation on the graphical user interface; and displaying the mark on the hidden part according to the sliding track.
Optionally, the apparatus further comprises: a display module;
the display module is used for responding to a second triggering operation for the selectable part in the oral mucosa front view, displaying the selectable part on the graphical user interface in a preset display state, wherein the preset display state is used for indicating that the selectable part is in a selected state.
Optionally, the apparatus further comprises: an acquisition module;
the acquisition module is used for responding to a third triggering operation for the remark control after the selectable part is selected, and acquiring input remark information; the remark information is stored in association with the selectable location.
Optionally, the graphical user interface further displays a plurality of site controls, each site control corresponding to a site in the oral cavity;
correspondingly, the display module is further used for responding to triggering operation of the hidden part control in the plurality of part controls, displaying the hidden part corresponding to the hidden part control in the oral mucosa front view in a preset display state, and displaying the mark on the hidden part.
Optionally, the apparatus further comprises: a conversion module;
the conversion module is used for responding to a fifth triggering operation for generating the medical record control and converting the marked oral mucosa front view into a line graph; the line graph is displayed in a medical record.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), one or more digital signal processors (Digital Signal Processor, DSP), one or more field-programmable gate arrays (Field Programmable Gate Array, FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor capable of invoking the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-chip (SOC).
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 10, the electronic device may include a processor 1001, a storage medium 1002, and a bus 1003. The storage medium 1002 stores machine-readable instructions executable by the processor 1001; when the electronic device is operating, the processor 1001 and the storage medium 1002 communicate over the bus 1003, and the processor 1001 executes the machine-readable instructions to perform the steps of the method embodiments described above. The specific implementation and technical effects are similar and are not repeated here.
Optionally, the present application further provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor performs the steps of the above-described method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (english: processor) to perform part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: u disk, mobile hard disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. The foregoing description is only of the preferred embodiments of the present application and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An oral site identification method, characterized in that an oral mucosa front view is displayed on a graphical user interface of an electronic device, and a plurality of sites in an oral cavity are displayed in the oral mucosa front view, the method comprising:
controlling the movable part to swing in response to a first trigger operation for the movable part in the oral mucosa front view so as to display a hidden part hidden by the movable part on the graphical user interface;
in response to a marking operation for the hidden location, a marking indicia is displayed on the hidden location.
2. The method of claim 1, wherein the movable region comprises a plurality of regions;
the controlling the movable part to swing in response to a first trigger operation for the movable part in the oral mucosa front view comprises:
and controlling the movable part to swing in a first preset direction corresponding to a first area of the movable part in response to a first trigger operation for the first area, wherein the first area is any one area of the plurality of areas.
3. The method of claim 1, wherein the displaying a marker identification on the hidden location in response to a marking operation for the hidden location comprises:
determining a sliding track of the sliding operation in response to a marking operation for the hidden part and a sliding operation on the graphical user interface;
and displaying a mark on the hidden part according to the sliding track.
4. The method according to claim 1, wherein the method further comprises:
and responding to a second triggering operation for the selectable part in the oral mucosa front view, and displaying the selectable part on the graphical user interface in a preset display state, wherein the preset display state is used for indicating that the selectable part is in a selected state.
5. The method according to claim 4, wherein the method further comprises:
after the selectable part is selected, responding to a third triggering operation for the remark control, and acquiring input remark information;
the remark information is stored in association with the selectable part.
6. The method of any one of claims 1-5, wherein the graphical user interface further displays a plurality of site controls, each site control corresponding to a site in the oral cavity;
the method further comprises the steps of:
and responding to triggering operation of the hidden part control in the plurality of part controls, displaying the hidden part corresponding to the hidden part control in the oral mucosa front view in a preset display state, and displaying a mark identifier on the hidden part.
7. The method according to any one of claims 1-5, further comprising:
responding to a fifth triggering operation for generating a medical record control, and converting the marked oral mucosa front view into a line graph;
the line graph is displayed in a medical record.
8. An oral site identification apparatus, characterized in that an oral mucosa front view is displayed on a graphical user interface of an electronic device, in which a plurality of sites in an oral cavity are displayed, the apparatus comprising:
a control module for controlling the movable part to swing in response to a first trigger operation for the movable part in the oral mucosa front view, so as to display a hidden part hidden by the movable part on the graphical user interface;
and the marking module is used for displaying a marking mark on the hidden part in response to marking operation on the hidden part.
9. An electronic device, comprising: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium in communication over the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the oral site identification method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the oral site identification method according to any of claims 1-7.
CN202210679521.9A 2022-06-15 2022-06-15 Method, device, equipment and storage medium for identifying oral cavity part Pending CN117274139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210679521.9A CN117274139A (en) 2022-06-15 2022-06-15 Method, device, equipment and storage medium for identifying oral cavity part


Publications (1)

Publication Number Publication Date
CN117274139A true CN117274139A (en) 2023-12-22

Family

ID=89206881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210679521.9A Pending CN117274139A (en) 2022-06-15 2022-06-15 Method, device, equipment and storage medium for identifying oral cavity part

Country Status (1)

Country Link
CN (1) CN117274139A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination