CN115981511A - Medical information processing method and terminal equipment - Google Patents

Medical information processing method and terminal equipment

Info

Publication number: CN115981511A
Authority: CN (China)
Prior art keywords: information, human body, target, dimensional human, treated
Legal status: Pending
Application number: CN202310086832.9A
Other languages: Chinese (zh)
Inventors: 张伟, 孙前方, 江少勤, 殷雄初, 周炜, 陈义平, 俞丹丹, 许媛媛, 黄琳, 汪云雷
Current Assignee: Winning Health Technology Group Co Ltd
Original Assignee: Winning Health Technology Group Co Ltd
Application filed by Winning Health Technology Group Co Ltd
Priority to: CN202310086832.9A
Publication of: CN115981511A

Abstract

The invention provides a medical information processing method and terminal equipment, and relates to the technical field of computers. The medical information processing method includes: determining a target three-dimensional human body model for an object to be treated from a plurality of preset three-dimensional human body models according to attribute information of the object to be treated; displaying the target three-dimensional human body model in a graphical user interface; in response to a part selection operation input by a first medical user through the graphical user interface, determining training part information for the object to be treated; and displaying mark information at a position corresponding to the training part information in the target three-dimensional human body model to obtain a marked target three-dimensional human body model, so as to indicate the training part information. The first medical user can determine the training part information through the part selection operation input through the graphical user interface, and the mark information displayed at the corresponding position in the target three-dimensional human body model allows the marked training part to be shown intuitively, which improves the user experience.

Description

Medical information processing method and terminal equipment
Technical Field
The invention relates to the technical field of computers, in particular to a medical information processing method and terminal equipment.
Background
Rehabilitation medicine is the science of restoring the functions of the human body, so its treatment process is closely tied to the human body itself, for example rehabilitation through physical therapy directed at human body parts, rehabilitation through acupuncture directed at human acupoints, and musculoskeletal rehabilitation directed at muscles and bones.
In the related art, the diagnostician records, in text form, the parts of the patient that need rehabilitation according to the patient's condition, and the therapist then performs rehabilitation on the patient according to the text information recorded by the diagnostician.
However, in the related art, recording the parts of the patient that need rehabilitation as plain text is not intuitive, which degrades the user experience.
Disclosure of Invention
The present invention aims to provide a medical information processing method and a terminal device, so as to solve the above technical problems in the related art.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a medical information processing method, including: determining a target three-dimensional human body model aiming at an object to be treated from a plurality of preset three-dimensional human body models according to the attribute information of the object to be treated;
displaying the target three-dimensional human body model in a graphical user interface;
responding to a part selection operation input by a first medical user through the graphical user interface, and determining training part information aiming at the object to be treated;
and displaying mark information at a position corresponding to the training part information in the target three-dimensional human body model to obtain the marked target three-dimensional human body model so as to indicate the training part information.
Optionally, before displaying the labeled information at the position corresponding to the training part information in the target three-dimensional human body model to obtain a labeled target three-dimensional human body model to indicate the training part information, the method further includes:
determining a position corresponding to the training part information in the target three-dimensional human body model according to a preset mapping relation and the training part information;
the preset mapping relationship is a mapping relationship between a plurality of preset human body part information and positions in the target three-dimensional human body model.
Optionally, the graphical user interface further displays: a plurality of site selection frames; the determining training part information aiming at the object to be treated in response to the part selection operation input by the first medical user through the graphical user interface comprises the following steps:
responding to a first selection operation input by the first medical user for the plurality of part selection boxes, and displaying a part list in an area corresponding to a target selection box selected by the first selection operation;
determining the training part information in response to a second selection operation for a target part among the plurality of parts of the part list.
Optionally, the graphical user interface further displays: at least one perspective control; the method further comprises the following steps:
and responding to a second selection operation input aiming at the at least one visual angle control, and adjusting the display visual angle of the marked target three-dimensional human body model by adopting the visual angle corresponding to the target visual angle control selected by the second selection operation.
Optionally, before the determining, according to the attribute information of the object to be treated, a target three-dimensional human body model for the object to be treated from a plurality of preset three-dimensional human body models, the method further includes:
creating a plurality of initial three-dimensional human body models according to the attribute information of the sample objects, wherein the initial three-dimensional human body models are used for representing the shape of a human body;
and projecting various types of human body information on the plurality of initial three-dimensional human body models to obtain a plurality of preset three-dimensional human body models.
Optionally, the multiple types of human body information include at least two of the following items: human body part information, human body muscle information, human body skeleton information, human body acupuncture point information and human body channel information.
Optionally, the graphical user interface further displays: a plurality of treatment information selection regions corresponding to the plurality of part selection frames, respectively; the method further comprises the following steps:
and editing the rehabilitation therapy information aiming at the object to be treated in the target therapy information selection area in response to the editing operation input aiming at the target therapy information selection area in the plurality of therapy information selection areas.
Optionally, the method further includes:
responding to the confirmation operation input by the first medical user through the graphical user interface, and generating a rehabilitation treatment application form aiming at the object to be treated according to the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information;
and sending the rehabilitation application form to a terminal device corresponding to a second medical user, wherein the rehabilitation application form is used for indicating the second medical user to execute rehabilitation training service on the object to be treated.
Optionally, the method further includes:
and sending the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information to a terminal device corresponding to the object to be treated so that the terminal device corresponding to the object to be treated displays the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information.
In a second aspect, an embodiment of the present invention further provides a terminal device, including: a processor and a memory storing a computer program executable by the processor, wherein the processor implements the medical information processing method according to any one of the first aspect when executing the computer program.
In a third aspect, an embodiment of the present invention further provides a medical information processing apparatus, including:
the device comprises a first determining module, a second determining module and a control module, wherein the first determining module is used for determining a target three-dimensional human body model aiming at an object to be treated from a plurality of preset three-dimensional human body models according to attribute information of the object to be treated;
the first display module is used for displaying the target three-dimensional human body model in a graphical user interface;
the second determination module is used for responding to the part selection operation input by the first medical user through the graphical user interface and determining the training part information aiming at the object to be treated;
and the second display module is used for displaying mark information at a position corresponding to the training part information in the target three-dimensional human body model to obtain the marked target three-dimensional human body model so as to indicate the training part information.
Optionally, the apparatus further comprises:
the third determining module is used for determining the position corresponding to the training part information in the target three-dimensional human body model according to a preset mapping relation and the training part information;
the preset mapping relationship is a mapping relationship between a plurality of preset human body part information and positions in the target three-dimensional human body model.
Optionally, the graphical user interface further displays: a plurality of site selection frames; the second determining module is specifically configured to, in response to a first selection operation input by the first medical user for the plurality of site selection boxes, display a site list in an area corresponding to a target selection box selected by the first selection operation; determining the training site information in response to a second selection operation for a target site among the plurality of sites of the site list.
Optionally, the graphical user interface further displays: at least one perspective control; the device further comprises:
and the adjusting module is used for responding to a second selection operation input aiming at the at least one visual angle control, and adjusting the display visual angle of the marked target three-dimensional human body model by adopting the visual angle corresponding to the target visual angle control selected by the second selection operation.
Optionally, the apparatus further comprises:
the system comprises a creating module, a generating module and a processing module, wherein the creating module is used for creating a plurality of initial three-dimensional human body models according to the attribute information of a plurality of sample objects, and the initial three-dimensional human body models are used for representing the shape of a human body;
and the projection module is used for projecting various types of human body information on the plurality of initial three-dimensional human body models to obtain a plurality of preset three-dimensional human body models.
Optionally, the multiple types of human body information include at least two of the following items: human body part information, human body muscle information, human body skeleton information, human body acupuncture point information and human body channel information.
Optionally, the graphical user interface further displays: a plurality of treatment information selection areas corresponding to the plurality of part selection frames, respectively; the device further comprises:
and the editing module is used for responding to the editing operation input aiming at a target treatment information selection area in the plurality of treatment information selection areas and editing the rehabilitation treatment information aiming at the object to be treated in the target treatment information selection area.
Optionally, the apparatus further comprises:
the generating module is used for responding to the confirmation operation input by the first medical user through the graphical user interface and generating a rehabilitation application form aiming at the object to be treated according to the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation information;
and the first sending module is used for sending the rehabilitation application form to the terminal equipment corresponding to the second medical user, and the rehabilitation application form is used for indicating the second medical user to execute rehabilitation training service on the object to be treated.
Optionally, the apparatus further comprises:
and the second sending module is used for sending the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information to the terminal equipment corresponding to the object to be treated so as to enable the terminal equipment corresponding to the object to be treated to display the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the storage medium, and when the computer program is read and executed, the computer program implements the method according to any one of the above first aspects.
In summary, an embodiment of the present invention provides a medical information processing method, including: determining a target three-dimensional human body model for the object to be treated from a plurality of preset three-dimensional human body models according to the attribute information of the object to be treated; displaying the target three-dimensional human body model in a graphical user interface; in response to a part selection operation input by a first medical user through the graphical user interface, determining training part information for the object to be treated; and displaying mark information at the position corresponding to the training part information in the target three-dimensional human body model to obtain the marked target three-dimensional human body model, so as to indicate the training part information. The first medical user can determine the training part information through the part selection operation input through the graphical user interface, which facilitates the operation of the first medical user, and the mark information displayed at the corresponding position in the target three-dimensional human body model allows the marked training part to be shown intuitively, which improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a medical information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a graphical user interface provided by an embodiment of the invention;
fig. 3 is a schematic flow chart of a medical information processing method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a medical information processing method according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of a medical information processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a graphical user interface provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of a software framework based on which a medical information processing method according to an embodiment of the present invention is provided;
fig. 8 is a schematic structural diagram of a medical information processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it should be noted that if terms such as "upper" and "lower" are used to indicate an orientation or positional relationship, this is based on the orientation or positional relationship shown in the drawings or the orientation or positional relationship in which the product of the application is usually placed during use, and is only for convenience and simplification of the description; it does not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as a limitation of the application.
Furthermore, the terms "first," "second," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
In the related art, the diagnostician records, in text form, the parts of the patient that need rehabilitation according to the patient's condition, and the therapist then performs rehabilitation on the patient according to the text information recorded by the diagnostician. However, recording the parts of the patient that need rehabilitation as plain text is not intuitive, which degrades the user experience.
In view of the above technical problems in the related art, in the medical information processing method provided in the embodiment of the present application, the first medical user can determine the training part information through a part selection operation input through the graphical user interface, which is convenient for the first medical user to operate.
The medical information processing method provided by the embodiment of the application can be applied to terminal equipment, and the terminal equipment can be any one of the following items: desktop computers, notebook computers, tablet computers, smart phones, and the like. The terminal device may be a terminal device corresponding to the first medical user.
A medical information processing method provided in an embodiment of the present application is explained below.
Fig. 1 is a schematic flow chart of a medical information processing method according to an embodiment of the present invention, and as shown in fig. 1, the method may include:
s101, determining a target three-dimensional human body model aiming at the object to be treated from a plurality of preset three-dimensional human body models according to the attribute information of the object to be treated.
And each preset three-dimensional human body model has corresponding preset attribute information, and the plurality of preset three-dimensional human body models have a plurality of preset attribute information.
In some embodiments, target preset attribute information matching the attribute information of the object to be treated is selected from a plurality of preset attribute information according to the attribute information of the object to be treated, and a preset three-dimensional human body model corresponding to the target preset attribute information in the plurality of preset three-dimensional human body models is used as the target three-dimensional human body model for the object to be treated.
In practical applications, the plurality of preset three-dimensional human body models may be deployed in an addressable cloud model library. The model library may be developed in component form, so that in a three-dimensional scene the required three-dimensional human body model can be switched dynamically without changing the environment variables, which ensures stable output of the display effect and improves the extensibility of the model library.
Alternatively, the attribute information of the object to be treated may include the age of the object to be treated and the sex of the object to be treated.
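Purely as an illustration of how the matching in S101 might be implemented (the type names, the function name and the matching rule below are assumptions of this description, not part of the disclosed embodiment), a minimal sketch in TypeScript could look like this:

```typescript
// Hypothetical sketch of S101: picking a preset model whose preset
// attribute information matches the attribute information of the
// object to be treated (age and gender in this example).
interface PresetAttributes {
  gender: 'male' | 'female';
  minAge: number;   // lower bound of the age group covered by the model
  maxAge: number;   // upper bound of the age group covered by the model
}

interface PresetModel {
  id: string;                 // address of the model in the cloud model library
  attributes: PresetAttributes;
}

function selectTargetModel(
  models: PresetModel[],
  patient: { age: number; gender: 'male' | 'female' },
): PresetModel | undefined {
  // Keep only models whose preset gender matches the patient.
  const sameGender = models.filter(m => m.attributes.gender === patient.gender);
  // Prefer a model whose age group contains the patient's age;
  // otherwise fall back to the model whose age range is closest.
  return (
    sameGender.find(
      m => patient.age >= m.attributes.minAge && patient.age <= m.attributes.maxAge,
    ) ??
    sameGender.sort(
      (a, b) =>
        Math.abs(patient.age - (a.attributes.minAge + a.attributes.maxAge) / 2) -
        Math.abs(patient.age - (b.attributes.minAge + b.attributes.maxAge) / 2),
    )[0]
  );
}
```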
And S102, displaying the target three-dimensional human body model in a graphical user interface.
The terminal device corresponding to the first medical user can present a graphical user interface, and the target three-dimensional human body model can be displayed in the graphical user interface. Alternatively, the target three-dimensional human model may be displayed in a display area in a graphical user interface.
S103, responding to the part selection operation input by the first medical user through the graphical user interface, and determining training part information aiming at the object to be treated.
In some embodiments, in response to a site selection operation input by the first medical user through the graphical user interface, site information selected by the site selection operation in the graphical user interface is determined, and the selected site information is used as training site information for the object to be treated.
It should be noted that the above-mentioned part selection operation may be a touch operation input by the first medical user through a graphical user interface, or may also be an operation input by the first medical user through an external device of the terminal device corresponding to the first medical user, for example, the external device may be a mouse and/or a keyboard, and of course, the first medical user may also adopt an operation input in other manners, which is not specifically limited in this embodiment of the application.
In practical applications, the first medical user may be a doctor who diagnoses the object to be treated.
And S104, displaying the marking information at the position corresponding to the training part information in the target three-dimensional human body model to obtain the marked target three-dimensional human body model so as to indicate the training part information.
In this embodiment of the present application, a position corresponding to the training portion information in the target three-dimensional human body model may be highlighted, an icon or a character may be added to a position corresponding to the training portion information in the target three-dimensional human body model, or other manners may be adopted to display the mark information at the position corresponding to the training portion information in the target three-dimensional human body model, which is not specifically limited in this embodiment of the present application.
It is worth noting that, since the mark information is displayed at the position corresponding to the training part information in the target three-dimensional human body model, the user can intuitively learn the position of the training part on the human body by observing the marked target three-dimensional human body model.
In summary, an embodiment of the present invention provides a medical information processing method, including: determining a target three-dimensional human body model for the object to be treated from a plurality of preset three-dimensional human body models according to the attribute information of the object to be treated; displaying the target three-dimensional human body model in a graphical user interface; in response to a part selection operation input by a first medical user through the graphical user interface, determining training part information for the object to be treated; and displaying mark information at the position corresponding to the training part information in the target three-dimensional human body model to obtain the marked target three-dimensional human body model, so as to indicate the training part information. The first medical user can determine the training part information through the part selection operation input through the graphical user interface, which facilitates the operation of the first medical user, and the mark information displayed at the corresponding position in the target three-dimensional human body model allows the marked training part to be shown intuitively, which improves the user experience.
Optionally, before the step of displaying the labeled information at the position corresponding to the training part information in the target three-dimensional human body model in S104 to obtain the labeled target three-dimensional human body model so as to indicate the process of the training part information, the method may further include:
and determining the position corresponding to the training part information in the target three-dimensional human body model according to the preset mapping relation and the training part information.
The preset mapping relation is a mapping relation between a plurality of preset human body part information and positions in the target three-dimensional human body model.
In some embodiments, target preset body part information matched with the training part information is determined from a plurality of preset body part information of the preset mapping relation, and a target position in the target three-dimensional body model corresponding to the target preset body part information is used as a position corresponding to the training part information in the target three-dimensional body model.
It should be noted that the position corresponding to the training part information in the target three-dimensional human body model may be a point in the target three-dimensional human body model, or may be a region in the target three-dimensional human body model, which is not specifically limited in this embodiment of the present application.
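A minimal sketch of such a preset mapping relationship is given below; the part names, coordinates and mesh identifiers are hypothetical examples, and the real mapping would contain one entry per preset human body part:

```typescript
// Hypothetical sketch of the preset mapping relationship: each preset
// body-part name maps to a position in the target model, which may be
// a single point or a named region (a set of mesh/sub-object ids).
type ModelPosition =
  | { kind: 'point'; x: number; y: number; z: number }
  | { kind: 'region'; meshIds: string[] };

const presetMapping: Record<string, ModelPosition> = {
  'left shoulder': { kind: 'region', meshIds: ['shoulder_l_skin', 'deltoid_l'] },
  'lumbosacral region': { kind: 'point', x: 0, y: 1.02, z: -0.08 },
};

// Resolve the position corresponding to the selected training part information.
function resolvePosition(trainingPart: string): ModelPosition | undefined {
  return presetMapping[trainingPart];
}
```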
Optionally, fig. 2 is a schematic diagram of a graphical user interface provided in an embodiment of the present invention, as shown in fig. 2, the graphical user interface further displays: a plurality of site selection boxes.
Optionally, fig. 3 is a flowchart of a medical information processing method according to an embodiment of the present invention, and as shown in fig. 3, the step of determining training part information for the object to be treated in response to the part selection operation input by the first medical user through the graphical user interface in S103 may include:
s201, responding to a first selection operation input by a first medical user aiming at a plurality of part selection frames, and displaying a part list in an area corresponding to a target selection frame selected by the first selection operation.
The graphical user interface may include a plurality of training types, each of which has at least one corresponding part selection box. As shown in fig. 2, both the type-a training and the type-b training have a plurality of corresponding part selection boxes.
In some embodiments, a first selection operation may be input for a first part selection box under the type-a training, and the first part selection box is then the target selection box; as shown in fig. 2, a part list is displayed in the area corresponding to the first part selection box.
S202, responding to a second selection operation aiming at a target part in a plurality of parts of the part list, and determining training part information.
Wherein the first medical user may input a second selection operation for a target site of the plurality of sites. If the part list comprises a multi-level part list, second selection operation can be carried out on a plurality of parts in each level of part list according to actual requirements.
For example, the part list may include multiple levels. The first-level part list may include: the head, the neck, the chest, the abdomen, and the lumbosacral region; the second-level part list corresponding to the head may include the face and the cranium; and the third-level part list corresponding to the cranium may include: the forehead, the vertex, the occiput, the temple, and the ear.
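For illustration only, such a multi-level part list can be represented as a tree that is walked level by level as the user selects; the entries below follow the example above, and the `PartNode` and `findPart` names are assumed rather than prescribed:

```typescript
// Hypothetical sketch of a multi-level part list as a tree.
interface PartNode {
  name: string;
  children?: PartNode[];
}

const partList: PartNode[] = [
  {
    name: 'head',
    children: [
      { name: 'face' },
      {
        name: 'cranium',
        children: [
          { name: 'forehead' },
          { name: 'vertex' },
          { name: 'occiput' },
          { name: 'temple' },
          { name: 'ear' },
        ],
      },
    ],
  },
  { name: 'neck' },
  { name: 'chest' },
  { name: 'abdomen' },
  { name: 'lumbosacral region' },
];

// Walking down the tree level by level mirrors the second selection
// operation performed on each level of the part list.
function findPart(nodes: PartNode[], path: string[]): PartNode | undefined {
  const [head, ...rest] = path;
  const node = nodes.find(n => n.name === head);
  return node && rest.length ? findPart(node.children ?? [], rest) : node;
}

// Example: three successive selections resolve the occiput.
const selected = findPart(partList, ['head', 'cranium', 'occiput']);
```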
It should be noted that the first selection operation and the second selection operation may be touch operations input by the first medical user through a graphical user interface, or operations input by the first medical user through an external device of the terminal device corresponding to the first medical user, for example, the external device may be a mouse and/or a keyboard, and of course, the first medical user may also adopt operations input in other manners, which is not specifically limited in this embodiment of the application.
In embodiments of the present application, the target part may include at least one of the following: a human body part, a muscle, a bone, an acupoint, a meridian, and the like. The doctor selects, according to the actual situation, the rehabilitation part, muscle, bone, acupoint, meridian or other target part corresponding to the training type.
Optionally, the graphical user interface further displays: at least one perspective control; as shown in fig. 2, the at least one perspective control comprises: a front view control, a side view control, and a back view control, so that the target three-dimensional human body model can be displayed from any angle in a user-defined manner.
The method may further comprise:
and responding to a second selection operation input aiming at least one visual angle control, and adjusting the display visual angle of the marked target three-dimensional human body model by adopting the visual angle corresponding to the target visual angle control selected by the second selection operation.
In the embodiment of the application, if the target visual angle control is a front visual angle control, the display visual angle of the marked target three-dimensional human body model is adjusted to be a front visual angle; if the target visual angle control is a side visual angle control, adjusting the display visual angle of the marked target three-dimensional human body model to be a side visual angle; and if the target visual angle control is a back visual angle control, adjusting the display visual angle of the marked target three-dimensional human body model to be a back visual angle.
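A minimal sketch of this view-angle switching is shown below; the `ModelViewer` interface, the yaw values and the control names are assumptions for illustration rather than an actual viewer API:

```typescript
// Hypothetical sketch of the view-angle controls: each control maps to a
// preset camera orientation applied to the marked target model.
type ViewAngle = 'front' | 'side' | 'back';

// Yaw angles (degrees) around the model's vertical axis for each control;
// the exact values are illustrative assumptions.
const viewYaw: Record<ViewAngle, number> = { front: 0, side: 90, back: 180 };

interface ModelViewer {
  setCameraYaw(degrees: number): void; // assumed viewer API, not a real library call
}

// Apply the view angle corresponding to the selected target view-angle control.
function onViewControlSelected(viewer: ModelViewer, target: ViewAngle): void {
  viewer.setCameraYaw(viewYaw[target]);
}
```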
Optionally, fig. 4 is a schematic flowchart of a medical information processing method according to an embodiment of the present invention, and as shown in fig. 4, before the process of determining the target three-dimensional human body model for the object to be treated from the plurality of preset three-dimensional human body models according to the attribute information of the object to be treated in S101, the method may further include:
s301, a plurality of initial three-dimensional human body models are created according to the attribute information of the sample objects.
Wherein the initial three-dimensional human body model is used for representing the shape of a human body.
In some embodiments, the attribute information of the plurality of sample objects may include ages and genders of the plurality of sample objects, and a plurality of initial three-dimensional human body models may be created according to the plurality of sample objects of different ages and genders to comprehensively serve the rehabilitation medical industry.
S302, projecting various types of human body information on a plurality of initial three-dimensional human body models to obtain a plurality of preset three-dimensional human body models.
In the embodiment of the present application, human body parts can be classified according to rehabilitation medicine knowledge and expert advice. By projecting multiple types of human body information onto the plurality of initial three-dimensional human body models and dividing the models into blocks in combination with professional knowledge of human anatomy, the plurality of preset three-dimensional human body models are obtained, which enables the visual function of locating the target part, that is, locating the affected part.
That is, the plurality of preset three-dimensional human body models are established with reference to professional knowledge of human anatomy and based on the attribute information of the plurality of sample objects, namely age group and gender. The models are divided according to the medical foundations and clinical experience of the rehabilitation department, which ensures professionalism and practicability in rehabilitation medicine.
It should be noted that the three-dimensional effects can be developed based on Unity (a real-time three-dimensional interactive content creation and operation platform), and a realistic simulation of the human body can be restored quickly and efficiently through real-time rendering technology.
Optionally, the plurality of types of human body information includes at least two of the following items: human body part information, human body muscle information, human body skeleton information, human body acupuncture point information and human body channel information.
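To illustrate S301/S302, the sketch below represents an initial model that only carries the body shape and a preset model obtained by projecting information layers onto it; all type names, field names and example values are hypothetical and used for illustration only:

```typescript
// Hypothetical sketch of S301/S302: an initial model only carries the body
// shape; projecting one or more types of human-body information onto it
// yields a preset model whose blocks can later be looked up and marked.
type BodyInfoType = 'part' | 'muscle' | 'skeleton' | 'acupoint' | 'meridian';

interface InitialModel {
  id: string;
  gender: 'male' | 'female';
  ageGroup: string;          // e.g. 'child', 'adult', 'elderly' (illustrative)
}

interface ProjectedModel extends InitialModel {
  // Each projected information type contributes a set of named blocks.
  layers: Partial<Record<BodyInfoType, string[]>>;
}

function projectInfo(
  base: InitialModel,
  layers: Partial<Record<BodyInfoType, string[]>>,
): ProjectedModel {
  return { ...base, layers };
}

// Example: an adult male preset model carrying part and meridian layers.
const presetAdultMale = projectInfo(
  { id: 'male_adult_01', gender: 'male', ageGroup: 'adult' },
  { part: ['head', 'neck', 'chest'], meridian: ['lung meridian'] },
);
```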
Optionally, as shown in fig. 2, the graphical user interface further displays: and a plurality of treatment information selection areas corresponding to the plurality of part selection frames.
As shown in fig. 2, each therapy information selection area may include a plurality of different types of therapy information selection items, which may include, for example: days, frequency, dosage unit, duration, start time.
Of course, the graphical user interface may further include a department selection box corresponding to the plurality of part selection boxes, and the department selection box may be used to select a department for performing rehabilitation therapy.
The method may further comprise:
and editing the rehabilitation therapy information of the object to be treated in the target therapy information selection area in response to the editing operation input for the target therapy information selection area in the plurality of therapy information selection areas.
In some embodiments, the target treatment information selection area includes a plurality of different types of treatment information selection items, and different types of rehabilitation treatment information may be edited in response to editing operations on the different selection items, for example rehabilitation treatment information such as the number of days, frequency, dosage, dosage unit, duration, and start time.
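As a hedged illustration of the treatment information selection items listed above, the record type and edit helper below are assumptions of this description; the field names and types are not prescribed by the embodiment:

```typescript
// Hypothetical sketch of one treatment-information selection area; the field
// names mirror the selection items listed above (days, frequency, dosage,
// dosage unit, duration, start time).
interface TreatmentInfo {
  days: number;
  frequency: string;        // e.g. 'once per day'
  dosage: number;
  unit: string;
  durationMinutes: number;
  startTime: string;        // ISO date string
}

// Editing a single selection item updates the corresponding field of the
// rehabilitation treatment information for the object to be treated.
function editTreatmentInfo<K extends keyof TreatmentInfo>(
  info: TreatmentInfo,
  field: K,
  value: TreatmentInfo[K],
): TreatmentInfo {
  const updated = { ...info };
  updated[field] = value;
  return updated;
}
```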
Optionally, fig. 5 is a schematic flow diagram of a medical information processing method provided in an embodiment of the present invention, and as shown in fig. 5, the method may further include:
s401, responding to confirmation operation input by the first medical user through the graphical user interface, and generating a rehabilitation treatment application form aiming at the object to be treated according to the attribute information of the object to be treated, the marked target three-dimensional human body model and rehabilitation treatment information.
In some embodiments, a confirmation control may be included in the graphical user interface, and in response to a selection operation input for the confirmation control, a rehabilitation application form for the subject to be treated is generated according to the attribute information of the subject to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information.
By inputting the selection operation on the confirmation control, the first medical user can issue the rehabilitation treatment application form with one click.
S402, sending a rehabilitation application form to the terminal device corresponding to the second medical user, wherein the rehabilitation application form is used for instructing the second medical user to execute rehabilitation training service on the object to be treated.
The terminal device corresponding to the second medical user may be a terminal device corresponding to a therapist and/or a terminal device corresponding to a caregiver.
In this embodiment of the application, the terminal device corresponding to the first medical user may send the rehabilitation treatment application form to the terminal device corresponding to the second medical user, and the second medical user learns, from the rehabilitation treatment application form, the complete treatment items and the parts involved for the object to be treated, so as to perform the rehabilitation treatment on the object to be treated.
In some embodiments, the terminal device corresponding to the second medical user prints the rehabilitation application form in response to a printing operation input by the second medical user; the printed rehabilitation application form shows the training parts from all directions, such as the front, the side and the back of the object to be treated, so that the second medical user can explain them to the object to be treated.
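The following sketch shows one way the application form of S401/S402 could be assembled and sent; the form fields, the endpoint URL and the use of a JSON HTTP request are assumptions for illustration only, not the disclosed protocol:

```typescript
// Hypothetical sketch of S401/S402: assembling a rehabilitation treatment
// application form and sending it to the second medical user's terminal.
interface RehabApplicationForm {
  patient: { name: string; age: number; gender: 'male' | 'female' };
  markedModelId: string;            // reference to the marked target model
  trainingParts: string[];
  treatment: { days: number; frequency: string; startTime: string };
}

async function submitApplicationForm(form: RehabApplicationForm): Promise<void> {
  // Send the form to the backend, which forwards it to the terminal
  // device corresponding to the second medical user (therapist/nurse).
  await fetch('/api/rehab/application-forms', {   // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(form),
  });
}
```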
Optionally, fig. 6 is a schematic diagram of a graphical user interface provided in an embodiment of the present invention. As shown in fig. 6, the graphical user interface corresponding to the second medical user may display multiple training types, the training parts corresponding to each training type, and the rehabilitation information, and may also display the target three-dimensional human body model. In response to a selection operation input for different training parts, the mark information is displayed at the position corresponding to the selected training part in the target three-dimensional human body model.
As shown in fig. 6, a front view angle control, a side view angle control, and a back view angle control may also be displayed in the graphical user interface corresponding to the second medical user, so that the second medical user can adjust the view angle of the target three-dimensional human body model, and thus the target three-dimensional human body model can be displayed in an all-around and user-defined manner.
Optionally, the method further includes:
and sending the attribute information of the object to be treated, the marked target three-dimensional human body model and rehabilitation treatment information to the terminal equipment corresponding to the object to be treated so that the terminal equipment corresponding to the object to be treated shows the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information.
The object to be treated can conveniently acquire the training part and the rehabilitation information when the object to be treated performs rehabilitation therapy, and the experience of the object to be treated is improved.
It should be noted that a WebGL (Web Graphics Library, a three-dimensional drawing protocol) platform may be deployed on the terminal device corresponding to the first medical user to support real-time illumination and dynamic rendering, so as to restore a realistic simulation effect. In the embodiment of the present application, while the first medical user operates the terminal device, the technical stack is centered on the front end: the front end sends instruction call information and keeps its data synchronized with the backend.
In the embodiment of the present application, based on real-time three-dimensional interaction, full-view viewing of the target three-dimensional human body model and marking of the training part are realized by sliding the left mouse button up, down, left and right, by auxiliary interface buttons, and the like. Based on the WebGL form, front-end development and three-dimensional development are integrated through the Unity instance interface and JSTOUnitylibrary identification, which solves the data-compatibility problem across platforms and ensures an efficient and stable operating environment. A backend built on Spring Boot (a framework), MyBatis (a framework) and SQL Server (a database) realizes front-end/back-end data interaction through requests, and the mapping is realized by a one-to-one correspondence between dictionary data and the three-dimensional structure.
In addition, the JavaScript (a lightweight, interpreted or just-in-time-compiled programming language with first-class functions) framework of the user interface can be built based on Vue (a front-end framework). The user interface is constructed with standard HTML (a markup language), CSS (a style sheet language) and JavaScript, provides a set of declarative, component-based programming models, enables efficient development of the user interface, and exchanges data with both the three-dimensional side and the backend.
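For context, the sketch below shows how a Vue/TypeScript front end typically drives an embedded Unity WebGL scene through the standard `SendMessage` bridge; the GameObject name `HumanModel` and the method names `MarkPart` and `SetViewAngle` are assumptions and would have to match whatever the Unity project actually exposes:

```typescript
// Minimal sketch of front-end -> Unity WebGL interaction. SendMessage is the
// standard Unity WebGL bridge call on the instance created by
// createUnityInstance; the object and method names here are illustrative.
interface UnityInstance {
  SendMessage(gameObject: string, method: string, parameter?: string | number): void;
}

declare const unityInstance: UnityInstance; // created via createUnityInstance(...)

// Ask the 3D scene to display marking information at the selected part.
function markTrainingPart(partName: string): void {
  unityInstance.SendMessage('HumanModel', 'MarkPart', partName);
}

// Switch the display view angle of the marked model.
function setViewAngle(angle: 'front' | 'side' | 'back'): void {
  unityInstance.SendMessage('HumanModel', 'SetViewAngle', angle);
}
```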
Optionally, fig. 7 is a schematic diagram of a software framework on which the medical information processing method provided in the embodiment of the present invention is based. As shown in fig. 7, the rehabilitation product may include: the rehabilitation application form, rehabilitation treatment, rehabilitation scientific research, rehabilitation diagnosis, rehabilitation recommendation, rehabilitation doctors, and rehabilitation nursing. The UI (User Interface) presentation may be based on: Vue + Element UI (a framework), uni-app (a cross-platform framework), Android, and the three-dimensional human body model, behind a unified gateway. The service layer includes: application forms, treatment records, permissions, statistical reports, system logs, and ELK-based log recording and monitoring. Intelligent services include: a unified knowledge management service, a unified engine service, and medical NLP (Natural Language Processing). The database layer includes: SQL Server (a database management system), Redis (Remote Dictionary Server), and Elasticsearch (a distributed search and analysis engine). The operating environment includes: a local host, an independent server, and a third-party virtual host.
The following describes a medical information processing apparatus, a terminal device, a storage medium, and the like for executing the medical information processing method provided in the present application, and specific implementation procedures and technical effects thereof refer to relevant contents of the medical information processing method, which will not be described in detail below.
Fig. 8 is a schematic structural diagram of a medical information processing apparatus according to an embodiment of the present invention, and as shown in fig. 8, the apparatus may include:
a first determining module 801, configured to determine, according to attribute information of an object to be treated, a target three-dimensional human body model for the object to be treated from a plurality of preset three-dimensional human body models;
a first display module 802 for displaying the target three-dimensional human body model in a graphical user interface;
a second determining module 803, configured to determine, in response to a site selection operation input by the first medical user through the graphical user interface, training site information for the object to be treated;
a second display module 804, configured to display label information at a position corresponding to the training part information in the target three-dimensional human body model, to obtain a labeled target three-dimensional human body model, so as to indicate the training part information.
Optionally, the apparatus further comprises:
the third determining module is used for determining the position corresponding to the training part information in the target three-dimensional human body model according to a preset mapping relation and the training part information;
the preset mapping relationship is a mapping relationship between a plurality of preset human body part information and positions in the target three-dimensional human body model.
Optionally, the graphical user interface further displays: a plurality of site selection frames; the second determining module 803 is specifically configured to, in response to a first selection operation input by the first medical user for the plurality of site selection boxes, display a site list in an area corresponding to a target selection box selected by the first selection operation; and determine the training site information in response to a second selection operation for a target site among the plurality of sites of the site list.
Optionally, the graphical user interface further displays: at least one perspective control; the device further comprises:
and the adjusting module is used for responding to a second selection operation input aiming at the at least one visual angle control, and adjusting the display visual angle of the marked target three-dimensional human body model by adopting the visual angle corresponding to the target visual angle control selected by the second selection operation.
Optionally, the apparatus further comprises:
the system comprises a creating module, a generating module and a processing module, wherein the creating module is used for creating a plurality of initial three-dimensional human body models according to the attribute information of a plurality of sample objects, and the initial three-dimensional human body models are used for representing the shape of a human body;
and the projection module is used for projecting various types of human body information on the plurality of initial three-dimensional human body models to obtain a plurality of preset three-dimensional human body models.
Optionally, the multiple types of human body information include at least two of the following items: human body part information, human body muscle information, human body skeleton information, human body acupuncture point information and human body channel information.
Optionally, the graphical user interface further displays: a plurality of treatment information selection areas corresponding to the plurality of part selection frames, respectively; the device further comprises:
and the editing module is used for responding to editing operation input aiming at a target treatment information selection area in the plurality of treatment information selection areas and editing the rehabilitation treatment information aiming at the object to be treated in the target treatment information selection area.
Optionally, the apparatus further comprises:
the generation module is used for responding to confirmation operation input by the first medical user through the graphical user interface and generating a rehabilitation treatment application form for the object to be treated according to the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information;
and the first sending module is used for sending the rehabilitation application form to the terminal equipment corresponding to the second medical user, and the rehabilitation application form is used for indicating the second medical user to execute rehabilitation training service on the object to be treated.
Optionally, the apparatus further comprises:
and the second sending module is used for sending the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information to the terminal equipment corresponding to the object to be treated so as to enable the terminal equipment corresponding to the object to be treated to display the attribute information of the object to be treated, the marked target three-dimensional human body model and the rehabilitation treatment information.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present invention, and as shown in fig. 9, the terminal device may include: a processor 901, a memory 902.
Wherein, the memory 902 is used for storing programs, and the processor 901 calls the programs stored in the memory 902 to execute the above method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the invention also provides a program product, for example a computer-readable storage medium, comprising a program which, when being executed by a processor, is adapted to carry out the above-mentioned method embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other media capable of storing program codes.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A medical information processing method characterized by comprising:
determining a target three-dimensional human body model aiming at an object to be treated from a plurality of preset three-dimensional human body models according to the attribute information of the object to be treated;
displaying the target three-dimensional human body model in a graphical user interface;
in response to a part selection operation input by a first medical user through the graphical user interface, determining training part information for the object to be treated;
and displaying marking information at a position corresponding to the training part information in the target three-dimensional human body model to obtain a marked target three-dimensional human body model, so as to indicate the training part information.
2. The method of claim 1, wherein before the displaying marking information at the position corresponding to the training part information in the target three-dimensional human body model to obtain the marked target three-dimensional human body model so as to indicate the training part information, the method further comprises:
determining the position corresponding to the training part information in the target three-dimensional human body model according to a preset mapping relationship and the training part information;
wherein the preset mapping relationship is a mapping relationship between a plurality of pieces of preset human body part information and positions in the target three-dimensional human body model.
3. The method of claim 1, wherein a plurality of part selection boxes are further displayed in the graphical user interface, and the determining training part information for the object to be treated in response to the part selection operation input by the first medical user through the graphical user interface comprises:
in response to a first selection operation input by the first medical user for the plurality of part selection boxes, displaying a part list in an area corresponding to a target selection box selected by the first selection operation;
and determining the training part information in response to a second selection operation for a target part among a plurality of parts in the part list.
4. The method of claim 1, wherein at least one viewing angle control is further displayed in the graphical user interface, and the method further comprises:
in response to a second selection operation input for the at least one viewing angle control, adjusting the display viewing angle of the marked target three-dimensional human body model to the viewing angle corresponding to a target viewing angle control selected by the second selection operation.
5. The method according to claim 1, wherein before the determining a target three-dimensional human body model for the object to be treated from a plurality of preset three-dimensional human body models according to the attribute information of the object to be treated, the method further comprises:
creating a plurality of initial three-dimensional human body models according to attribute information of sample objects, wherein the initial three-dimensional human body models are used for representing the shape of the human body;
and projecting multiple types of human body information onto the plurality of initial three-dimensional human body models to obtain the plurality of preset three-dimensional human body models.
6. The method of claim 5, wherein the multiple types of human body information include at least two of: human body part information, human body muscle information, human body skeleton information, human body acupuncture point information, and human body channel information.
7. The method of claim 3, wherein a plurality of treatment information selection areas respectively corresponding to the plurality of part selection boxes are further displayed in the graphical user interface, and the method further comprises:
in response to an editing operation input for a target treatment information selection area among the plurality of treatment information selection areas, editing rehabilitation treatment information for the object to be treated in the target treatment information selection area.
8. The method of claim 7, further comprising:
in response to a confirmation operation input by the first medical user through the graphical user interface, generating a rehabilitation treatment application form for the object to be treated according to the attribute information of the object to be treated, the marked target three-dimensional human body model, and the rehabilitation treatment information;
and sending the rehabilitation treatment application form to a terminal device corresponding to a second medical user, wherein the rehabilitation treatment application form is used for instructing the second medical user to perform a rehabilitation training service on the object to be treated.
9. The method of claim 8, further comprising:
and sending the attribute information of the object to be treated, the marked target three-dimensional human body model, and the rehabilitation treatment information to a terminal device corresponding to the object to be treated, so that the terminal device corresponding to the object to be treated displays the attribute information of the object to be treated, the marked target three-dimensional human body model, and the rehabilitation treatment information.
10. A terminal device, characterized by comprising: a memory and a processor, wherein the memory stores a computer program executable by the processor, and the processor, when executing the computer program, implements the medical information processing method according to any one of claims 1 to 9.
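For orientation only, the following minimal Python sketch illustrates one way the selection and marking flow of claims 1 and 2 could be realized: a preset model is chosen from a small library by attribute information, and a training part is turned into a mark via a preset mapping relationship. The attribute keys, part names, and coordinate values are hypothetical placeholders and are not taken from the specification.

```python
# Minimal sketch of the model-selection and marking flow of claims 1-2.
# All attribute keys, part names, and coordinates are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class HumanModel:
    name: str                                    # e.g. "adult_male"
    part_positions: dict[str, tuple]             # preset mapping: part name -> model position
    marks: list = field(default_factory=list)    # marking information attached to the model

# A small library of preset three-dimensional human body models (placeholder data).
PRESET_MODELS = {
    ("male", "adult"): HumanModel("adult_male", {"left_knee": (0.21, 0.48, 0.05)}),
    ("female", "adult"): HumanModel("adult_female", {"left_knee": (0.19, 0.46, 0.05)}),
}

def select_target_model(attributes: dict) -> HumanModel:
    """Pick the preset model that matches the patient's attribute information."""
    key = (attributes["sex"], attributes["age_group"])
    return PRESET_MODELS[key]

def mark_training_part(model: HumanModel, part: str) -> HumanModel:
    """Look up the part in the preset mapping relationship and attach a mark there."""
    position = model.part_positions[part]        # claim 2: part information -> model position
    model.marks.append({"part": part, "position": position})
    return model                                 # the "marked" target model

# Example: a clinician selects the left knee for an adult male patient.
model = select_target_model({"sex": "male", "age_group": "adult"})
marked = mark_training_part(model, "left_knee")
print(marked.marks)
```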
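Claims 5 and 6 build the preset model library by projecting several types of human body information (parts, muscles, bones, acupuncture points, channels) onto initial shape models. A possible data-structure sketch, again with invented layer names and values, is shown below.

```python
# Sketch of claims 5-6: a preset model as an initial shape plus projected info layers.
# Layer names and contents are illustrative only.
def build_preset_models(sample_attributes: list[dict], info_layers: dict) -> list[dict]:
    presets = []
    for attrs in sample_attributes:
        model = {"shape": attrs, "layers": {}}            # initial model: body shape only
        for layer_name, layer_data in info_layers.items():
            model["layers"][layer_name] = layer_data      # "project" each info type onto it
        presets.append(model)
    return presets

presets = build_preset_models(
    [{"sex": "male", "height_cm": 175}, {"sex": "female", "height_cm": 162}],
    {
        "parts": ["left_knee", "right_shoulder"],
        "muscles": ["quadriceps"],
        "bones": ["femur"],
        "acupoints": ["ST36"],
        "channels": ["stomach channel"],
    },
)
print(len(presets), list(presets[0]["layers"]))
```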
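Claims 8 and 9 bundle the attribute information, the marked model, and the rehabilitation treatment information into an application form that is sent to other terminals. The sketch below only stands in for that step; the form fields and the send_to_terminal transport are assumptions rather than anything specified by the claims.

```python
# Sketch of claims 8-9: assembling and dispatching the rehabilitation treatment application form.
# The form layout and the transport function are hypothetical.
import json

def build_application_form(attributes: dict, marked_model: dict, treatment_info: str) -> str:
    form = {
        "patient": attributes,
        "marked_model": marked_model,            # e.g. model name plus its marks
        "rehabilitation_treatment": treatment_info,
    }
    return json.dumps(form, ensure_ascii=False)

def send_to_terminal(terminal_id: str, payload: str) -> None:
    # Stand-in for whatever messaging the hospital system actually uses.
    print(f"-> terminal {terminal_id}: {payload}")

form = build_application_form(
    {"name": "patient_001", "sex": "male"},
    {"model": "adult_male", "marks": [{"part": "left_knee"}]},
    "knee flexion training, 20 minutes daily",
)
send_to_terminal("therapist_terminal", form)     # claim 8: to the second medical user
send_to_terminal("patient_terminal", form)       # claim 9: to the object to be treated
```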
CN202310086832.9A 2023-02-01 2023-02-01 Medical information processing method and terminal equipment Pending CN115981511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310086832.9A CN115981511A (en) 2023-02-01 2023-02-01 Medical information processing method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310086832.9A CN115981511A (en) 2023-02-01 2023-02-01 Medical information processing method and terminal equipment

Publications (1)

Publication Number Publication Date
CN115981511A true CN115981511A (en) 2023-04-18

Family

ID=85963097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310086832.9A Pending CN115981511A (en) 2023-02-01 2023-02-01 Medical information processing method and terminal equipment

Country Status (1)

Country Link
CN (1) CN115981511A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116910828A (en) * 2023-09-13 2023-10-20 合肥工业大学 Intelligent medical picture information security processing method and system
CN116910828B (en) * 2023-09-13 2023-12-19 合肥工业大学 Intelligent medical picture information security processing method and system

Similar Documents

Publication Publication Date Title
US20060173858A1 (en) Graphical medical data acquisition system
US9489362B2 (en) Method for generating a graphical questionnaire, and method for filling in the graphical questionnaire
US20130093829A1 (en) Instruct-or
JP2014509030A (en) Generating reports based on image data
JP6113267B2 (en) Method and apparatus for developing medical training scenarios
TWI501189B (en) An Avatar-Based Charting Method And System For Assisted Diagnosis
CN103705306A (en) Operation support system
US20140303670A1 (en) Method and Device for Spinal Analysis
Ullah et al. Exploring the potential of metaverse technology in healthcare: applications, challenges, and future directions
CN115981511A (en) Medical information processing method and terminal equipment
CN111723277A (en) Image structured report cloud requirement classification system and method thereof
CN104376770A (en) Three-dimensional visualization operation simulation method and system
JP6471409B2 (en) Display control program, display control method, and display control apparatus
Nyumbeka et al. Using mobile computing to support malnutrition management in South Africa
JP2012003465A (en) Schema drawing device, schema drawing system and schema drawing program
Aggarwal et al. Automation in healthcare: a forecast and outcome–medical IoT and big data in healthcare
Preim HCI in medical visualization
Weng et al. Electronic medical record system based on augmented reality
CN113496770A (en) Information integration device
Gabor et al. Mobile application for medical diagnosis
TWM574324U (en) System for marking, identifying and managing medical devices
US20210390262A1 (en) Standardized data input from language using universal significance codes
Dong et al. User centered design for medical visualization
JPH10211173A (en) Diagnostic report preparing device
Hachach-Haram et al. Emerging Technologies: Data and the Future of Surgery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination