WO2023226636A1 - Controller, implantable neurostimulation system and computer-readable storage medium - Google Patents

Controller, implantable neurostimulation system and computer-readable storage medium

Info

Publication number
WO2023226636A1
WO2023226636A1 (PCT/CN2023/089492)
Authority
WO
WIPO (PCT)
Prior art keywords
image
character
controller
program
training
Prior art date
Application number
PCT/CN2023/089492
Other languages
English (en)
French (fr)
Inventor
周国新
马艳
王倩
Original Assignee
苏州景昱医疗器械有限公司
Application filed by 苏州景昱医疗器械有限公司
Publication of WO2023226636A1


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N - ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00 - Electrotherapy; Circuits therefor
    • A61N 1/18 - Applying electric currents by contact electrodes
    • A61N 1/32 - Applying electric currents by contact electrodes; alternating or intermittent currents
    • A61N 1/36 - Applying electric currents by contact electrodes; alternating or intermittent currents for stimulation
    • A61N 1/3605 - Implantable neurostimulators for stimulating central or peripheral nerve system
    • A61N 1/36128 - Control systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 - ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture

Definitions

  • This application relates to the technical fields of implantable devices, remote programming, Internet of Things and deep learning, such as controllers, implantable neurostimulation systems and computer-readable storage media.
  • Implantable devices are medical devices that are wholly or partially inserted into the human body or a natural orifice by surgery, or that replace an epithelial or ocular surface, and that either remain in the body for 30 days or more after the surgical procedure is completed or are absorbed by the body.
  • Implantable devices include implantable medical systems comprising a program-controlled device and an implanted device. Such systems can provide patients with precision treatment under controllable parameters and are well received in the market.
  • Patent CN113362946A discloses a video processing device, electronic equipment and computer-readable storage media.
  • the video processing device is applied to electronic equipment.
  • the electronic equipment exchanges data with a display screen, a camera and a doctor's device, respectively.
  • the electronic equipment is configured to process videos of patients suffering from Parkinson's disease; the device includes: a prompt display module configured to use the display screen to display prompt information for at least one designated action; a video collection module configured to use the camera to collect a video of the patient; and an action prediction module configured to input the patient's video into a Parkinson's detection model, predict the patient's action information and send the action information to the doctor's device,
  • where the action information at least includes the amplitude and/or frequency of one of the designated actions;
  • and an information acquisition module configured to acquire the patient's disease information, and a suggestion strategy module configured to obtain the patient's recommended programming strategy based on the disease information and action information and send the recommended strategy to the doctor's device.
  • This device
  • This application provides a controller, an implantable neurostimulation system and a computer-readable storage medium to automatically diagnose the patient's body parts and optimize the visual presentation of the program-controlled device.
  • The present application provides a controller configured to implement a remote program control function between a program-controlled device and a stimulator, where the stimulator is placed in the patient's body. The controller is configured to:
  • Part images of one or more parts are intercepted from each frame of the whole body image
  • the display image is displayed using the program-controlled device.
  • the acquisition process of the one or more parts includes:
  • the program-controlled device is used to display multiple action types, and the multiple action types include pointing fingers, making fists, raising hands, raising arms, walking in a straight line, and walking in a preset curve;
  • the program-controlled device is used to receive a selection operation for one of the action types, and in response to the selection operation, one or more parts corresponding to the selected action type are obtained.
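As an illustrative sketch (not part of the claimed method), the step of resolving a selected action type to its corresponding parts can be a simple lookup; all identifiers and the mapping itself are hypothetical assumptions:

```python
# Hypothetical mapping from the action types listed above to the body parts
# they involve; names and associations are illustrative, not from the patent.
ACTION_TYPE_PARTS = {
    "point_fingers": ["left_hand", "right_hand"],
    "make_fist": ["left_hand", "right_hand"],
    "raise_hand": ["left_hand", "right_hand"],
    "raise_arm": ["left_arm", "right_arm"],
    "walk_straight_line": ["left_leg", "right_leg", "trunk"],
    "walk_preset_curve": ["left_leg", "right_leg", "trunk"],
}

def parts_for_selection(action_type: str) -> list:
    """Return the parts corresponding to the action type selected on the
    program-controlled device (in response to the selection operation)."""
    try:
        return ACTION_TYPE_PARTS[action_type]
    except KeyError:
        raise ValueError(f"unknown action type: {action_type}") from None
```

A selection of "make_fist" would then yield the two hand parts whose part images are intercepted downstream.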
  • the corresponding area of each part in the person image includes a first corresponding area and a second corresponding area
  • the controller is configured to synthesize the display image in the following manner:
  • the diagnosis result of each part is superimposed on the first corresponding area of that part in the person image, and the icon corresponding to the diagnosis result of each part is superimposed on the second corresponding area of that part in the person image, to synthesize the display image; the icon corresponding to the diagnosis result of each part graphically indicates the severity of that diagnosis result.
  • the first corresponding area of each part in the person image is located around the corresponding part of the person, and the second corresponding area of each part is located in the area where the corresponding part of the person is located.
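A minimal sketch of the synthesis described above, with the textual diagnosis result placed in the first corresponding area (around the part) and a severity icon in the second corresponding area (on the part). The dict-based regions, the three-level severity scale and the icon names are all illustrative assumptions:

```python
# Hypothetical three-level severity scale mapped to icon names.
SEVERITY_ICONS = {0: "ok", 1: "mild", 2: "severe"}

def compose_display_image(person_image: dict, diagnoses: dict) -> dict:
    """Overlay per-part diagnosis text and severity icons on a person image.

    `person_image` maps each part to (first_area, second_area) region tuples;
    `diagnoses` maps each part to (diagnosis_text, severity_level).
    """
    overlays = []
    for part, (text, severity) in diagnoses.items():
        first_area, second_area = person_image[part]
        # Diagnosis text goes around the part (first corresponding area).
        overlays.append({"area": first_area, "kind": "text", "value": text})
        # Severity icon goes on the part itself (second corresponding area).
        overlays.append({"area": second_area, "kind": "icon",
                         "value": SEVERITY_ICONS[severity]})
    return {"base": person_image, "overlays": overlays}
```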
  • the controller is further configured to:
  • the new display image is displayed using the program-controlled device.
  • the controller is configured to obtain the rotated image in the following manner:
  • the rotated image is obtained based on the rotation direction and the rotation angle.
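The rotation step above can be sketched as mapping a horizontal sliding operation to a direction and an angle; the degrees-per-pixel sensitivity constant is an assumed value, not specified by the application:

```python
# Assumed touch sensitivity: how many degrees of rotation per pixel of slide.
DEGREES_PER_PIXEL = 0.5

def rotation_from_slide(start_x: float, end_x: float):
    """Map a horizontal slide on the programmer's screen to a
    (rotation_direction, rotation_angle_in_degrees) pair."""
    delta = end_x - start_x
    direction = "clockwise" if delta >= 0 else "counterclockwise"
    angle = (abs(delta) * DEGREES_PER_PIXEL) % 360.0
    return direction, angle
```

The rotated character image would then be rendered from this direction/angle pair (e.g., by the three-dimensional reconstruction model described below).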
  • the controller is configured to obtain the rotated image corresponding to the character image in the following manner:
  • the training process of the three-dimensional reconstruction model includes:
  • each training data in the first training set includes a 2D image used for training and annotation data of its corresponding 3D image;
  • the character image can be switched between a character color image, a character skeleton image, a character meridians image, and a character cartoon image.
  • This application also provides a control method for realizing the remote program control function between the program-controlled device and the stimulator, where the stimulator is installed in the patient's body.
  • the control method includes:
  • the whole-body video data includes multiple frames of whole-body images
  • Part images of one or more parts are intercepted from each frame of the whole body image
  • the display image is displayed using the program-controlled device.
  • the acquisition process of the one or more parts includes:
  • the program-controlled device is used to display multiple action types, and the multiple action types include pointing fingers, making fists, raising hands, raising arms, walking in a straight line, and walking in a preset curve;
  • the program-controlled device is used to receive a selection operation for one of the action types, and in response to the selection operation, one or more parts corresponding to the selected action type are obtained.
  • the corresponding area of each part in the person image includes a first corresponding area and a second corresponding area, and superimposing the diagnosis result of each part on the corresponding area of that part in the person image to synthesize the display image includes:
  • the diagnosis result of each part is superimposed on the first corresponding area of that part in the person image, and the icon corresponding to the diagnosis result of each part is superimposed on the second corresponding area of that part in the person image, to synthesize the display image; the icon corresponding to the diagnosis result of each part graphically indicates the severity of that diagnosis result.
  • the first corresponding area of each part in the person image is located around the corresponding part of the person, and the second corresponding area of each part is located in the area where the corresponding part of the person is located.
  • the method further includes:
  • the new display image is displayed using the program-controlled device.
  • obtaining a rotated image corresponding to the character image in response to the sliding operation includes:
  • the rotated image is obtained based on the rotation direction and the rotation angle.
  • obtaining the rotated image based on the rotation direction and the rotation angle includes:
  • the training process of the three-dimensional reconstruction model includes:
  • each training data in the first training set includes a 2D image used for training and annotation data of its corresponding 3D image;
  • the character image can be switched between a character color image, a character skeleton image, a character meridians image, and a character cartoon image.
  • This application also provides an implantable neurostimulation system, which includes:
  • the programmable device is arranged outside the patient's body, and the programmable device is configured to provide interactive functions and display functions;
  • a stimulator the stimulator is disposed in the patient's body, the stimulator is configured to release electrical stimulation energy to the patient's body tissue;
  • This application also provides a computer-readable storage medium, which stores a computer program.
  • When the computer program is executed by a processor, it implements the functions of any of the above controllers or implements any of the above control methods.
  • Figure 1 shows a structural block diagram of an implantable neurostimulation system provided by this application.
  • Figure 2 shows a schematic flow chart of a control method provided by this application.
  • Figure 3 shows a schematic diagram of a display image provided by this application.
  • Figure 4 shows a schematic flowchart of a part acquisition process provided by this application.
  • FIG. 5 shows a schematic flowchart of another control method provided by this application.
  • Figure 6 shows a schematic flowchart of obtaining a rotated image provided by this application.
  • Figure 7 shows another schematic flowchart of obtaining a rotated image provided by this application.
  • Figure 8 shows a structural block diagram of a controller provided by this application.
  • Figure 9 shows a schematic structural diagram of a program product for implementing a control method provided by this application.
  • "At least one" means one or more, and "plurality" means two or more.
  • “And/or” describes the association of associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the related objects are in an “or” relationship.
  • “At least one of the following” or similar expressions thereof refers to any combination of these items, including any combination of a single item (items) or a plurality of items (items).
  • "At least one of a, b or c" can mean: a; b; c; a and b; a and c; b and c; or a, b and c, where each of a, b and c can be singular or plural. It is worth noting that "at least one item" can also be interpreted as "one or more items".
  • Implantable neurostimulation systems mainly include stimulators implanted in the body and program-controlled equipment outside the body.
  • Neuromodulation technology implants electrodes at specific structures (i.e., targets) in the body through stereotactic surgery; the stimulator implanted in the patient's body sends electrical pulses to the target through the electrodes to regulate the electrical activity and function of the corresponding neural structures and networks, thereby improving symptoms and relieving pain.
  • the stimulator can be any one of an implantable neural electrical stimulation device, an implantable cardiac electrical stimulation system (also known as a pacemaker), an implantable drug delivery system (IDDS) and a lead adapter device.
  • Implantable neural electrical stimulation devices include, for example, deep brain stimulation (DBS) systems, implantable cortical nerve stimulation (CNS) systems, implantable spinal cord stimulation (SCS) systems, implantable sacral nerve stimulation (SNS) systems and implantable vagus nerve stimulation (VNS) systems.
  • the stimulator can include an Implantable Pulse Generator (IPG), extension wires, and electrode wires.
  • The IPG is placed in the patient's body and, relying on a sealed battery and circuitry, provides one or two channels of controllable, specific electrical stimulation energy to specific areas of body tissue through the implanted extension leads and electrode leads.
  • the extension lead is used in conjunction with the IPG as a transmission medium for electrical stimulation signals to transmit the electrical stimulation signals generated by the IPG to the electrode leads.
  • the electrode leads release the electrical stimulation energy generated by the IPG to specific areas of body tissue through multiple electrode contacts. The implantable medical device may have one or more electrode leads on one or both sides; multiple electrode contacts are provided on each electrode lead, and the contacts may be arranged uniformly or non-uniformly in the circumferential direction of the lead. As an example, the electrode contacts are arranged in an array of 4 rows and 3 columns (12 electrode contacts in total) in the circumferential direction of the electrode lead.
  • the electrode contacts may include stimulation electrode contacts and/or collection electrode contacts.
  • the electrode contacts may be in the shape of, for example, a sheet, a ring, a dot, or the like.
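The example contact arrangement above (4 rows and 3 columns around the circumference of the lead) can be enumerated as a small sketch; the identifier format is an assumption for illustration:

```python
def contact_layout(rows: int = 4, cols: int = 3) -> list:
    """Enumerate contact identifiers row by row around the electrode lead,
    e.g. 4 rows x 3 columns = 12 contacts in total."""
    return [f"E{r * cols + c + 1:02d}" for r in range(rows) for c in range(cols)]
```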
  • the stimulated body tissue may be the patient's brain tissue, and the stimulated site may be a specific part of the brain tissue.
  • For different disease types, the stimulated parts are generally different, and the number of stimulation contacts used (single-source or multi-source), the number of channels of the specific electrical stimulation signal (single-channel or multi-channel) and the stimulation parameter data are also different. This application does not limit the applicable disease types, which can be those applicable to deep brain stimulation (DBS), spinal cord stimulation (SCS), pelvic stimulation, gastric stimulation, peripheral nerve stimulation and functional electrical stimulation.
  • DBS is applicable to psychiatric disorders, e.g., major depressive disorder (MDD), bipolar disorder, anxiety disorder, post-traumatic stress disorder, mild depression, obsessive-compulsive disorder (OCD), behavior disorders, mood disorders, memory disorders, mental status disorders, mobility disorders (e.g., essential tremor or Parkinson's disease), Huntington's disease, Alzheimer's disease, drug addiction, autism, and other neurological or psychiatric illnesses and impairments.
  • DBS can help drug addicts detoxify and improve their happiness and quality of life.
  • When the program-controlled device and the stimulator establish a program-controlled connection, the program-controlled device can be used to adjust the stimulation parameters of the stimulator's electrical stimulation signal; the stimulator can also sense bioelectrical activity deep in the patient's brain, and the stimulation parameters of the electrical stimulation signal can be further adjusted based on the sensed bioelectrical activity.
  • the programmable device can be a doctor programmer or a patient programmer.
  • This application does not limit the data interaction between the doctor's programmer and the stimulator.
  • the doctor's programmer can interact with the stimulator through the server and the patient's programmer.
  • the doctor's programmer can interact with the stimulator through the patient's programmer, and the doctor's programmer can also directly interact with the stimulator.
  • the patient programmer may include a host (in communication with the server) and a slave (in communication with the stimulator), the host and slave being communicatively connected.
  • the doctor's programmer can interact with the server through a third-generation, fourth-generation or fifth-generation mobile communication technology (3G/4G/5G) network.
  • the server can interact with the host through the 3G/4G/5G network.
  • the host can exchange data with the slave through the Bluetooth protocol, the Wireless Fidelity (WiFi) protocol or the Universal Serial Bus (USB) protocol.
  • the slave can interact with the stimulator through the 401 MHz-406 MHz working frequency band or the 2.4 GHz-2.48 GHz working frequency band.
  • the doctor's programmer can also interact directly with the stimulator through the 401 MHz-406 MHz working frequency band or the 2.4 GHz-2.48 GHz working frequency band.
  • Figure 1 shows a structural block diagram of an implantable neurostimulation system provided by the present application.
  • the implantable neurostimulation system includes:
  • the programmable device 10 is arranged outside the patient's body, and the programmable device 10 is configured to provide interactive functions and display functions;
  • the stimulator 20 is arranged in the body of the patient, and the stimulator 20 is configured to release electrical stimulation energy to the tissue in the patient's body;
  • Controller 30 the controller 30 is configured to implement the steps of the control method.
  • the program-controlled device 10 may include, for example, one or more of a tablet computer, a notebook computer, a desktop computer, a mobile phone, and a smart wearable device.
  • the controller 30 may be integrated with the program-controlled device 10 .
  • the controller 30 may be integrated with the stimulator 20 .
  • Figure 2 shows a schematic flow chart of a control method provided by this application
  • Figure 3 shows a schematic diagram of a display image provided by this application.
  • the control method is used to realize the remote program control function between the program control device and the stimulator.
  • the stimulator is installed in the patient's body.
  • the control method includes the following steps.
  • Step S101: Use the program-controlled device to receive a stimulation configuration operation and, in response to the stimulation configuration operation, configure the stimulation parameters of the stimulator so that the stimulator releases electrical stimulation energy corresponding to the stimulation parameters to the patient's body tissue.
  • Step S102 Use a camera to collect whole-body video data of the patient, where the whole-body video data includes multiple frames of whole-body images.
  • Step S103 Intercept part images of one or more parts from each frame of the whole body image.
  • Step S104 Input multiple frames of part images of each part into the corresponding part diagnosis model to obtain the diagnosis result of each part.
  • Step S105 Superimpose the diagnosis results of each part to the corresponding area of each part in the person image to synthesize a display image, and the person image is provided with a person.
  • Step S106 Use the program-controlled device to display the display image.
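Steps S101-S106 above can be sketched as one cycle; every helper passed in is a hypothetical stand-in, since the application does not specify these interfaces:

```python
def run_remote_programming_cycle(configure_stimulator, capture_frames,
                                 crop_parts, diagnose, compose, display):
    """One pass through S101-S106 with caller-supplied (stand-in) helpers."""
    configure_stimulator()                      # S101: apply stimulation config
    frames = capture_frames()                   # S102: whole-body video frames
    part_sequences = crop_parts(frames)         # S103: part -> frame sequence
    diagnoses = {part: diagnose(part, seq)      # S104: per-part diagnosis model
                 for part, seq in part_sequences.items()}
    image = compose(diagnoses)                  # S105: synthesize display image
    display(image)                              # S106: show on the programmer
    return diagnoses
```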
  • the remote program control function between the program-controlled device and the stimulator is realized through the controller.
  • the program-controlled device and the stimulator can exchange data directly or indirectly. Therefore, after the program-controlled device receives the stimulation configuration operation, the stimulation parameters of the stimulator can be configured accordingly, so that the stimulator releases electrical stimulation energy corresponding to the stimulation parameters to the tissue in the patient's body.
  • The programming party, that is, the user of the program-controlled equipment, is usually a doctor with programming qualifications or corresponding abilities.
  • A camera set around the patient can be used to collect the patient's whole-body video data (during acquisition, the patient can be in a preset state or perform a preset action). Part images corresponding to one or more parts are intercepted from each frame of the whole-body image, and for each part, the multi-frame part images of that part (with a chronological sequence relationship) are input into the part diagnosis model corresponding to that part, which (automatically) outputs the diagnosis result of the part.
  • The diagnosis result can, for example, indicate whether the part is in a normal or abnormal state, and can also use scores or grades to indicate the normality of the part more precisely (for example, a higher score or grade indicates a more normal part, while a lower score or grade indicates a more abnormal part).
  • The diagnosis result of each part is superimposed on the corresponding area of the part in the person image, so that the programming party can intuitively and clearly see each part and its diagnosis result in the person image.
  • The diagnosis results superimposed on the corresponding area of each part provide the programming party with a more intuitive visual presentation.
  • This efficient, information-rich presentation optimizes the original visual presentation of the program-controlled equipment and allows the programming party to quickly understand the diagnosis results of each part (under the current stimulation parameters), thereby assisting the programming party in continuing to configure the stimulation parameters or adjusting the treatment plan.
  • In comparison, during related remote programming the programming party often needs to spend more time (than in offline face-to-face programming) guiding the patient to maintain the preset state or perform the preset actions; after the patient does so, the programming party needs to visually observe, or use machine vision technology to calculate, the patient's state parameters or action parameters, judge the normality of each part manually or intelligently based on the calculated parameters to obtain the diagnosis result of each part, and then read the diagnosis results presented in text form before performing the next step of processing. In these links, due to factors such as insufficient intelligence and uncertain network quality, there is a gap between the programming party and the patient.
  • The stimulation parameters of the stimulator may include at least one of the following: frequency (for example, the number of electrical stimulation pulse signals per unit time of 1 s, in Hz), pulse width (the duration of each pulse, in μs), amplitude (generally expressed as a voltage, i.e., the intensity of each pulse, in V), stimulation mode (including one or more of current mode, voltage mode, timed stimulation mode and cyclic stimulation mode), the doctor-controlled upper and lower limits (the range the doctor can adjust) and the patient-controlled upper and lower limits (the range the patient can adjust independently).
  • multiple stimulation parameters of the stimulator can be adjusted in current mode or voltage mode.
  • the stimulation parameter identification may be represented by at least one of Chinese characters, letters, numbers, symbols and special symbols. For example “A01”, “Amplitude” or "#01".
  • For example, the configuration parameters of the stimulator include: the stimulation mode is voltage mode, the frequency is 130 Hz, the pulse width is 60 μs, and the amplitude is 3 V.
  • The stimulation configuration operation in this application can be, for example, a stepwise adjustment of the current stimulation parameters. For example, in voltage mode, the increase button "+" corresponding to the amplitude can be clicked to adjust the amplitude from 3.3 V to 3.4 V (with 0.1 V as the step size); or the increase button "+" corresponding to the frequency can be clicked to adjust the frequency from 120 Hz to 130 Hz (with 10 Hz as the step size).
  • the stimulation configuration operation in this application can also be an operation of entering numerical values.
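The stepwise adjustment described above (e.g., amplitude in 0.1 V steps) can be sketched with clamping to the controlled upper and lower limits; the limit values in the test are illustrative:

```python
def step_adjust(value: float, step: float, lower: float, upper: float) -> float:
    """Apply one step to a stimulation parameter without leaving the
    doctor- or patient-controlled range [lower, upper]."""
    return round(min(max(value + step, lower), upper), 6)
```

For instance, pressing "+" on a 3.3 V amplitude with a 0.1 V step yields 3.4 V, while a step that would exceed the upper limit is clamped to that limit.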
  • the camera in this application is, for example, an optical camera and/or an infrared camera.
  • the whole-body video data in this application refers to video data that can capture the patient's whole body.
  • whole-body images refer to images in which the patient's entire body is shown.
  • the parts in this application may include, for example, hands, arms, legs, trunk, etc.
  • the preset state of the patient that the doctor is concerned about may be, for example, standing or sitting.
  • the preset actions may be, for example, pointing fingers, making a fist, raising a hand, raising an arm (making the arm perpendicular to the trunk), walking in a straight line, walking in a curve, and other action types.
  • the action type of fingering may include pointing fingers of the left hand and the right hand
  • the action type of making a fist may include making a fist with the left hand and the right hand
  • the action type of raising a hand may include raising the left hand and raising the right hand, and the action type of raising an arm
  • the type can include raising the left arm, raising the right arm, walking in a straight line can include walking in a straight line for 30 steps and walking in a straight line for 10 steps, etc.
  • Walking in a curve can include walking in a circle, walking in a semicircle, walking in an S shape, etc.
  • This application does not limit the method of intercepting and obtaining part images from the whole body image.
  • the image segmentation model can be used to perform image segmentation on the whole body image to obtain the part image of each part.
  • the part image corresponding to each part can be intercepted from the whole body image according to the distribution information of human body parts.
  • Part images are intercepted from the whole-body images, and the multi-frame part images are used to diagnose the corresponding part. Compared with whole-body images, part images are smaller, so the amount of computation required by the part diagnosis model can be reduced.
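Since the application leaves the interception method open (an image segmentation model or a body-part distribution prior may supply the regions), here is a minimal cropping sketch under an assumed (top, left, bottom, right) bounding-box convention:

```python
def crop_part(frame, box):
    """Return the sub-image of `frame` (a 2-D list of pixels) inside
    box = (top, left, bottom, right); the box would come from a
    segmentation model or body-part distribution information."""
    top, left, bottom, right = box
    return [row[left:right] for row in frame[top:bottom]]
```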
  • the left hand image can be intercepted from the patient's whole body image, and multiple frames of the left hand image can be input into the diagnosis model corresponding to the hand part to obtain the diagnosis result of the patient's left hand, such as severe shaking or severe Parkinson's symptoms.
  • the "corresponding part diagnosis model" in “input multiple frames of part images of each part into the corresponding part diagnosis model” refers to the part diagnosis model corresponding to each part.
  • For example, the multi-frame part images of the hand are input into the part diagnosis model corresponding to the hand, the multi-frame part images of the arm are input into the part diagnosis model corresponding to the arm, the multi-frame part images of the legs are input into the part diagnosis model corresponding to the legs, and the multi-frame part images of the trunk are input into the part diagnosis model corresponding to the trunk.
  • This application does not limit the acquisition method of the site diagnosis model.
  • this application can train the site diagnosis model.
  • this application can use a pre-trained site diagnosis model.
  • A corresponding part diagnosis model is set for each part: a part diagnosis model corresponding to the hand (also called a hand diagnosis model) is set for the hand, a part diagnosis model corresponding to the arm (also called an arm diagnosis model) is set for the arm, a part diagnosis model corresponding to the leg (also called a leg diagnosis model) is set for the leg, and a part diagnosis model corresponding to the trunk (also called a trunk diagnosis model) is set for the trunk.
  • the part diagnosis model corresponding to each part can be trained using the following training process:
  • each training data in the second training set includes multi-frame part images of the part used for training and annotation data of the diagnosis results of the part corresponding to the training data;
  • the part diagnosis model can be trained with a large amount of training data, and can predict the diagnosis results of each part based on different input data. It has a wide range of applications and a high level of intelligence.
  • a preset second deep learning model can be obtained by designing and establishing an appropriate number of neuron computing nodes and multi-layer computing hierarchies and selecting appropriate input and output layers. Through the learning and tuning of the preset second deep learning model, the functional relationship from input to output is established. Although the input-to-output functional relationship cannot be recovered exactly, it can approximate the real correlation as closely as possible.
  • the part diagnosis model trained in this way can realize the function of obtaining part diagnosis results, and its calculation results are highly accurate and reliable.
  • this application can use the above training process to train and obtain a part diagnosis model; in other embodiments, this application may use a pre-trained part diagnosis model.
  • This application does not limit the method of obtaining the annotated data.
  • manual annotation, automatic annotation, or semi-automatic annotation may be used.
  • This application does not limit the training process of the part diagnosis model.
  • the training method of the above-mentioned supervised learning may be used, or the training method of semi-supervised learning may be used, or the training method of unsupervised learning may be used.
  • This application does not limit the preset training end condition.
  • the preset training end condition can be, for example, that the number of training iterations reaches a preset number (for example, 1, 3, 10, 100, 1000, or 10000 iterations), that the training data in the second training set have completed one or more training passes, or that the total loss value obtained in the current training iteration is not greater than a preset loss value.
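The training procedure and end conditions above can be sketched as a loop. This is a minimal illustration, assuming a trivial one-parameter model in place of the second deep learning model; `train`, `loss_threshold`, and `max_epochs` are hypothetical names.

```python
# Sketch of the supervised training loop and end conditions described above.
# The model here is a trivial one-parameter linear model trained by gradient
# descent; in the original, a deep learning model would take its place.

def train(dataset, max_epochs=100, loss_threshold=1e-4, lr=0.1):
    w = 0.0  # single model parameter
    for epoch in range(max_epochs):
        total_loss = 0.0
        for x, y_true in dataset:           # each training datum: input + label
            y_pred = w * x                  # forward pass
            total_loss += (y_pred - y_true) ** 2
            w -= lr * 2 * (y_pred - y_true) * x  # parameter update
        # Preset end condition: total loss not greater than the preset value
        # (the iteration cap is the other end condition the text mentions).
        if total_loss <= loss_threshold:
            break
    return w

# Data generated by y = 2x, so training should drive w toward 2.
w = train([(1.0, 2.0), (2.0, 4.0)])
```

The two `break`/cap conditions correspond to the "loss not greater than a preset value" and "preset number of iterations" end conditions named in the text.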
  • the "overlay" in step S105 of this application can be implemented, for example, by using a top display method or a layer mixing method.
  • the layer blending method can use any of the following blending modes: Normal, Dissolve, Darken, Multiply, Color Burn, Linear Burn, Darker Color, Lighten, Screen, Color Dodge, Linear Dodge (Add), Lighter Color, Overlay, Soft Light, Hard Light, Vivid Light, Linear Light, Pin Light, Hard Mix, Difference, Exclusion, Subtract, Divide, Hue, Saturation, Color, and Luminosity.
  • when superimposed, the diagnosis results float above the character image (that is, on the top layer).
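The top-display superposition can be illustrated with the Normal blend mode, where the overlaid layer is composited over the character image as out = top · alpha + bottom · (1 − alpha). This is a minimal sketch, assuming float pixel values in [0, 1]; the helper name `blend_normal` is hypothetical.

```python
# Sketch of the "top display" superposition: the diagnosis-result layer is
# composited over the character image using the Normal blend mode with alpha
# (out = top * alpha + bottom * (1 - alpha)). Pixel values are floats in [0, 1].

def blend_normal(bottom, top, alpha):
    return [t * alpha + b * (1.0 - alpha) for b, t in zip(bottom, top)]

character_row = [0.2, 0.4, 0.6]   # one row of character-image pixels
overlay_row   = [1.0, 1.0, 1.0]   # fully white diagnosis-text layer

# alpha = 1.0 -> the overlay floats fully opaque on the top layer
opaque = blend_normal(character_row, overlay_row, 1.0)
# alpha = 0.5 -> overlay and character mix evenly
half = blend_normal(character_row, overlay_row, 0.5)
```

With alpha = 1.0 this reduces to the top-display method; smaller alphas give the layer-mixing behavior the text mentions.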
  • the corresponding area of each part in the person image may include, for example, the surroundings of the corresponding part of the person in the person image and/or the area where the corresponding part of the person is located.
  • Figure 4 shows a schematic flow chart of a part acquisition process provided by this application.
  • the acquisition process of the one or more parts may include the following steps.
  • Step S201 Use the program-controlled device to display multiple action types.
  • the multiple action types include pointing fingers, making fists, raising hands, raising arms, walking in a straight line, and walking in a preset curve.
  • Step S202 Use the program-controlled device to receive a selection operation for one of the action types, and in response to the selection operation, obtain one or more parts corresponding to the selected action type.
  • the program control side sometimes requires the patient to perform preset action types and observes the patient's movement manually, or uses machine vision technology for image processing, so as to diagnose the patient manually or intelligently; in such cases, the program control party is concerned with the action type. Therefore, the correspondence between action types and parts can be established in advance according to medical usage preferences, so that the program control party can directly select the action type of concern.
  • the action type then automatically corresponds to one or more parts, without the programmer needing to work out mentally which parts correspond to the action type and select them manually.
  • one action type corresponds to multiple parts (for example, the action type of pointing fingers generally corresponds to the left hand and right hand)
  • multiple parts can be diagnosed separately. In this case, the diagnosis can be performed in stages, one part at a time, with image processing used to obtain each diagnosis result; through multiple diagnoses, the diagnosis results of the multiple parts are obtained respectively.
  • the program control party only needs to select the action type once, and all subsequent steps are executed automatically to obtain the diagnosis results of multiple parts, which improves the level of intelligence, the user experience of the program control side, and the effect of remote program control.
  • the action type of pointing fingers can correspond to the two parts of the left hand and the right hand
  • the action type of making a fist can correspond to the two parts of the left hand and the right hand
  • the action type of raising the hand can correspond to the two parts of the left arm and the right arm.
  • the action type of raising the arm can correspond to the left arm and the right arm
  • the action type of walking in a straight line can correspond to the left leg and the right leg
  • the action type of walking in the preset curve can correspond to the two parts of the left leg and the right leg.
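The pre-established correspondence between action types and parts listed above can be sketched as a lookup table. This is a minimal illustration; `ACTION_TO_PARTS` and `parts_for_action` are hypothetical names.

```python
# Sketch of the pre-established correspondence between action types and parts:
# selecting an action type yields its parts automatically, as described above.

ACTION_TO_PARTS = {
    "pointing fingers": ["left hand", "right hand"],
    "making a fist": ["left hand", "right hand"],
    "raising the hand": ["left arm", "right arm"],
    "raising the arm": ["left arm", "right arm"],
    "walking in a straight line": ["left leg", "right leg"],
    "walking in a preset curve": ["left leg", "right leg"],
}

def parts_for_action(action_type):
    """Return the one or more parts corresponding to the selected action type."""
    return ACTION_TO_PARTS[action_type]

parts = parts_for_action("pointing fingers")
```

A single selection operation thus resolves to all relevant parts, so the program control party never has to pick parts manually.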
  • the corresponding area of each part in the person image includes a first corresponding area and a second corresponding area
  • step S105 may include the following steps.
  • the diagnosis result of each part is superimposed on the first corresponding area of that part in the person image, and the icon corresponding to the diagnosis result of each part is superimposed on the second corresponding area of that part in the person image, so that the display image is obtained by synthesis; the icon corresponding to the diagnosis result of each part graphically indicates the severity of that diagnosis result.
  • the diagnosis result of a part is displayed in the first corresponding area of that part in the person image, and the icon corresponding to the diagnosis result is displayed in the second corresponding area of that part in the person image. That is, the person image simultaneously displays the person, the diagnosis result of each part, and the icon corresponding to each diagnosis result, and the icons graphically indicate the severity of the diagnosis results (for example, icons of different colors, or icons with different patterns, can be used to distinguish different severities).
  • icons are more eye-catching and intuitive than diagnostic results in text format, and they are also more lively in appearance.
  • the first corresponding area of each part in the person image may be located around the corresponding part of the person, and the second corresponding area of each part in the person image may be located The area where the corresponding parts of the character are located.
  • the first corresponding area of each part in the person image is set around the corresponding part of the person, so that the diagnosis results in text format are not displayed on the upper layer (top layer, foreground) of each part, preventing the character's parts from being blocked by the diagnosis results (especially when the text corresponding to a diagnosis result is long);
  • by setting the second corresponding area to the area where the corresponding part of the character is located, graphical icons can be displayed directly in the area where each part is located. From the perspective of the program control party, the icon on each part intuitively shows the severity of the part's condition, which optimizes the visual presentation of the program-controlled device and improves the user experience of the program control side.
  • the diagnosis result of the hand can be displayed around the hand in the person image, and the icon corresponding to the diagnosis result of the hand can be displayed in the area where the hand is located in the person image.
  • the diagnosis result of the left hand [normal] is displayed around the left hand of the character in the character image, and the icon corresponding to the diagnosis result of the left hand [green check mark] (the check mark is, for example, "√") is displayed at the center position of the character's left hand in the character image.
  • the diagnosis result of the right hand [severe tremor] is displayed around the right hand of the character in the character image, and the icon corresponding to the diagnosis result of the right hand [red cross symbol] (the cross symbol is, for example, "×") is displayed at the center position of the character's right hand in the character image.
  • the diagnosis result of the left leg [mild tremor] is displayed around the knee joint of the character's left leg in the character image, and the icon [blue rectangle] corresponding to the diagnosis result of the left leg is displayed at the center position of the knee joint of the character's left leg in the character image.
  • diagnosis result of the right leg [moderate tremor] is displayed around the knee joint of the character's right leg in the character image, and the icon corresponding to the diagnosis result of the right leg [purple diamond] is displayed on The center position of the knee joint of the character's right leg in the character image.
  • the first corresponding area of each part in the person image may be located in the area where the corresponding part of the person is located, and the second corresponding area of each part in the person image may be located around the corresponding part of the character.
  • the first corresponding area and the second corresponding area of each part in the character image may be located around the corresponding part of the character.
  • the first corresponding area and the second corresponding area of each part in the character image may be located in the area where the corresponding part of the character is located.
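The first arrangement above (diagnosis text around the part, icon at the part's center) can be sketched as a coordinate computation. This is a minimal illustration assuming a bounding-box representation of a part; `placement` and `text_offset` are hypothetical names.

```python
# Sketch of placing the text diagnosis result around a part (first corresponding
# area) and its icon at the part's center (second corresponding area).

def placement(bbox, text_offset=10):
    """bbox = (x0, y0, x1, y1) of a part in the person image.

    Returns (text_anchor, icon_anchor): the text goes just above the part so it
    does not cover it; the icon goes at the part's center.
    """
    x0, y0, x1, y1 = bbox
    center = ((x0 + x1) // 2, (y0 + y1) // 2)   # second corresponding area
    around = ((x0 + x1) // 2, y0 - text_offset)  # first area: above the box
    return around, center

text_pos, icon_pos = placement((40, 100, 80, 140))
```

Swapping the two return values would give the alternative arrangements the surrounding bullets describe (text at the part, icon around it, and so on).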
  • Figure 5 shows a schematic flow chart of another control method provided by this application.
  • the method may further include the following steps.
  • Step S107 Utilize the program-controlled device to receive a sliding operation for rotating the character image. In response to the sliding operation, a rotated image corresponding to the character image is obtained.
  • Step S108 Superimpose the diagnosis results of each part to the corresponding area of each part in the rotated image to synthesize a new display image.
  • Step S109 Use the program-controlled device to display the new display image.
  • the program control party is allowed to rotate the character image by sliding, and the rotated image corresponding to the character image is used to synthesize a new display image. That is to say, although the viewing angle of the new display image has changed, the diagnosis results of each part can still be displayed in the corresponding areas.
  • the human body is three-dimensional; two-dimensional plane images can present neither depth information nor the back of the character image.
  • rotated images provide the viewing angle of the character after rotation, which intuitively reflects the spatial characteristics of each part and enhances the visual presentation of the program-controlled device.
  • in step S107, the step of obtaining the rotated image corresponding to the character image in response to the sliding operation may include: in response to the sliding operation, when the sliding operation is a slide to the left or to the right, using the rear view corresponding to the character image as the rotated image corresponding to the character image.
  • step S107 the step of obtaining the rotated image corresponding to the character image in response to the sliding operation may include:
  • the right view corresponding to the character image is used as the rotated image corresponding to the character image
  • the left view corresponding to the character image is used as the rotated image corresponding to the character image
  • the rear view corresponding to the character image is used as the rotated image corresponding to the character image.
  • step S107 the step of obtaining the rotated image corresponding to the character image in response to the sliding operation may include the following steps.
  • Step S301 In response to the sliding operation, obtain a sliding path of the sliding operation, where the sliding path is represented by a vector.
  • Step S302 Determine the rotation direction corresponding to the character image based on the path direction of the sliding path.
  • Step S303 Determine the rotation angle corresponding to the character image based on the path length of the sliding path.
  • Step S304 Obtain the rotated image based on the rotation direction and the rotation angle.
  • a vector is used to represent the sliding path, so that the path direction and path length of the sliding path can be obtained, and the rotation direction and rotation angle corresponding to the character image can be determined respectively.
  • This allows the program control party, through differentiated sliding operations, to finely control the rotation direction and rotation angle of the character image and obtain the required rotated image.
  • the corresponding relationship between path directions and rotation directions is configured, so that the rotation direction corresponding to each path direction can be obtained.
  • the path directions can be classified into four direction types: left, right, upward, and downward, or can be classified into six, eight, or even more direction types.
  • the corresponding rotation direction is, for example, rotating to the left with the character's torso as the axis.
  • the corresponding relationship between the path length and the rotation angle is configured, so that the corresponding rotation angle can be obtained for each path length.
  • the rotation angle may be, for example, 30 degrees, 45 degrees, 60 degrees, 90 degrees, 135 degrees, 180 degrees, 270 degrees, etc.
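Steps S301 to S304 can be sketched as interpreting the swipe vector: its direction selects the rotation direction and its length selects the rotation angle. This is a minimal illustration; the four direction types, the length-to-angle table `ANGLE_STEPS`, and the function name `interpret_swipe` are assumptions, not from the original.

```python
# Sketch of steps S301-S304: the sliding path is a vector; the path direction
# determines the rotation direction and the path length determines the angle.

import math

ANGLE_STEPS = [(50, 30), (120, 90), (250, 180)]  # (max path length px, degrees)

def interpret_swipe(dx, dy):
    # Classify the path direction into one of four direction types.
    if abs(dx) >= abs(dy):
        direction = "rotate left" if dx < 0 else "rotate right"
    else:
        direction = "rotate up" if dy < 0 else "rotate down"
    # Map the path length to a configured rotation angle.
    length = math.hypot(dx, dy)
    for max_len, degrees in ANGLE_STEPS:
        if length <= max_len:
            return direction, degrees
    return direction, 270  # longest swipes give the largest angle

result = interpret_swipe(-100, 10)
```

A finer classification into six, eight, or more direction types, as the text allows, would only change the first branch.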
  • step S304 may include the following steps.
  • Step S401 Input the person image into a three-dimensional reconstruction model used to convert the 2D image into a 3D image, and obtain a 3D image corresponding to the person image.
  • Step S402 Rotate the 3D image corresponding to the character image according to the rotation direction and the rotation angle to obtain a rotated image corresponding to the character image.
  • the training process of the three-dimensional reconstruction model includes:
  • each training data in the first training set includes a 2D image used for training and annotation data of its corresponding 3D image;
  • the 3D reconstruction model can be trained with a large amount of training data, and can predict corresponding 3D images for different input data. It has a wide range of applications and a high level of intelligence.
  • after the 3D image is obtained, it is rotated according to the rotation direction and the rotation angle, thereby obtaining the rotated image required by the program control party.
  • the controller can use the program-controlled device to receive a new sliding operation to obtain a new rotation direction and a new rotation angle, and then rotate the 3D image corresponding to the character image according to them to obtain a new rotated image.
  • the resulting 3D image can be reused to generate rotated images with multiple rotation directions and/or rotation angles, meeting the program control party's need to view the character from multiple angles.
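The reuse described above — one reconstructed 3D result serving several rotation requests — can be sketched with a point cloud rotated about the vertical (torso) axis. This is a minimal illustration; the point-cloud representation and the name `rotate_about_torso` are assumptions standing in for the reconstruction model's actual 3D output.

```python
# Sketch of reusing one reconstructed 3D result for several viewing angles:
# rotate a point cloud about the vertical (torso) axis by any requested angle.

import math

def rotate_about_torso(points, degrees):
    """Rotate (x, y, z) points about the vertical y-axis by `degrees`."""
    rad = math.radians(degrees)
    c, s = math.cos(rad), math.sin(rad)
    return [(x * c + z * s, y, -x * s + z * c) for x, y, z in points]

cloud = [(1.0, 0.0, 0.0)]  # stand-in for the model's reconstructed 3D image
# The same 3D result serves multiple rotation requests without re-reconstruction.
quarter = rotate_about_torso(cloud, 90)    # side view
half = rotate_about_torso(cloud, 180)      # rear view
```

Each new sliding operation only changes the angle passed in; the costly 2D-to-3D reconstruction runs once.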
  • this application can use the above training process to train to obtain a three-dimensional reconstruction model.
  • this application can use a pre-trained three-dimensional reconstruction model; for example, the image reconstruction network model trained in patent CN111243085B, "Training Method and Device for Image Reconstruction Network Model and Electronic Equipment", or the three-dimensional reconstruction network model trained in patent CN114399424A, "Model Training Method and Related Equipment".
  • This application does not limit the method of obtaining the annotated data.
  • manual annotation, automatic annotation, or semi-automatic annotation may be used.
  • This application does not limit the training process of the three-dimensional reconstruction model.
  • the training method of the above-mentioned supervised learning may be used, or the training method of semi-supervised learning may be used, or the training method of unsupervised learning may be used.
  • This application does not limit the preset training end condition. For example, it can be that the number of training iterations reaches a preset number (for example, 1, 3, 10, 100, 1000, or 10000 iterations), that the training data in the first training set have completed one or more training passes, or that the total loss value obtained in the current training iteration is not greater than a preset loss value.
  • the character image can be switched between a character color image, a character skeleton image, a character meridians image, and a character cartoon image.
  • This provides the program control party with character images in a variety of styles, making it easier for the program control party to choose a style that suits their own preferences and improving the program control party's visual experience.
  • the process of switching the character image may include: using the program-controlled device to display, around the character image, operation buttons corresponding to the character color image, the character skeleton image, the character meridian image, and the character cartoon image, and using the program-controlled device to receive a click on one of the operation buttons.
  • in response, the program-controlled device displays the character image corresponding to the clicked operation button.
  • the operation buttons can be displayed as thumbnails of each character image, or in text format with the labels "color", "skeleton", "meridians", and "cartoon".
  • This application also provides a controller whose implementation and technical effects are consistent with those described in the above method implementation; some content will not be repeated.
  • the controller is configured to realize the remote program control function between the program control device and the stimulator, the stimulator is arranged in the patient's body, and the controller is configured to:
  • the whole-body video data includes multiple frames of whole-body images
  • Part images of one or more parts are intercepted from each frame of the whole body image
  • the display image is displayed using the program-controlled device.
  • the acquisition process of the one or more parts includes:
  • the program-controlled device is used to display multiple action types, and the multiple action types include pointing fingers, making fists, raising hands, raising arms, walking in a straight line, and walking in a preset curve;
  • the program-controlled device is used to receive a selection operation for one of the action types, and in response to the selection operation, one or more parts corresponding to the selected action type are obtained.
  • the corresponding area of each part in the person image includes a first corresponding area and a second corresponding area
  • the controller is configured to synthesize the display image in the following manner:
  • the diagnosis result of each part is superimposed onto the first corresponding area of that part in the person image, and the icon corresponding to the diagnosis result of each part is superimposed onto the second corresponding area of that part in the person image, so that the display image is synthesized; the icon corresponding to the diagnosis result of each part graphically indicates the severity of that diagnosis result.
  • the first corresponding area of each part in the person image is located around the corresponding part of the person, and the second corresponding area of each part in the person image is located in the area where the corresponding part of the person is located.
  • the controller is further configured to:
  • the new display image is displayed using the program-controlled device.
  • the controller is configured to obtain the rotated image in the following manner:
  • the rotated image is obtained based on the rotation direction and the rotation angle.
  • the controller is configured to obtain the rotation image corresponding to the character image in the following manner:
  • the training process of the three-dimensional reconstruction model includes:
  • each training data in the first training set includes a 2D image used for training and annotation data of its corresponding 3D image;
  • the character image can be switched between a character color image, a character skeleton image, a character meridians image, and a character cartoon image.
  • FIG 8 shows a structural block diagram of a controller 200 provided by this application.
  • the controller 200 may include, for example, at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
  • the memory 210 may include readable media in the form of volatile memory, such as random access memory (Random Access Memory, RAM) 211 and/or cache memory 212, and may also include read-only memory (Read-Only Memory, ROM) 213 .
  • the memory 210 also stores a computer program.
  • the computer program can be executed by the processor 220, so that the processor 220 realizes the function of any of the above controllers.
  • the implementation method and the technical effects achieved are consistent with those described in the above method implementation; some content will not be repeated.
  • Memory 210 may also include a utility 214 having at least one program module 215. Such program modules 215 include: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
  • the processor 220 can execute the computer program described above, and can execute the utility 214.
  • the processor 220 may be implemented using one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
  • Bus 230 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
  • the controller 200 may also communicate with one or more external devices 240, such as a keyboard, a pointing device, or a Bluetooth device, with one or more devices that enable interaction with the controller 200, and/or with any device (for example, a router or a modem) that enables the controller 200 to communicate with one or more other computing devices. This communication may occur through the input/output interface 250.
  • the controller 200 can also communicate with one or more networks (such as a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), and/or a public network, such as the Internet) through the network adapter 260.
  • Network adapter 260 may communicate with other modules of controller 200 via bus 230.
  • other hardware and/or software modules may be used in conjunction with the controller 200, including: microcode, device drivers, redundant processors, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, data backup storage platforms, and the like.
  • the present application also provides a computer-readable storage medium that stores a computer program.
  • when the computer program is executed by a processor, it implements the functions of any of the above controllers 200 or implements the above control method.
  • the steps and their implementation, as well as the technical effects achieved, are consistent with those described in the controller implementation; some content will not be repeated.
  • Figure 9 shows a schematic structural diagram of a program product for implementing a control method provided by this application.
  • the program product can take the form of a portable Compact Disc Read Only Memory (CD-ROM) and include program code, and can be run on a terminal device, such as a personal computer.
  • the program product of the present application is not limited thereto.
  • the readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus or device.
  • the program product may take the form of one or more readable media in any combination.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination thereof.
  • Readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), flash memory, optical fiber, portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the storage medium may be a non-transitory storage medium.
  • a readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code therein. Such a propagated data signal may take many forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • a readable signal medium may also be any readable medium, other than a readable storage medium, that can transmit, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a readable storage medium may be transmitted using any appropriate medium, including wireless, wired, optical cable, radio frequency (RF), and the like, or any suitable combination of the above.
  • the program code for performing the operations of the present application can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as C or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • the remote computing device may be connected to the user's computing device through any kind of network, including a LAN or WAN, or may be connected to an external computing device (for example, through the Internet using an Internet service provider).

Abstract

This application provides a controller, an implantable neurostimulation system, and a computer-readable storage medium. The controller is configured to: receive a stimulation configuration operation via a program-controlled device and, in response, configure the stimulation parameters of a stimulator so that the stimulator releases electrical stimulation energy corresponding to the stimulation parameters into the patient's body tissue; capture whole-body video data of the patient with a camera, the whole-body video data including multiple frames of whole-body images; crop part images of one or more parts from each frame of whole-body image; input the multiple frames of part images of each part into the part diagnosis model corresponding to that part to obtain a diagnosis result for each part; superimpose the diagnosis result of each part onto the corresponding area of that part in a person image, in which a single person is depicted, to synthesize a display image; and display the display image on the program-controlled device.

Description

Controller, implantable neurostimulation system, and computer-readable storage medium
This application claims priority to Chinese patent application No. 202210585177.7, filed with the China National Intellectual Property Administration on May 26, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical fields of implantable devices, remote programming, the Internet of Things, and deep learning, and in particular to a controller, an implantable neurostimulation system, and a computer-readable storage medium.
Background
With technological development and social progress, patients hope to improve their quality of life through a variety of treatments, among which implantable devices have broad application prospects. An implantable device is a medical device that, by means of surgery, enters the human body or a natural orifice wholly or partially, or replaces an epithelial or ocular surface, and that remains in the body for 30 days or more after the procedure or is absorbed by the body. An implantable medical system comprising a program-controlled device and an implantable device can provide patients with refined therapy whose parameters are controllable, and is welcomed by many consumers in the market.
Patent CN113362946A discloses a video processing apparatus, an electronic device, and a computer-readable storage medium. The video processing apparatus is applied to an electronic device that exchanges data with a display screen, a camera, and a physician device, the electronic device being configured to process video of a patient with Parkinson's disease. The apparatus includes: a prompt display module configured to display prompt information for at least one specified action on the display screen; a video capture module configured to capture video of the patient with the camera; an action prediction module configured to input the patient's video into a Parkinson detection model, predict the patient's action information, and send the action information to the physician device, the action information including at least the amplitude and/or frequency of one of the specified actions; an information acquisition module configured to acquire the patient's disease information; and a suggestion module configured to obtain a suggested programming strategy for the patient based on the patient's disease information and action information and send it to the physician device. This apparatus helps the physician quantitatively understand the patient's local posture and motor performance during movement, but it mentions neither automatic diagnosis of the patient's actions nor optimization of the visual presentation on the physician device.
Summary
This application provides a controller, an implantable neurostimulation system, and a computer-readable storage medium that automatically diagnose the patient's body parts and optimize the visual presentation of the program-controlled device.
This application provides a controller configured to implement a remote programming function between a program-controlled device and a stimulator, the stimulator being disposed in a patient's body, the controller being configured to:
receive a stimulation configuration operation via the program-controlled device and, in response to the stimulation configuration operation, configure stimulation parameters of the stimulator so that the stimulator releases electrical stimulation energy corresponding to the stimulation parameters into the patient's body tissue;
capture whole-body video data of the patient with a camera, the whole-body video data including multiple frames of whole-body images;
crop part images of one or more parts from each frame of whole-body image;
input the multiple frames of part images of each part into the part diagnosis model corresponding to that part to obtain a diagnosis result for each part;
superimpose the diagnosis result of each part onto the corresponding area of that part in a person image to synthesize a display image, the person image depicting a single person;
display the display image on the program-controlled device.
In some optional embodiments, the acquisition process of the one or more parts includes:
displaying multiple action types on the program-controlled device, the multiple action types including several of pointing fingers, making a fist, raising the hand, raising the arm, walking in a straight line, and walking along a preset curve;
receiving, via the program-controlled device, a selection operation for one of the action types and, in response to the selection operation, obtaining the one or more parts corresponding to the selected action type.
In some optional embodiments, the corresponding area of each part in the person image includes a first corresponding area and a second corresponding area, and the controller is configured to synthesize the display image as follows:
superimposing the diagnosis result of each part onto the first corresponding area of that part in the person image, and superimposing an icon corresponding to the diagnosis result of each part onto the second corresponding area of that part in the person image, to synthesize the display image, the icon corresponding to the diagnosis result of each part graphically indicating the severity of that diagnosis result.
In some optional embodiments, the first corresponding area of each part in the person image is located around the corresponding part of the person, and the second corresponding area of each part in the person image is located in the area where the corresponding part of the person is located.
In some optional embodiments, the controller is further configured to:
receive, via the program-controlled device, a sliding operation for rotating the person image and, in response to the sliding operation, obtain a rotated image corresponding to the person image;
superimpose the diagnosis result of each part onto the corresponding area of that part in the rotated image to synthesize a new display image;
display the new display image on the program-controlled device.
In some optional embodiments, the controller is configured to obtain the rotated image as follows:
in response to the sliding operation, obtaining a sliding path of the sliding operation, the sliding path being represented by a vector;
determining a rotation direction corresponding to the person image based on the path direction of the sliding path;
determining a rotation angle corresponding to the person image based on the path length of the sliding path;
obtaining the rotated image based on the rotation direction and the rotation angle.
In some optional embodiments, the controller is configured to obtain the rotated image corresponding to the person image as follows:
inputting the person image into a three-dimensional reconstruction model for converting two-dimensional (2D) images into three-dimensional (3D) images, to obtain a 3D image corresponding to the person image;
rotating the 3D image corresponding to the person image according to the rotation direction and the rotation angle, to obtain the rotated image corresponding to the person image;
wherein the training process of the three-dimensional reconstruction model includes:
obtaining a first training set, each training datum in the first training set including a 2D image used for training and annotation data of its corresponding 3D image;
performing the following processing for each training datum in the first training set:
inputting the 2D image in the training datum into a preset first deep learning model to obtain prediction data of the 3D image corresponding to the 2D image;
updating the model parameters of the first deep learning model based on the prediction data and the annotation data of the 3D image corresponding to the 2D image;
detecting whether a preset training end condition is met; if so, taking the trained first deep learning model as the three-dimensional reconstruction model; if not, continuing to train the first deep learning model with the next training datum.
In some optional embodiments, the person image can be switched among a color person image, a person skeleton image, a person meridian image, and a person cartoon image.
This application further provides a control method for implementing a remote programming function between a program-controlled device and a stimulator, the stimulator being disposed in a patient's body, the control method including:
receiving a stimulation configuration operation via the program-controlled device and, in response to the stimulation configuration operation, configuring stimulation parameters of the stimulator so that the stimulator releases electrical stimulation energy corresponding to the stimulation parameters into the patient's body tissue;
capturing whole-body video data of the patient with a camera, the whole-body video data including multiple frames of whole-body images;
cropping part images of one or more parts from each frame of whole-body image;
inputting the multiple frames of part images of each part into the corresponding part diagnosis model to obtain a diagnosis result for each part;
superimposing the diagnosis result of each part onto the corresponding area of that part in a person image to synthesize a display image, the person image depicting a single person;
displaying the display image on the program-controlled device.
在一些可选的实施方式中,所述一个或多个部位的获取过程包括:
利用所述程控设备显示多个动作类型,多个所述动作类型包括对指、握拳、举手、抬臂、走直线和走预设曲线中的多种;
利用所述程控设备接收针对其中一个所述动作类型的选择操作,响应于所述选择操作,获取被选择的动作类型对应的一个或多个部位。
在一些可选的实施方式中,所述每个部位在所述人物图像中的对应区域包括第一对应区域和第二对应区域,所述将所述每个部位的诊断结果叠加至所述每个部位在人物图像中的对应区域,以合成得到显示图像,包括:
将所述每个部位的诊断结果叠加至所述每个部位在所述人物图像中的第一对应区域,将所述每个部位的诊断结果对应的图标叠加至所述每个部位在所述人物图像中的第二对应区域,以合成得到所述显示图像,所述每个部位的诊断结果对应的图标用于图形化地指示所述每个部位的诊断结果的严重程度。
在一些可选的实施方式中,所述每个部位在所述人物图像中的第一对应区域位于所述人物的对应部位的周围,所述每个部位在所述人物图像中的第二对应区域位于所述人物的对应部位所处的区域。
在一些可选的实施方式中,所述方法还包括:
利用所述程控设备接收用于旋转所述人物图像的滑动操作,响应于所述滑动操作,获取所述人物图像对应的旋转图像;
将所述每个部位的诊断结果叠加至所述每个部位在所述旋转图像中的对应区域,以合成得到新的显示图像;
利用所述程控设备显示所述新的显示图像。
在一些可选的实施方式中,所述响应于所述滑动操作,获取所述人物图像对应的旋转图像,包括:
响应于所述滑动操作,获取所述滑动操作的滑动路径,所述滑动路径采用矢量表示;
基于所述滑动路径的路径方向,确定所述人物图像对应的旋转方向;
基于所述滑动路径的路径长度,确定所述人物图像对应的旋转角度;
基于所述旋转方向和所述旋转角度,获得所述旋转图像。
在一些可选的实施方式中,所述基于所述旋转方向和所述旋转角度,获得所述旋转图像,包括:
将所述人物图像输入用于将2D图像转换为3D图像的三维重建模型,得到所述人物图像对应的3D图像;
按照所述旋转方向和所述旋转角度对所述人物图像对应的3D图像进行旋转,以得到所述人物图像对应的旋转图像;
其中,所述三维重建模型的训练过程包括:
获取第一训练集,所述第一训练集中的每个训练数据包括一个用于训练的2D图像及其对应的3D图像的标注数据;
针对所述第一训练集中的每个训练数据,执行以下处理:
将所述训练数据中的2D图像输入预设的第一深度学习模型,得到所述2D图像对应的3D图像的预测数据;
基于所述2D图像对应的3D图像的预测数据和标注数据,对所述第一深度学习模型的模型参数进行更新;
检测是否满足预设的训练结束条件;如果是,则将训练得到的第一深度学习模型作为所述三维重建模型;如果否,则利用下一个所述训练数据继续训练所述第一深度学习模型。
在一些可选的实施方式中,所述人物图像能够在人物彩色图像、人物骨骼图像、人物经脉图像和人物卡通图像之间切换。
本申请还提供了一种植入式神经刺激系统,所述植入式神经刺激系统包括:
程控设备,所述程控设备设置于患者的体外,所述程控设备被配置成提供交互功能和显示功能;
刺激器,所述刺激器设置于所述患者的体内,所述刺激器被配置成向所述患者的体内组织释放电刺激能量;
上述任一项控制器。
本申请还提供了一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时实现上述任一项控制器的功能或者实现上述任一项控制方法的步骤。
附图说明
图1示出了本申请提供的一种植入式神经刺激系统的结构框图。
图2示出了本申请提供的一种控制方法的流程示意图。
图3示出了本申请提供的一种显示图像的示意图。
图4示出了本申请提供的一种部位获取过程的流程示意图。
图5示出了本申请提供的另一种控制方法的流程示意图。
图6示出了本申请提供的一种获得旋转图像的流程示意图。
图7示出了本申请提供的另一种获得旋转图像的流程示意图。
图8示出了本申请提供的一种控制器的结构框图。
图9示出了本申请提供的一种用于实现控制方法的程序产品的结构示意图。
具体实施方式
下面将结合本申请的说明书附图以及实施方式,对本申请中的技术方案进行描述,在不相冲突的前提下,以下描述的多个实施方式之间或多个技术特征之间可以任意组合形成新的实施方式。
在本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,a和b,a和c,b和c,a和b和c,其中a、b和c可以是单个,也可以是多个。值得注意的是,“至少一项(个)”还可以解释成“一项(个)或多项(个)”。
本申请中，“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施方式或设计方案不应被解释为比其他实施方式或设计方案更优选或更具优势。确切而言，使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
下面,对本申请的应用领域进行简单说明。
植入式神经刺激系统主要包括植入体内的刺激器以及体外的程控设备。神经调控技术主要是通过立体定向手术在体内特定结构(即靶点)植入电极,并由植入患者体内的刺激器经电极向靶点发放电脉冲,调控相应神经结构和网络的电活动及所述神经结构和网络的功能,从而改善症状、缓解病痛。其中,刺激器可以是植入式神经电刺激装置、植入式心脏电刺激系统(又称心脏起搏器)、植入式药物输注装置(Implantable Drug Delivery System,IDDS)和导线转接装置中的任意一种。植入式神经电刺激装置例如是脑深部电刺激(Deep Brain Stimulation,DBS)系统、植入式脑皮层刺激(Cortical Nerve Stimulation,CNS)系统、植入式脊髓电刺激(Spinal Cord Stimulation,SCS)系统、植入式骶神经电刺激(Sacral Nerve Stimulation,SNS)系统、植入式迷走神经电刺激(Vagus Nerve Stimulation,VNS)系统等。
刺激器可以包括植入式脉冲发生器(Implantable Pulse Generator,IPG)、延伸导线和电极导线,IPG设置于患者体内,依靠密封电池和电路向体内组织提供可控制的电刺激能量,通过植入的延伸导线和电极导线,为体内组织的特定区域提供一路或两路可控制的特定电刺激能量。延伸导线配合IPG使用,作为电刺激信号的传递媒体,将IPG产生的电刺激信号,传递给电极导线。电极导线将IPG产生的电刺激信号,通过多个电极触点,向体内组织的特定区域释放电刺激能量;所述植入式医疗设备具有单侧或双侧的一路或多路电极导线,所述电极导线上设置有多个电极触点,所述电极触点可以均匀排列或者非均匀排列在电极导线的周向上。作为一个示例,所述电极触点以4行3列的阵列(共计12个电极触点)排列在电极导线的周向上。电极触点可以包括刺激电极触点和/或采集电极触点。电极触点例如可以采用片状、环状以及点状等形状。
在一些可能的实现方式中，受刺激的体内组织可以是患者的脑组织，受刺激的部位可以是脑组织的特定部位。当患者的疾病类型不同时，受刺激的部位一般来说是不同的，所使用的刺激触点(单源或多源)的数量、一路或多路(单通道或多通道)特定电刺激信号的运用以及刺激参数数据也是不同的。本申请对适用的疾病类型不做限定，其可以是脑深部刺激(DBS)、脊髓刺激(SCS)、骨盆刺激、胃刺激、外周神经刺激、功能性电刺激所适用的疾病类型。其中，DBS可以用于治疗或管理的疾病类型包括：痉挛疾病(例如，癫痫)、疼痛、偏头痛、精神疾病(例如，重度抑郁症(Major Depressive Disorder,MDD))、躁郁症、焦虑症、创伤后压力心理障碍症、轻郁症、强迫症(Obsessive-Compulsive Disorder,OCD)、行为障碍、情绪障碍、记忆障碍、心理状态障碍、移动障碍(例如，特发性震颤或帕金森氏病)、亨廷顿病、阿尔茨海默症、药物成瘾症、自闭症或其他神经学或精神科疾病和损害。当DBS用于治疗药物成瘾症患者时，可以帮助吸毒人员戒毒，提升他们的幸福感和生命质量。
本申请中,程控设备和刺激器建立程控连接时,可以利用程控设备调整刺激器的电刺激信号的刺激参数,也可以通过刺激器感测患者脑深部的生物电活动,并可以通过所感测到的生物电活动来继续调节刺激器的电刺激信号的刺激参数。
程控设备可以是医生程控器或者患者程控器。
本申请对医生程控器和刺激器的数据交互不进行限制,当医生远程程控时,医生程控器可以通过服务器、患者程控器与刺激器进行数据交互。当医生线下和患者面对面进行程控时,医生程控器可以通过患者程控器与刺激器进行数据交互,医生程控器还可以直接与刺激器进行数据交互。
患者程控器可以包括(与服务器通信的)主机和(与刺激器通信的)子机，主机和子机可通信地连接。其中，医生程控器可以通过第三代移动通信技术/第四代移动通信技术/第五代移动通信技术(3rd-Generation/the 4th Generation mobile communication technology/the 5th Generation mobile communication technology,3G/4G/5G)网络与服务器进行数据交互，服务器可以通过3G/4G/5G网络与主机进行数据交互，主机可以通过蓝牙协议/无线保真(Wireless Fidelity,WIFI)协议/通用串行总线(Universal Serial Bus,USB)协议与子机进行数据交互，子机可以通过401MHz-406MHz工作频段/2.4GHz-2.48GHz工作频段与刺激器进行数据交互，医生程控器可以通过401MHz-406MHz工作频段/2.4GHz-2.48GHz工作频段与刺激器直接进行数据交互。
参见图1,图1示出了本申请提供的一种植入式神经刺激系统的结构框图。
所述植入式神经刺激系统包括:
程控设备10,所述程控设备10设置于患者的体外,所述程控设备10被配置成提供交互功能和显示功能;
刺激器20,所述刺激器20设置于所述患者的体内,所述刺激器20被配置成向所述患者的体内组织释放电刺激能量;
控制器30,所述控制器30被配置成实现控制方法的步骤。
程控设备10例如可以包括平板电脑、笔记本电脑、台式机、手机和智能穿戴设备中的一种或多种。
在一些可选的实施方式中,所述控制器30可以与所述程控设备10结合为一体。
在另一些可选的实施方式中,所述控制器30可以与所述刺激器20结合为一体。
下文将对控制方法进行说明。
参见图2和图3,图2示出了本申请提供的一种控制方法的流程示意图,图3示出了本申请提供的一种显示图像的示意图。所述控制方法用于实现程控设备和刺激器之间的远程程控功能,所述刺激器设置于患者的体内,所述控制方法包括以下步骤。
步骤S101:利用所述程控设备接收刺激配置操作,响应于所述刺激配置操作,配置所述刺激器的刺激参数,以使所述刺激器向所述患者的体内组织释放所述刺激参数对应的电刺激能量。
步骤S102:利用摄像头采集得到所述患者的全身视频数据,所述全身视频数据包括多帧全身图像。
步骤S103:从每帧全身图像中截取得到一个或多个部位的部位图像。
步骤S104:将每个部位的多帧部位图像输入对应的部位诊断模型,得到每个部位的诊断结果。
步骤S105:将每个部位的诊断结果叠加至每个部位在人物图像中的对应区域,以合成得到显示图像,所述人物图像中设置有一个人物。
步骤S106:利用所述程控设备显示所述显示图像。
由此,通过控制器实现程控设备和刺激器之间的远程程控功能,例如,程控设备和刺激器之间可以直接或者间接进行数据交互,因此,在利用程控设备接收到刺激配置操作后,可以据此配置刺激器的刺激参数,以使刺激器向患者的体内组织释放刺激参数对应的电刺激能量。
在远程程控过程中，程控方(即程控设备的使用者，一般是具有程控资质或者相应能力的医生)无法直接观察患者，因此可以利用设置于患者周围的摄像头采集得到患者的全身视频数据(采集过程中患者可以处于预设状态或者做出预设动作)，从每帧全身图像中截取得到一个或多个部位对应的部位图像，针对每个部位，将该部位对应的(在时间顺序上具有先后关系的)多帧部位图像输入该部位对应的部位诊断模型，利用部位诊断模型(自动化地)输出该部位的诊断结果，诊断结果例如可以指示该部位处于正常状态或者异常状态，还可以采用分数或者等级来更精细化地指示该部位的正常程度(例如分数、等级越高表示部位越正常，而采用较低的分数、等级表示部位异常)。
在合成显示图像时,针对每个部位,将该部位的诊断结果叠加至该部位在人物图像中的对应区域,从而使程控方可以在人物图像中,直观、清晰地看到每个部位及其诊断结果,相对于单调的、单独的、不能与人物图像一起显示的文本格式的诊断结果,叠加于每个部位的对应区域的诊断结果,能够为程控方提供更直观的视觉呈现方式,当人物图像贴近患者自身(或者人体构造)、并且诊断结果采用与人物图像对比度较高的显示方式时,程控方可以在短时间内高效地获得较多的信息量,而不需要对照着文本格式的诊断结果自行脑内想象每个部位的位置(例如当诊断的部位比较多时),因此,这种高效率、大信息量的呈现方式优化了程控设备原本的视觉呈现方式,能够使程控方快速了解(在当前的刺激参数下)每个部位的诊断结果,从而辅助程控方继续进行刺激参数的配置或者进行治疗方案的调整。
事实上,在当前的远程程控技术中,程控方往往需要用较多时间(比线下面对面程控更多的时间)来指导患者保持预设状态或者做出预设动作,而当患者这样做了之后,又需要程控方肉眼观测或者利用机器视觉技术计算得到患者的状态参数或者动作参数,根据计算得到的参数人工或者智能化地判断患者的每个部位的正常程度从而获知每个部位的诊断结果,再对诊断结果进行文本格式的呈现,由程控方阅读诊断结果,并进行下一步的处置;在上述环节中,由于智能化程度不够高以及网络质量不确定等因素,程控方和患者之间的远程程控效率受到网络传输速度以及患者的表达能力等制约,不能有效发挥作用,尚未被所有程控方和患者所接受,在这个前提下,无论对上述环节中的哪一者进行改善,都将有助于从整体上改善双方的远程程控体验,从技术手段上使远程程控技术赢得程控方和患者的信任,让更多患者足不出户即可享受到便利的程控体验,而患者的高度认同会打消程控方应用新技术的顾虑,从而助力远程程控技术实现更大范围的推广和应用,使远程程控技术拥有更广阔的应用前景和更高的商业价值。
刺激器的刺激参数可以包括以下至少一种:频率(例如是单位时间1s内的电刺激脉冲信号个数,单位为Hz)、脉宽(每个脉冲的持续时间,单位为μs)、幅值(一般用电压表述,即每个脉冲的强度,单位为V)、刺激模式(包括电流模式、电压模式、定时刺激模式和循环刺激模式中的一种或多种)、医生控制上限及下限(医生可调节的范围)和患者控制上限及下限(患者可自主调节的范围)。
在一应用中,可以在电流模式或者电压模式下对刺激器的多个刺激参数进行调节。
刺激参数标识可以使用中文、字母、数字、符号和特殊符号中的至少一种来表示。例如“A01”、“幅值”或者“#01”。
在一应用中,刺激器的配置参数包括:刺激模式为电压模式、频率为130Hz、脉宽为60μs和幅值为3V。
本申请中的刺激配置操作例如可以是对当前刺激参数进行步进调节的操作,例如可以在电压模式下,点击幅值对应的增加按钮“+”,将幅值从3.3V调节为3.4V(以0.1V作为步长);或者可以点击频率对应的增加按钮“+”,将频率从120Hz调节为130Hz(以10Hz作为步长)。本申请中的刺激配置操作还可以是录入数值的操作。
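作为示意，上述步进调节逻辑可以用如下 Python 代码表达。其中参数名、步长以及医生控制上下限均为假设的示例值，并非本申请限定的实现：

```python
# 假设性示例：对幅值做步进调节，并钳位在医生可调范围内
DOCTOR_MIN_V = 0.0      # 假设的医生控制下限（V）
DOCTOR_MAX_V = 10.0     # 假设的医生控制上限（V）
AMPLITUDE_STEP_V = 0.1  # 以0.1V作为步长

def step_amplitude(current_v: float, direction: str) -> float:
    """direction 为 "+" 或 "-"，返回步进后的幅值（保留一位小数）。"""
    delta = AMPLITUDE_STEP_V if direction == "+" else -AMPLITUDE_STEP_V
    new_v = round(current_v + delta, 1)
    # 超出医生可调范围时钳位，避免配置出越界的刺激参数
    return min(max(new_v, DOCTOR_MIN_V), DOCTOR_MAX_V)

print(step_amplitude(3.3, "+"))  # → 3.4
```

频率等其他刺激参数的步进调节，将步长替换为相应值（例如10Hz）即可类推。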
本申请中的摄像头例如是光学摄像头和/或红外摄像头。
本申请中的全身视频数据是指能够拍摄到患者全身的视频数据。类似的,全身图像是指患者全身出镜的图像。
本申请中的部位例如可以包括手部、臂部、腿部以及躯干部等。采集过程中,医生所关心的患者的预设状态例如可以是保持站立以及坐态等状态。预设动作例如可以是对指、握拳、举手、抬臂(使手臂与躯干相垂直)、走直线以及走曲线等动作类型。例如,对指这一动作类型可以包括左手对指、右手对指,握拳这一动作类型可以包括左手握拳以及右手握拳,举手这一动作类型可以包括举左手以及举右手,抬臂这一动作类型可以包括抬左臂、抬右臂,走直线可以包括走直线30步以及走直线10步等,走曲线可以包括走圆形、走半圆形以及走S形等。
本申请对从全身图像中截取得到部位图像的方式不作限定,例如可以利用图像分割模型对全身图像进行图像分割,得到每个部位的部位图像。又例如可以按照人体部位分布信息,从全身图像中截取每个部位对应的部位图像。
从全身图像中截取得到部位图像，利用多帧部位图像对相应的部位进行诊断，相比于全身图像来说，部位图像的图像尺寸较小，因此可以减少部位诊断模型的计算量。例如可以从患者的全身图像中截取得到左手图像，将多帧左手图像输入手部对应的部位诊断模型，得到患者左手的诊断结果，例如是：严重抖动或者重度帕金森症状。
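按照人体部位分布信息从全身图像中截取部位图像这一做法，可以用下面的示意代码说明。其中各部位的相对包围框坐标仅为便于说明而假设，实际可由图像分割模型或关键点检测得到：

```python
import numpy as np

# 假设的部位相对包围框：(top, bottom, left, right)，均为相对全身图像高宽的比例
PART_BOXES = {
    "左手": (0.40, 0.55, 0.00, 0.15),
    "右手": (0.40, 0.55, 0.85, 1.00),
}

def crop_part(frame: np.ndarray, part: str) -> np.ndarray:
    """从一帧全身图像中截取指定部位的部位图像，frame 形状为 (H, W, 3)。"""
    h, w = frame.shape[:2]
    t, b, l, r = PART_BOXES[part]
    return frame[int(t * h):int(b * h), int(l * w):int(r * w)]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # 模拟一帧全身图像
left_hand = crop_part(frame, "左手")
print(left_hand.shape)  # (72, 96, 3)
```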
本申请的步骤S104中，“将每个部位的多帧部位图像输入对应的部位诊断模型”中的“对应的部位诊断模型”是指与每个部位相对应的部位诊断模型。例如，将手部的多帧部位图像输入手部对应的部位诊断模型，将臂部的多帧部位图像输入臂部对应的部位诊断模型，将腿部的多帧部位图像输入腿部对应的部位诊断模型，将躯干部的多帧部位图像输入躯干部对应的部位诊断模型。
本申请对部位诊断模型的获取方式不作限定,在一些实施方式中,本申请可以训练得到部位诊断模型,在另一些实施方式中,本申请可以采用预先训练好的部位诊断模型。
本申请中,针对每个部位设置对应的部位诊断模型,例如,针对手部设置手部对应的部位诊断模型(又称手部诊断模型),针对臂部设置臂部对应的部位诊断模型(又称臂部诊断模型),针对腿部设置腿部对应的部位诊断模型(又称腿部诊断模型),针对躯干部设置躯干部对应的部位诊断模型(又称躯干部诊断模型)。
在一些可选的实施方式中,每个部位对应的部位诊断模型可以采用如下训练过程训练得到:
获取第二训练集,所述第二训练集中的每个训练数据包括用于训练的所述部位的多帧部位图像以及所述训练数据对应的所述部位的诊断结果的标注数据;
针对所述第二训练集中的每个训练数据,执行以下处理:
将所述训练数据中的多帧部位图像输入预设的第二深度学习模型,得到所述训练数据对应的所述部位的诊断结果的预测数据;
基于所述训练数据对应的所述部位的诊断结果的预测数据和标注数据,对所述第二深度学习模型的模型参数进行更新;
检测是否满足预设的训练结束条件;如果是,则将训练得到的第二深度学习模型作为所述部位诊断模型;如果否,则利用下一个所述训练数据继续训练所述第二深度学习模型。
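上述训练流程可以用如下极简的 Python 代码示意。为便于演示，这里用一个简单的逻辑回归模型代替第二深度学习模型、用随机特征向量代替多帧部位图像，训练结束条件取“遍历完一次训练集”，以上均为示意性假设，并非实际的部位诊断模型实现：

```python
import numpy as np

rng = np.random.default_rng(0)

# 模拟第二训练集：每个训练数据为（特征向量, 诊断结果标注(0=正常,1=异常)）
train_set = [(rng.normal(size=8), int(rng.integers(0, 2))) for _ in range(100)]

w = np.zeros(8)  # 模型参数（示意）
lr = 0.1         # 学习率

def predict(x):
    """前向计算：得到诊断结果的预测数据（取值0~1）。"""
    return 1.0 / (1.0 + np.exp(-w @ x))

for x, y in train_set:        # 针对每个训练数据执行以下处理
    p = predict(x)            # 得到预测数据
    w -= lr * (p - y) * x     # 基于预测数据与标注数据更新模型参数
    # 训练结束条件（示意）：遍历完一次训练集即结束

print(w.shape)  # (8,)
```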
部位诊断模型可以由大量的训练数据训练得到,能够针对不同的输入数据预测得到每个部位的诊断结果,适用范围广,智能化水平高。
通过设计、建立适量的神经元计算节点和多层运算层次结构,选择合适的输入层和输出层,就可以得到预设的第二深度学习模型,通过该预设的第二深度学习模型的学习和调优,建立起从输入到输出的函数关系,虽然不能100%找到输入与输出的函数关系,但是可以尽可能地逼近现实的关联关系,由此训练得到的部位诊断模型,可以实现获取部位诊断结果的功能,且计算结果准确性高、可靠性高。
在一些实施方式中，本申请可以采用上述训练过程训练得到部位诊断模型，在另一些实施方式中，本申请可以采用预先训练好的部位诊断模型。
本申请对标注数据的获取方式不作限定,例如可以采用人工标注的方式,也可以采用自动标注或者半自动标注的方式。
本申请对部位诊断模型的训练过程不作限定,其例如可以采用上述监督学习的训练方式,或者可以采用半监督学习的训练方式,或者可以采用无监督学习的训练方式。
本申请对预设的训练结束条件不作限定,其例如可以是训练次数达到预设次数(预设次数例如是1次、3次、10次、100次、1000次、10000次等),或者可以是第二训练集中的训练数据都完成一次或多次训练,或者可以是本次训练得到的总损失值不大于预设损失值。
本申请的步骤S105中的“叠加”例如可以采用置顶显示方式或者图层混合方式实现。图层混合方式例如可以采用以下混合模式中的任意一种:正常、溶解、变暗、正片叠底、颜色加深、线性加深、深色、变亮、滤色、颜色减淡、线性减淡(添加)、浅色、叠加、柔光、强光、亮光、线性光、点光、实色混合、差值、排除、减去、划分、色相、饱和度、颜色和明度。当采用置顶显示方式时,诊断结果将会浮于人物图像的上方(或者说顶层)。
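以其中的“正片叠底”混合模式为例，叠加可以用 numpy 做如下示意实现。像素取值范围（0~255）与取整方式为常见做法，仅作说明：

```python
import numpy as np

def multiply_blend(base: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """正片叠底：result = base * overlay / 255，结果总是不亮于任一图层。"""
    # 先升位到uint16再相乘，避免uint8乘法溢出
    out = base.astype(np.uint16) * overlay.astype(np.uint16) // 255
    return out.astype(np.uint8)

base = np.full((2, 2, 3), 200, dtype=np.uint8)     # 人物图像区域（示意）
overlay = np.full((2, 2, 3), 128, dtype=np.uint8)  # 诊断结果图层（示意）
print(multiply_blend(base, overlay)[0, 0, 0])  # 200*128//255 = 100
```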
本申请中,每个部位在人物图像中的对应区域例如可以包括所述人物图像中人物的对应部位的周围和/或人物的对应部位所处的区域。
参见图4,图4示出了本申请提供的一种部位获取过程的流程示意图。在一些可选的实施方式中,所述一个或多个部位的获取过程可以包括以下步骤。
步骤S201:利用所述程控设备显示多个动作类型,多个所述动作类型包括对指、握拳、举手、抬臂、走直线和走预设曲线中的多种。
步骤S202:利用所述程控设备接收针对其中一个所述动作类型的选择操作,响应于所述选择操作,获取被选择的动作类型对应的一个或多个部位。
由此,从医学角度来说,程控方有时候会需要患者做出预设动作类型的动作,对患者的动作过程进行人工观察或者利用机器视觉技术进行图像处理,从而人工或者智能化地对患者的部位做出诊断;这种情况下,程控方关心的是动作类型,因此,可以根据医学方面的使用偏好,预先建立动作类型和部位之间的对应关系,这样程控方就可以直接选择所关心的动作类型,由该动作类型自动对应得到一个或多个部位,而不需要程控方在脑内换算该动作类型对应哪些部位并人工手动选择一个或多个部位。
例如当一个动作类型对应多个部位时(例如对指这个动作类型一般对应左手和右手两个部位),可以对多个部位进行单独诊断,此时可以分次进行,每次对其中一个部位进行图像处理以得到诊断结果,由此通过多次诊断,分别得到多个部位的诊断结果,在上述多次诊断过程中,程控方只需要选择一次动作类型,就自动执行后续所有步骤并得到多个部位的诊断结果,提升了智能化程度,提高了程控方的使用体验,提升了远程程控效果。
本申请中,对指这一动作类型可以对应左手和右手两个部位,握拳这一动作类型可以对应左手和右手两个部位,举手这一动作类型可以对应左臂和右臂两个部位,抬臂这一动作类型可以对应左臂和右臂两个部位,走直线这一动作类型可以对应左腿和右腿两个部位,走预设曲线这一动作类型可以对应左腿和右腿两个部位。
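上述动作类型与部位之间的对应关系，可以预先配置为一张映射表并在接收到选择操作时查表得到，示意如下（表中内容即上文列举的对应关系）：

```python
# 预先建立的动作类型→部位对应关系（示意配置）
ACTION_TO_PARTS = {
    "对指": ["左手", "右手"],
    "握拳": ["左手", "右手"],
    "举手": ["左臂", "右臂"],
    "抬臂": ["左臂", "右臂"],
    "走直线": ["左腿", "右腿"],
    "走预设曲线": ["左腿", "右腿"],
}

def parts_for_action(action: str) -> list:
    """响应选择操作：由被选择的动作类型自动得到一个或多个部位。"""
    return ACTION_TO_PARTS[action]

print(parts_for_action("对指"))  # ['左手', '右手']
```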
在一些可选的实施方式中,每个部位在所述人物图像中的对应区域包括第一对应区域和第二对应区域,所述步骤S105可以包括以下步骤。
将每个部位的诊断结果叠加至每个部位在所述人物图像中的第一对应区域,将每个部位的诊断结果对应的图标叠加至每个部位在所述人物图像中的第二对应区域,以合成得到所述显示图像,每个部位的诊断结果对应的图标用于图形化地指示每个部位的诊断结果的严重程度。
由此,针对每个部位,一方面,将该部位的诊断结果显示于该部位在人物图像中的第一对应区域,另一方面,将该部位的诊断结果对应的图标显示于该部位在人物图像中的第二对应区域,也就是说,人物图像中会同时显示人物、每个部位的诊断结果、每个诊断结果对应的图标,并且图标采用图形化的方式指示诊断结果的严重程度(例如使用不同颜色的图标来区分不同的严重程度,或者使用不同图案的图标来区分不同的严重程度),相对于枯燥的文本信息,图标比文本格式的诊断结果更醒目、直观,观感上也活泼一些,这就从视觉上提升了诊断结果的呈现效果,有利于程控方集中注意力、更专注地为患者提供远程程控服务,因此能够提升程控方使用远程程控功能的积极性,同时使患者享受到更好的服务质量,进而从整体上提升了程控方和患者的使用体验。
在一些可选的实施方式中,每个部位在所述人物图像中的第一对应区域可以位于所述人物的对应部位的周围,每个部位在所述人物图像中的第二对应区域可以位于所述人物的对应部位所处的区域。
由此，将(每个部位在人物图像中的)第一对应区域设置于人物的对应部位的周围，使得文本格式的诊断结果不会显示在每个部位的上层(或者说顶层、前景)，避免人物的部位被诊断结果所遮挡(尤其当诊断结果对应的文本信息较长时)；将第二对应区域设置于人物的对应部位所处的区域，则可以将图形化的图标直接显示在每个部位所处的区域，从程控方的视角来看，每个部位上的图标直观地展示出该部位(病情)的严重程度，优化了程控设备的视觉呈现方式，提升了程控方的使用体验。
示例性的,可以将手部的诊断结果显示在人物图像中的手部的周围,将手部的诊断结果对应的图标显示在人物图像中的手部所处的区域。
继续参见图3,在一个可选的实施方式中,将左手的诊断结果【正常】显示在人物图像中的人物左手周围,将左手的诊断结果对应的图标【绿色对号】(对号例如是“√”)显示在人物图像中的人物左手的中心位置。
在另一可选的实施方式中,将右手的诊断结果【重度震颤】显示在人物图像中的人物右手周围,将右手的诊断结果对应的图标【红色错号】(错号例如是“×”)显示在人物图像中的人物右手的中心位置。
在又一可选的实施方式中,将左腿的诊断结果【轻度震颤】显示在人物图像中的人物左腿的膝关节周围,将左腿的诊断结果对应的图标【蓝色矩形】显示在人物图像中的人物左腿的膝关节的中心位置。
在又一可选的实施方式中,将右腿的诊断结果【中度震颤】显示在人物图像中的人物右腿的膝关节周围,将右腿的诊断结果对应的图标【紫色菱形】显示在人物图像中的人物右腿的膝关节的中心位置。
在另一些可选的实施方式中,每个部位在所述人物图像中的第一对应区域可以位于所述人物的对应部位所处的区域,每个部位在所述人物图像中的第二对应区域可以位于所述人物的对应部位的周围。
在又一些可选的实施方式中,每个部位在所述人物图像中的第一对应区域和第二对应区域可以都位于所述人物的对应部位的周围。
在又一些可选的实施方式中,每个部位在所述人物图像中的第一对应区域和第二对应区域可以都位于所述人物的对应部位所处的区域。
参见图5,图5示出了本申请提供的另一种控制方法的流程示意图。在一些可选的实施方式中,所述方法还可以包括以下步骤。
步骤S107:利用所述程控设备接收用于旋转所述人物图像的滑动操作，响应于所述滑动操作，获取所述人物图像对应的旋转图像。
步骤S108:将每个部位的诊断结果叠加至每个部位在所述旋转图像中的对应区域,以合成得到新的显示图像。
步骤S109:利用所述程控设备显示所述新的显示图像。
由此,允许程控方通过滑动方式旋转人物图像,并利用人物图像对应的旋转(后的)图像合成新的显示图像,也就是说,虽然该新的显示图像的观看视角发生了变化,但仍然能够在对应区域显示部位的诊断结果。
人体是三维立体的,二维平面图像无法呈现深度信息,也无法呈现人物图像的背面效果,对于空间想象能力较弱的程控方而言,采用旋转图像提供人物旋转后的观看视角,能够直观反映每个部位的空间特点,提升程控设备的视觉呈现效果。
在一些可选的实施方式中,所述步骤S107中,所述响应于所述滑动操作,获取所述人物图像对应的旋转图像的步骤,可以包括:响应于所述滑动操作,当检测到所述滑动操作是向左滑动或者向右滑动时,将所述人物图像对应的后视图作为所述人物图像对应的旋转图像。
在另一些可选的实施方式中,所述步骤S107中,所述响应于所述滑动操作,获取所述人物图像对应的旋转图像的步骤,可以包括:
响应于所述滑动操作,当检测到所述滑动操作是向左滑动时,将所述人物图像对应的右视图作为所述人物图像对应的旋转图像;
当检测到所述滑动操作是向右滑动时,将所述人物图像对应的左视图作为所述人物图像对应的旋转图像;
当检测到所述滑动操作是向上滑动或者向下滑动时,将所述人物图像对应的后视图作为所述人物图像对应的旋转图像。
参见图6,图6示出了本申请提供的一种获得旋转图像的流程示意图。在又一些可选的实施方式中,所述步骤S107中,所述响应于所述滑动操作,获取所述人物图像对应的旋转图像的步骤,可以包括以下步骤。
步骤S301:响应于所述滑动操作,获取所述滑动操作的滑动路径,所述滑动路径采用矢量表示。
步骤S302:基于所述滑动路径的路径方向,确定所述人物图像对应的旋转方向。
步骤S303:基于所述滑动路径的路径长度,确定所述人物图像对应的旋转角度。
步骤S304:基于所述旋转方向和所述旋转角度,获得所述旋转图像。
由此,采用矢量表示滑动路径,从而可以得到滑动路径的路径方向和路径长度,分别确定人物图像对应的旋转方向和旋转角度,这就使得程控方能够通过差异化的滑动操作,精细化地控制人物图像的旋转方向和旋转角度,据此得到所需求的旋转图像。
本申请中,配置路径方向和旋转方向之间的对应关系,由此可以针对每个路径方向获取该每个路径方向所对应的旋转方向。路径方向例如可以分类为向左、向右、向上、向下四个方向类型,也可以分类为六类、八类乃至更多数量的方向类型。当路径方向是向左时,所对应的旋转方向例如是以人物躯干为轴线、向左旋转。
另外,配置路径长度和旋转角度之间的对应关系,由此可以针对每个路径长度获取其所对应的旋转角度。旋转角度例如可以是30度、45度、60度、90度、135度、180度、270度等。
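综合上述两种对应关系，一种假设的“滑动路径矢量→旋转方向与旋转角度”的映射实现如下（其中每像素对应的角度换算系数为假设值）：

```python
import math

DEGREES_PER_PIXEL = 0.5  # 假设：每滑动1像素对应旋转0.5度

def rotation_from_path(dx: float, dy: float):
    """由滑动路径矢量 (dx, dy) 得到（旋转方向, 旋转角度）。屏幕坐标系下 y 轴向下。"""
    if abs(dx) >= abs(dy):
        # 水平分量占主导：按左右分类路径方向
        direction = "向左旋转" if dx < 0 else "向右旋转"
    else:
        # 垂直分量占主导：按上下分类路径方向
        direction = "向上旋转" if dy < 0 else "向下旋转"
    length = math.hypot(dx, dy)         # 路径长度
    angle = length * DEGREES_PER_PIXEL  # 路径长度→旋转角度
    return direction, angle

print(rotation_from_path(-120.0, 0.0))  # ('向左旋转', 60.0)
```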
参见图7,图7示出了本申请提供的另一种获得旋转图像的流程示意图。在一些可选的实施方式中,所述步骤S304可以包括以下步骤。
步骤S401:将所述人物图像输入用于将2D图像转换为3D图像的三维重建模型,得到所述人物图像对应的3D图像。
步骤S402:按照所述旋转方向和所述旋转角度对所述人物图像对应的3D图像进行旋转,以得到所述人物图像对应的旋转图像。
所述三维重建模型的训练过程包括:
获取第一训练集,所述第一训练集中的每个训练数据包括一个用于训练的2D图像及其对应的3D图像的标注数据;
针对所述第一训练集中的每个训练数据,执行以下处理:
将所述训练数据中的2D图像输入预设的第一深度学习模型,得到所述2D图像对应的3D图像的预测数据;
基于所述2D图像对应的3D图像的预测数据和标注数据,对所述第一深度学习模型的模型参数进行更新;
检测是否满足预设的训练结束条件；如果是，则将训练得到的第一深度学习模型作为所述三维重建模型；如果否，则利用下一个所述训练数据继续训练所述第一深度学习模型。
由此,三维重建模型可以由大量的训练数据训练得到,能够针对不同的输入数据预测得到相应的3D图像,适用范围广,智能化水平高。
通过设计、建立适量的神经元计算节点和多层运算层次结构，选择合适的输入层和输出层，就可以得到预设的第一深度学习模型，通过该预设的第一深度学习模型的学习和调优，建立起从输入到输出的函数关系，虽然不能100%找到输入与输出的函数关系，但是可以尽可能地逼近现实的关联关系，由此训练得到的三维重建模型，可以实现获取人物图像对应的3D图像的功能，且计算结果准确性高、可靠性高。
在得到3D图像后,按照旋转方向和旋转角度对3D图像进行旋转,从而得到程控方所需求的旋转图像。
如果程控方希望得到其他旋转方向或者旋转角度的旋转图像,该控制器可以利用程控设备接收新的滑动操作以获取新的旋转方向和旋转角度,再按照新的旋转方向和新的旋转角度对所述人物图像对应的3D图像进行旋转,从而得到新的旋转图像。
这样做可以实现：只要进行一次重建，所得到的3D图像即可重复利用，用以生成多种旋转方向和/或旋转角度的旋转图像，满足程控方多种视角的观看需求。
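重复利用同一份3D重建结果生成不同视角，可以示意为对缓存的3D点集按旋转方向和旋转角度施加旋转矩阵。此处以竖直方向为躯干轴线（取为 y 轴），属于示意性假设：

```python
import numpy as np

def rotate_about_y(points: np.ndarray, angle_deg: float) -> np.ndarray:
    """绕 y 轴（示意的躯干轴线）旋转3D点集，points 形状为 (N, 3)。"""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a),  0.0, np.sin(a)],
                    [0.0,        1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return points @ rot.T

cloud = np.array([[1.0, 0.0, 0.0]])    # 一次重建得到的3D点（示意）
left_90 = rotate_about_y(cloud, 90.0)  # 同一点云可按多种角度重复旋转
print(np.round(left_90, 6))  # [[ 0.  0. -1.]]
```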
在一些实施方式中,本申请可以采用上述训练过程训练得到三维重建模型,在另一些实施方式中,本申请可以采用预先训练好的三维重建模型,例如可以是专利《CN111243085B-图像重建网络模型的训练方法、装置和电子设备》中训练得到的图像重建网络模型,或者可以是专利《CN114399424A-模型训练方法及相关设备》中训练得到的三维重建网络模型。
本申请对标注数据的获取方式不作限定,例如可以采用人工标注的方式,也可以采用自动标注或者半自动标注的方式。
本申请对三维重建模型的训练过程不作限定,其例如可以采用上述监督学习的训练方式,或者可以采用半监督学习的训练方式,或者可以采用无监督学习的训练方式。
本申请对预设的训练结束条件不作限定,其例如可以是训练次数达到预设次数(预设次数例如是1次、3次、10次、100次、1000次、10000次等),或者可以是第一训练集中的训练数据都完成一次或多次训练,或者可以是本次训练得到的总损失值不大于预设损失值。
在一些可选的实施方式中,所述人物图像能够在人物彩色图像、人物骨骼图像、人物经脉图像和人物卡通图像之间切换。
由此,为程控方提供多种风格的人物图像,便于程控方选择符合自己偏好的风格,提升程控方的视觉体验。
切换人物图像的过程例如可以包括:利用所述程控设备在人物图像的周围显示与人物彩色图像、人物骨骼图像、人物经脉图像和人物卡通图像相对应的操作按钮,利用所述程控设备接收其中一个操作按钮的点击操作,响应于所述点击操作,利用所述程控设备显示所点击的操作按钮对应的人物图像。操作按钮可以显示为每一种人物图像的缩略图,或者可以采用文本格式显示为“彩色”、“骨骼”、“经脉”、“卡通”的说明文字。
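操作按钮与人物图像风格之间的切换逻辑可以示意为一个简单的状态维护过程（按钮名与风格名均为示例）：

```python
STYLES = ["彩色", "骨骼", "经脉", "卡通"]

class FigureView:
    """维护当前人物图像风格的简单状态（示意）。"""
    def __init__(self):
        self.style = "彩色"  # 默认显示人物彩色图像

    def on_click(self, button: str) -> str:
        if button in STYLES:  # 响应点击操作，切换到对应风格
            self.style = button
        return self.style     # 未知按钮则保持当前风格不变

view = FigureView()
print(view.on_click("骨骼"))  # 骨骼
```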
本申请还提供了一种控制器,其实现方式与上述方法实施方式中记载的实施方式、所达到的技术效果一致,部分内容不再赘述。
所述控制器被配置成实现程控设备和刺激器之间的远程程控功能,所述刺激器设置于患者的体内,所述控制器被配置成:
利用所述程控设备接收刺激配置操作,响应于所述刺激配置操作,配置所述刺激器的刺激参数,以使所述刺激器向所述患者的体内组织释放所述刺激参数对应的电刺激能量;
利用摄像头采集得到所述患者的全身视频数据,所述全身视频数据包括多帧全身图像;
从每帧全身图像中截取得到一个或多个部位的部位图像;
将每个部位的多帧部位图像输入对应的部位诊断模型,得到每个部位的诊断结果;
将每个部位的诊断结果叠加至所述每个部位在人物图像中的对应区域,以合成得到显示图像,所述人物图像中设置有一个人物;
利用所述程控设备显示所述显示图像。
在一些可选的实施方式中,所述一个或多个部位的获取过程包括:
利用所述程控设备显示多个动作类型,多个所述动作类型包括对指、握拳、举手、抬臂、走直线和走预设曲线中的多种;
利用所述程控设备接收针对其中一个所述动作类型的选择操作,响应于所述选择操作,获取被选择的动作类型对应的一个或多个部位。
在一些可选的实施方式中,所述每个部位在所述人物图像中的对应区域包括第一对应区域和第二对应区域,所述控制器被配置成采用如下方式合成得到所述显示图像:
将所述每个部位的诊断结果叠加至所述每个部位在所述人物图像中的第一对应区域,将所述每个部位的诊断结果对应的图标叠加至所述每个部位在所述人物图像中的第二对应区域,以合成得到所述显示图像,所述每个部位的诊断结果对应的图标用于图形化地指示所述每个部位的诊断结果的严重程度。
在一些可选的实施方式中,所述每个部位在所述人物图像中的第一对应区域位于所述人物的对应部位的周围,所述每个部位在所述人物图像中的第二对应区域位于所述人物的对应部位所处的区域。
在一些可选的实施方式中,所述控制器还被配置成:
利用所述程控设备接收用于旋转所述人物图像的滑动操作,响应于所述滑动操作,获取所述人物图像对应的旋转图像;
将所述每个部位的诊断结果叠加至所述每个部位在所述旋转图像中的对应区域,以合成得到新的显示图像;
利用所述程控设备显示所述新的显示图像。
在一些可选的实施方式中,所述控制器被配置成采用如下方式获取所述旋转图像:
响应于所述滑动操作,获取所述滑动操作的滑动路径,所述滑动路径采用矢量表示;
基于所述滑动路径的路径方向,确定所述人物图像对应的旋转方向;
基于所述滑动路径的路径长度,确定所述人物图像对应的旋转角度;
基于所述旋转方向和所述旋转角度,获得所述旋转图像。
在一些可选的实施方式中,所述控制器被配置成采用如下方式获得所述人物图像对应的旋转图像:
将所述人物图像输入用于将2D图像转换为3D图像的三维重建模型,得到所述人物图像对应的3D图像;
按照所述旋转方向和所述旋转角度对所述人物图像对应的3D图像进行旋转,以得到所述人物图像对应的旋转图像;
其中,所述三维重建模型的训练过程包括:
获取第一训练集，所述第一训练集中的每个训练数据包括一个用于训练的2D图像及其对应的3D图像的标注数据；
针对所述第一训练集中的每个训练数据,执行以下处理:
将所述训练数据中的2D图像输入预设的第一深度学习模型,得到所述2D图像对应的3D图像的预测数据;
基于所述2D图像对应的3D图像的预测数据和标注数据,对所述第一深度学习模型的模型参数进行更新;
检测是否满足预设的训练结束条件;如果是,则将训练得到的第一深度学习模型作为所述三维重建模型;如果否,则利用下一个所述训练数据继续训练所述第一深度学习模型。
在一些可选的实施方式中,所述人物图像能够在人物彩色图像、人物骨骼图像、人物经脉图像和人物卡通图像之间切换。
参见图8,图8示出了本申请提供的一种控制器200的结构框图。控制器200例如可以包括至少一个存储器210、至少一个处理器220以及连接不同平台系统的总线230。
存储器210可以包括易失性存储器形式的可读介质,例如随机存取存储器(Random Access Memory,RAM)211和/或高速缓存存储器212,还可以包括只读存储器(Read-Only Memory,ROM)213。
存储器210还存储有计算机程序,计算机程序可以被处理器220执行,使得处理器220实现上述任一项控制器的功能,其实现方式与上述方法实施方式中记载的实施方式、所达到的技术效果一致,部分内容不再赘述。
存储器210还可以包括具有至少一个程序模块215的实用工具214,这样的程序模块215包括:操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例的每一个或某种组合中可能包括网络环境的实现。
处理器220可以执行上述计算机程序,以及可以执行实用工具214。
处理器220可以采用一个或多个应用专用集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、可编程逻辑器件(Programmable Logic Device,PLD)、复杂可编程逻辑器件(Complex Programmable Logic Device,CPLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或其他电子元件。
总线230可以为表示几类总线结构的一种或多种，包括存储器总线或者存储器控制器、外围总线、图形加速端口、处理器或者使用多种总线结构的任意总线结构的局域总线。
控制器200也可以与一个或多个外部设备240例如键盘、指向设备、蓝牙设备等通信,还可与一个或者多个能够与该控制器200交互的设备通信,和/或与使得该控制器200能与一个或多个其它计算设备进行通信的任何设备(例如路由器、调制解调器等)通信。这种通信可以通过输入输出接口250进行。并且,控制器200还可以通过网络适配器260与一个或者多个网络(例如局域网(Local Area Network,LAN),广域网(Wide Area Network,WAN)和/或公共网络,例如因特网)通信。网络适配器260可以通过总线230与控制器200的其它模块通信。应当明白,尽管图中未示出,可以结合控制器200使用其它硬件和/或软件模块,包括:微代码、设备驱动器、冗余处理器、外部磁盘驱动阵列、磁盘阵列(Redundant Arrays of Independent Disks,RAID)系统、磁带驱动器以及数据备份存储平台等。
本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述任一项控制器200的功能或者实现上述控制方法的步骤,其实现方式与上述控制器的实施方式中记载的实施方式、所达到的技术效果一致,部分内容不再赘述。
参见图9,图9示出了本申请提供的一种用于实现控制方法的程序产品的结构示意图。程序产品可以采用便携式紧凑盘只读存储器(Compact Disc Read Only Memory,CD-ROM)并包括程序代码,并可以在终端设备,例如个人电脑上运行。然而,本申请的程序产品不限于此,在本申请中,可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。程序产品可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以为电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质包括:具有一个或多个导线的电连接、便携式盘、硬盘、RAM、ROM、可擦式可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM)、闪存、光纤、便携式CD-ROM、光存储器件、磁存储器件、或者上述的任意合适的组合。存储介质可以是非暂态(non-transitory)存储介质。
计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了可读程序代码。这种传播的数据信号可以采用多种形式，包括电磁信号、光信号或上述的任意合适的组合。可读信号介质还可以是任何可读介质，该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。可读介质上包含的程序代码可以用任何适当的介质传输，包括无线、有线、光缆、射频(Radio Frequency,RF)等，或者上述的任意合适的组合。可以以一种或多种程序设计语言的任意组合来编写用于执行本申请操作的程序代码，程序设计语言包括面向对象的程序设计语言诸如Java、C++等，还包括常规的过程式程序设计语言诸如C语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。在涉及远程计算设备的情形中，远程计算设备可以通过任意种类的网络，包括LAN或WAN，连接到用户计算设备，或者，可以连接到外部计算设备(例如利用因特网服务提供商来通过因特网连接)。

Claims (10)

  1. 一种控制器,被配置成实现程控设备和刺激器之间的远程程控功能,所述刺激器设置于患者的体内,所述控制器被配置成:
    利用所述程控设备接收刺激配置操作,响应于所述刺激配置操作,配置所述刺激器的刺激参数,以使所述刺激器向所述患者的体内组织释放所述刺激参数对应的电刺激能量;
    利用摄像头采集得到所述患者的全身视频数据,其中,所述全身视频数据包括多帧全身图像;
    从每帧全身图像中截取得到一个或多个部位的部位图像;
    将每个部位的多帧部位图像输入每个部位对应的部位诊断模型,得到每个部位的诊断结果;
    将每个部位的诊断结果叠加至每个部位在人物图像中的对应区域,以合成得到显示图像,其中,所述人物图像中设置有一个人物;
    利用所述程控设备显示所述显示图像。
  2. 根据权利要求1所述的控制器，其中，所述控制器被配置成采用如下方式获取所述一个或多个部位：
    利用所述程控设备显示多个动作类型,所述多个动作类型包括对指、握拳、举手、抬臂、走直线和走预设曲线中的多种;
    利用所述程控设备接收针对其中一个动作类型的选择操作,响应于所述选择操作,获取被选择的动作类型对应的一个或多个部位。
  3. 根据权利要求1所述的控制器,其中,每个部位在所述人物图像中的对应区域包括第一对应区域和第二对应区域,所述控制器被配置成采用如下方式合成得到所述显示图像:
    将每个部位的诊断结果叠加至每个部位在所述人物图像中的第一对应区域,将每个部位的诊断结果对应的图标叠加至每个部位在所述人物图像中的第二对应区域,以合成得到所述显示图像,每个部位的诊断结果对应的图标用于图形化地指示每个部位的诊断结果的严重程度。
  4. 根据权利要求3所述的控制器,其中,每个部位在所述人物图像中的第一对应区域位于人物的对应部位的周围,每个部位在所述人物图像中的第二对应区域位于所述人物的对应部位所处的区域。
  5. 根据权利要求1所述的控制器,其中,所述控制器还被配置成:
    利用所述程控设备接收用于旋转所述人物图像的滑动操作，响应于所述滑动操作，获取所述人物图像对应的旋转图像；
    将每个部位的诊断结果叠加至每个部位在所述旋转图像中的对应区域,以合成得到新的显示图像;
    利用所述程控设备显示所述新的显示图像。
  6. 根据权利要求5所述的控制器,其中,所述控制器被配置成采用如下方式获取所述旋转图像:
    响应于所述滑动操作,获取所述滑动操作的滑动路径,其中,所述滑动路径采用矢量表示;
    基于所述滑动路径的路径方向,确定所述人物图像对应的旋转方向;
    基于所述滑动路径的路径长度,确定所述人物图像对应的旋转角度;
    基于所述旋转方向和所述旋转角度,获得所述旋转图像。
  7. 根据权利要求6所述的控制器,其中,所述控制器被配置成采用如下方式获得所述人物图像对应的旋转图像:
    将所述人物图像输入用于将二维2D图像转换为三维3D图像的三维重建模型,得到所述人物图像对应的3D图像;
    按照所述旋转方向和所述旋转角度对所述人物图像对应的3D图像进行旋转,以得到所述人物图像对应的旋转图像;
    其中,所述三维重建模型的训练过程包括:
    获取第一训练集,所述第一训练集中的每个训练数据包括一个用于训练的2D图像及所述2D图像对应的3D图像的标注数据;
    针对所述第一训练集中的每个训练数据,执行以下处理:
    将所述训练数据中的2D图像输入预设的第一深度学习模型,得到所述2D图像对应的3D图像的预测数据;
    基于所述2D图像对应的3D图像的预测数据和标注数据,对所述第一深度学习模型的模型参数进行更新;
    检测是否满足预设的训练结束条件;在检测到满足所述预设的训练结束条件的情况下,将训练得到的第一深度学习模型作为所述三维重建模型;在检测到不满足所述预设的训练结束条件的情况下,利用下一个训练数据继续训练所述第一深度学习模型。
  8. 根据权利要求1所述的控制器，其中，所述控制器被配置成使所述人物图像能够在人物彩色图像、人物骨骼图像、人物经脉图像和人物卡通图像之间进行切换。
  9. 一种植入式神经刺激系统,包括:
    程控设备,所述程控设备设置于患者的体外,所述程控设备被配置成提供交互功能和显示功能;
    刺激器,所述刺激器设置于所述患者的体内,所述刺激器被配置成向所述患者的体内组织释放电刺激能量;
    如权利要求1-8中任一项所述的控制器。
  10. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1-8中任一项所述控制器的功能。
PCT/CN2023/089492 2022-05-26 2023-04-20 控制器、植入式神经刺激系统及计算机可读存储介质 WO2023226636A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210585177.7 2022-05-26
CN202210585177.7A CN114984450A (zh) 2022-05-26 2022-05-26 控制器、植入式神经刺激系统及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023226636A1 true WO2023226636A1 (zh) 2023-11-30

Family

ID=83028410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/089492 WO2023226636A1 (zh) 2022-05-26 2023-04-20 控制器、植入式神经刺激系统及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN114984450A (zh)
WO (1) WO2023226636A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114984450A (zh) * 2022-05-26 2022-09-02 苏州景昱医疗器械有限公司 控制器、植入式神经刺激系统及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060178709A1 (en) * 2004-12-21 2006-08-10 Foster Allison M Methods and systems for treating a medical condition by promoting neural remodeling within the brain
JP2017202310A (ja) * 2016-05-09 2017-11-16 東芝メディカルシステムズ株式会社 医用画像撮像装置及び方法
WO2018126779A1 (zh) * 2017-01-03 2018-07-12 江苏德长医疗科技有限公司 无线可穿戴功能性电刺激系统辅助下的神经网络重建训练方法
CN114984450A (zh) * 2022-05-26 2022-09-02 苏州景昱医疗器械有限公司 控制器、植入式神经刺激系统及计算机可读存储介质

Also Published As

Publication number Publication date
CN114984450A (zh) 2022-09-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23810710

Country of ref document: EP

Kind code of ref document: A1